Sample records for single sound source

  1. How the owl tracks its prey – II

    PubMed Central

    Takahashi, Terry T.

    2010-01-01

    Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819

  2. Propagation characteristics of audible noise generated by single corona source under positive DC voltage

    NASA Astrophysics Data System (ADS)

    Li, Xuebao; Cui, Xiang; Lu, Tiebing; Wang, Donglai

    2017-10-01

    The directivity and lateral profile of corona-generated audible noise (AN) from a single corona source are measured in experiments carried out in a semi-anechoic laboratory. The experimental results show that the waveform of corona-generated AN consists of a series of random sound pressure pulses whose amplitudes decrease with increasing measurement distance. A single corona source can be regarded as a non-directional AN source, and the A-weighted SPL (sound pressure level) decreases by 6 dB(A) for each doubling of the measurement distance. Qualitative explanations for the rationality of treating the single corona source as a point source are then given on the basis of Ingard's theory of sound generation in corona discharge. Furthermore, ground reflection and air attenuation are taken into consideration to reconstruct the propagation features of AN from the single corona source. The calculated results agree well with the measurements, which validates the propagation model. Finally, the influence of the ground reflection on the SPL is presented.
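
    The 6 dB(A) drop per doubling of distance quoted above is the standard free-field point-source (spherical-spreading) relation, SPL(r) = SPL(r0) - 20*log10(r/r0). A minimal sketch of that relation follows; the reference level and distances are made up for illustration and are not values from the paper.

```python
import numpy as np

def spl_at_distance(spl_ref_db, r_ref_m, r_m):
    """Free-field point-source (spherical-spreading) level at distance r_m,
    given a reference level measured at r_ref_m."""
    return spl_ref_db - 20.0 * np.log10(np.asarray(r_m) / r_ref_m)

# Illustrative numbers only: 70 dB(A) measured 2 m from the corona source.
distances = np.array([2.0, 4.0, 8.0, 16.0])
for r, L in zip(distances, spl_at_distance(70.0, 2.0, distances)):
    print(f"r = {r:4.1f} m  ->  {L:5.1f} dB(A)")  # drops 6 dB(A) per doubling
```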

  3. Underwater auditory localization by a swimming harbor seal (Phoca vitulina).

    PubMed

    Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido

    2006-09-01

    The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half-circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound the seal localized the sound sources with a mean deviation of 2.8 degrees, and in trials with the single sound with a mean deviation of 4.5 degrees. In a second experiment, minimum audible angles of the stationary animal were found to be 9.8 degrees in front of and 9.7 degrees behind the seal's head.

  4. A Tool for Low Noise Procedures Design and Community Noise Impact Assessment: The Rotorcraft Noise Model (RNM)

    NASA Technical Reports Server (NTRS)

    Conner, David A.; Page, Juliet A.

    2002-01-01

    To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model, and the output results. Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.

  5. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

    To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180° arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360° pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root-mean-square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
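
    Root-mean-square (RMS) error is the usual summary statistic for loudspeaker-identification localization tasks like the one above. A minimal sketch, with made-up target/response azimuths rather than the study's data:

```python
import numpy as np

def rms_error_deg(target_az, response_az):
    """RMS sound source localization error in degrees across trials."""
    t = np.asarray(target_az, dtype=float)
    r = np.asarray(response_az, dtype=float)
    return np.sqrt(np.mean((r - t) ** 2))

# Hypothetical trials: loudspeaker azimuths vs. listener responses (degrees).
targets   = [-90, -60, -30, 0, 30, 60, 90]
responses = [-75, -60, -15, 0, 45, 60, 75]
print(f"RMS error = {rms_error_deg(targets, responses):.1f} deg")
```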

  6. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  7. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

    The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 µs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 µs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays > 10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.

  8. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  9. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than to the later-arriving sound (lag). In this study, absolute sound localization was studied for single-source stimuli and for dual-source lead-lag stimuli in 4- to 5-year-old children and adults. Lead-lag delays ranged from 5 to 100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of the perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than for adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  10. The effect of spatial distribution on the annoyance caused by simultaneous sounds

    NASA Astrophysics Data System (ADS)

    Vos, Joos; Bronkhorst, Adelbert W.; Fedtke, Thomas

    2004-05-01

    A considerable part of the population is exposed to simultaneous and/or successive environmental sounds from different sources. In many cases, these sources also differ with respect to their locations. In a laboratory study, it was investigated whether the annoyance caused by multiple sounds is affected by the spatial distribution of the sources. There were four independent variables: (1) sound category (stationary or moving), (2) sound type (stationary: lawn-mower, leaf-blower, and chain saw; moving: road traffic, railway, and motorbike), (3) spatial location (left, right, and combinations), and (4) A-weighted sound exposure level (ASEL of single sources equal to 50, 60, or 70 dB). In addition to the individual sounds in isolation, various combinations of two or three different sources within each sound category and sound level were presented for rating. The annoyance was mainly determined by sound level and sound source type. In most cases there were neither significant main effects of spatial distribution nor significant interaction effects between spatial distribution and the other variables. It was concluded that for rating the spatially distributed sounds investigated, the noise dose can simply be determined by a summation of the levels for the left and right channels. [Work supported by CEU.]
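
    The "summation of the levels for the left and right channels" referred to in the conclusion is the usual energetic (power) summation of levels in decibels. A minimal sketch with purely illustrative values:

```python
import numpy as np

def combine_levels_db(levels_db):
    """Energetic summation of sound (exposure) levels:
    L_total = 10*log10(sum(10^(L_i/10)))."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (levels_db / 10.0)))

print(round(combine_levels_db([60.0, 60.0]), 1))  # two equal sources: +3 dB -> 63.0
print(round(combine_levels_db([60.0, 70.0]), 1))  # dominated by the louder source -> 70.4
```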

  11. Echolocation versus echo suppression in humans

    PubMed Central

    Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz

    2013-01-01

    Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the 'Listening' experiment, sighted subjects discriminated between positions of a single sound source, or of the leading or the lagging of two sources, respectively. In the 'Echolocation' experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, or of the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing both to the direct sound of the vocalization that precedes the echoes and to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105

  12. Understanding auditory distance estimation by humpback whales: a computational approach.

    PubMed

    Mercado, E; Green, S R; Schneider, J N

    2008-02-01

    Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
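
    As a rough illustration of the classification approach described above (not the authors' code, data, or auditory model), the sketch below trains a single-layer and a multi-layer perceptron to classify synthetic "recordings" by propagation distance from their band-level spectra; scikit-learn and the simple frequency-dependent attenuation model are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for frequency-band levels of sounds received at 3 distances.
# Assumption: longer ranges attenuate the higher bands more strongly.
n_bands, n_per_class = 16, 200
distances_km = [1, 5, 10]
X, y = [], []
for label, d in enumerate(distances_km):
    base = rng.normal(0.0, 1.0, size=(n_per_class, n_bands))
    tilt = -0.3 * d * np.linspace(0.0, 1.0, n_bands)  # high-frequency roll-off
    X.append(base + tilt)
    y.append(np.full(n_per_class, label))
X, y = np.vstack(X), np.concatenate(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (Perceptron(max_iter=1000),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)):
    clf.fit(Xtr, ytr)
    print(type(clf).__name__, "accuracy:", round(clf.score(Xte, yte), 2))
```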

  13. Active control of sound radiation from a vibrating rectangular panel by sound sources and vibration inputs - An experimental comparison

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.; Hansen, C. H.; Snyder, S. D.

    1991-01-01

    Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method, a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force tends not to significantly modify the panel's total response amplitude but rather to restructure the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.

  14. A stepped-plate bi-frequency source for generating a difference frequency sound with a parametric array.

    PubMed

    Je, Yub; Lee, Haksue; Park, Jongkyu; Moon, Wonkyu

    2010-06-01

    An ultrasonic radiator is developed to generate a difference frequency sound from two frequencies of ultrasound in air with a parametric array. A design method is proposed for an ultrasonic radiator capable of generating highly directive, high-amplitude ultrasonic sound beams at two different frequencies in air based on a modification of the stepped-plate ultrasonic radiator. The stepped-plate ultrasonic radiator was introduced by Gallego-Juarez et al. [Ultrasonics 16, 267-271 (1978)] in their previous study and can effectively generate highly directive, large-amplitude ultrasonic sounds in air, but only at a single frequency. Because parametric array sources must be able to generate sounds at more than one frequency, a design modification is crucial to the application of a stepped-plate ultrasonic radiator as a parametric array source in air. The aforementioned method was employed to design a parametric radiator for use in air. A prototype of this design was constructed and tested to determine whether it could successfully generate a difference frequency sound with a parametric array. The results confirmed that the proposed single small-area transducer was suitable as a parametric radiator in air.

  15. Forced sound transmission through a finite-sized single leaf panel subject to a point source excitation.

    PubMed

    Wang, Chong

    2018-03-01

    In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitted through a finite-sized panel. The focus is the forced sound transmission performance that predominates in the frequency range below the coincidence frequency. With the point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite-sized panel does not necessarily converge to that of an infinite panel, especially when the source is far from the panel. For practical applications, the normal-incidence sound transmission loss expression for plane incident waves can be used if the distance between the source and panel d and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse-field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.
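
    A sketch of how the reported d/S guidance might be applied, paired with the textbook normal-incidence mass law for the plane-wave case; the mass-law expression and the numbers are not taken from this paper, and the selection logic simply follows the quoted thresholds.

```python
import numpy as np

RHO_C = 415.0  # characteristic impedance of air, rayl (approximate)

def tl_normal_incidence(f_hz, surface_mass_kg_m2):
    """Normal-incidence mass-law transmission loss of a limp panel (textbook formula)."""
    return 10.0 * np.log10(1.0 + (np.pi * f_hz * surface_mass_kg_m2 / RHO_C) ** 2)

def suggested_tl_model(d_m, s_m2):
    """Choose a TL expression following the d/S guidance quoted above (illustrative only)."""
    ratio = d_m / s_m2
    if ratio > 0.5:
        return "plane-wave normal-incidence TL expression"
    if abs(ratio - 0.1) < 0.05:
        return "diffuse-field TL expression"
    return "full spherical-wave derivation (or the empirical d/S = 0 expression)"

print(suggested_tl_model(d_m=2.0, s_m2=2.0))              # plane-wave expression applies
print(round(tl_normal_incidence(1000.0, 10.0), 1), "dB")  # 10 kg/m^2 panel at 1 kHz
```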

  16. Electrophysiological correlates of cocktail-party listening.

    PubMed

    Lewald, Jörg; Getzmann, Stephan

    2015-10-01

    Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Dimensional feature weighting utilizing multiple kernel learning for single-channel talker location discrimination using the acoustic transfer function.

    PubMed

    Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo

    2013-02-01

    This paper presents a method for discriminating the location of a sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. This paper then proposes a feature-weighting method for the cepstral parameters using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by a support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.
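
    Full multiple kernel learning is not part of scikit-learn, but the idea sketched in the abstract (one base kernel per cepstral dimension, combined with weights) can be imitated with a precomputed weighted sum of per-dimension RBF kernels fed to an SVM. Everything below (the feature model, the fixed weights, the data) is an illustrative assumption, not the authors' method or code:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 12-dimensional "cepstral" features for two talker positions.
n, d = 120, 12
X = rng.normal(size=(n, d))
y = (X[:, 2] + 0.5 * X[:, 7] > 0).astype(int)  # only dimensions 2 and 7 are informative

def per_dim_rbf(Xa, Xb, dim, gamma=1.0):
    """RBF base kernel computed on a single feature dimension."""
    diff = Xa[:, [dim]] - Xb[:, [dim]].T
    return np.exp(-gamma * diff ** 2)

# Fixed convex combination of base kernels; MKL would learn these weights instead.
weights = np.full(d, 1.0 / d)
K = sum(w * per_dim_rbf(X, X, j) for j, w in enumerate(weights))

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", round(clf.score(K, y), 2))
```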

  18. Modelling sound propagation in the Southern Ocean to estimate the acoustic impact of seismic research surveys on marine mammals

    NASA Astrophysics Data System (ADS)

    Breitzke, Monika; Bohlen, Thomas

    2010-05-01

    Modelling sound propagation in the ocean is an essential tool to assess the potential risk of air-gun shots to marine mammals. Based on a 2.5-D finite-difference code, a full waveform modelling approach is presented, which determines both sound exposure levels of single shots and cumulative sound exposure levels of multiple shots fired along a seismic line. Band-limited point source approximations of compact air-gun clusters deployed by R/V Polarstern in polar regions are used as sound sources. Marine mammals are simulated as static receivers. Applications to deep and shallow water models, including constant and depth-dependent sound velocity profiles of the Southern Ocean, show dipole-like directivities in the case of single shots and tubular cumulative sound exposure level fields beneath the seismic line in the case of multiple shots. Compared to a semi-infinite model, an incorporation of seafloor reflections enhances the seismically induced noise levels close to the sea surface. Refraction due to sound velocity gradients and sound channelling in near-surface ducts are evident, but affect only low to moderate levels. Hence, exposure zone radii derived for different hearing thresholds are almost independent of the sound velocity structure. With decreasing thresholds, radii increase according to a spherical 20 log10 r law in the case of single shots and according to a cylindrical 10 log10 r law in the case of multiple shots. A doubling of the shot interval reduces the cumulative sound exposure levels by 3 dB and halves the radii. The ocean bottom properties only slightly affect the radii in shallow waters, if the normal incidence reflection coefficient exceeds 0.2.
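
    The 20 log10 r and 10 log10 r laws quoted above give a closed-form exposure-zone radius for any threshold: under spherical spreading the radius grows tenfold for every 20 dB the threshold is lowered, under cylindrical spreading tenfold for every 10 dB. A minimal sketch with illustrative source levels (not values from the study):

```python
def zone_radius_m(source_level_db, threshold_db, spreading="spherical", r_ref_m=1.0):
    """Radius at which the received level falls to the threshold, assuming
    20*log10(r) (spherical) or 10*log10(r) (cylindrical) spreading loss."""
    n = 20.0 if spreading == "spherical" else 10.0
    return r_ref_m * 10.0 ** ((source_level_db - threshold_db) / n)

# Illustrative: single shot (spherical) vs. cumulative multi-shot field (cylindrical).
for thr in (180.0, 170.0, 160.0):
    r_sph = zone_radius_m(230.0, thr, "spherical")    # made-up single-shot level
    r_cyl = zone_radius_m(200.0, thr, "cylindrical")  # made-up cumulative level
    print(f"threshold {thr:.0f} dB: spherical {r_sph:8.0f} m, cylindrical {r_cyl:8.0f} m")
```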

  19. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what the sound is) and localization (where the sound is). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances, and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  20. Statistics of natural reverberation enable perceptual separation of sound and space

    PubMed Central

    Traer, James; McDermott, Josh H.

    2016-01-01

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
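
    The regularity reported above (exponential decay at band-dependent rates, longest at mid frequencies) is straightforward to imitate when synthesizing reverberation. The sketch below is not the authors' IR model; it simply shapes band-filtered noise with exponential envelopes whose decay times peak at mid frequencies, as an illustration of the constraint they describe.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000                       # sample rate, Hz
t = np.arange(int(fs * 1.0)) / fs
rng = np.random.default_rng(1)

# Octave-ish bands with illustrative decay times: longest at mid frequencies,
# shorter at the low and high ends (the qualitative pattern reported above).
bands = [(125, 250), (250, 500), (500, 1000), (1000, 2000), (2000, 4000), (4000, 7000)]
rt60s = [0.4, 0.7, 1.0, 1.0, 0.6, 0.3]  # seconds, made up for illustration

ir = np.zeros_like(t)
for (lo, hi), rt60 in zip(bands, rt60s):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band_noise = sosfilt(sos, rng.standard_normal(t.size))
    ir += band_noise * 10.0 ** (-3.0 * t / rt60)  # -60 dB after rt60 seconds
ir /= np.max(np.abs(ir))  # normalized IR; convolve with dry audio to add the room
```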

  1. Statistics of natural reverberation enable perceptual separation of sound and space.

    PubMed

    Traer, James; McDermott, Josh H

    2016-11-29

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.

  2. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    PubMed Central

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-01-01

    Traditional sound sources and sound detectors are usually independent and discrete in the human hearing range. To minimize the device size and integrate it with wearable electronics, there is an urgent requirement of realizing the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist for the disabled, because the simple throat vibrations such as hum, cough and scream with different intensity or frequency from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantage of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open practical applications in voice control, wearable electronics and many other areas. PMID:28232739

  3. An intelligent artificial throat with sound-sensing ability based on laser induced graphene.

    PubMed

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-24

    Traditional sound sources and sound detectors are usually independent and discrete in the human hearing range. To minimize the device size and integrate it with wearable electronics, there is an urgent requirement of realizing the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist for the disabled, because the simple throat vibrations such as hum, cough and scream with different intensity or frequency from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantage of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open practical applications in voice control, wearable electronics and many other areas.

  4. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    NASA Astrophysics Data System (ADS)

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-01

    Traditional sound sources and sound detectors are usually independent and discrete in the human hearing range. To minimize the device size and integrate it with wearable electronics, there is an urgent requirement of realizing the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist for the disabled, because the simple throat vibrations such as hum, cough and scream with different intensity or frequency from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantage of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open practical applications in voice control, wearable electronics and many other areas.

  5. Seismic and Biological Sources of Ambient Ocean Sound

    NASA Astrophysics Data System (ADS)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional 'image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single-hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional 'map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.

  6. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing

    PubMed Central

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088

  7. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing.

    PubMed

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.

  8. Reduced order modeling of head related transfer functions for virtual acoustic displays

    NASA Astrophysics Data System (ADS)

    Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley

    2003-04-01

    The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's Singular Value Decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from -90° to +90°, in 10° increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
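
    A generic sketch of Kung-style SVD realization from a single impulse response is given below. It is the standard textbook algorithm (Hankel matrix of Markov parameters, SVD, truncation), not the authors' multi-azimuth HRIR model, and the test impulse response is synthetic.

```python
import numpy as np

def kung_realization(h, order):
    """Discrete-time state-space (A, B, C, D) of the given order from a scalar
    impulse response h, via Kung's SVD/Hankel realization method."""
    h = np.asarray(h, dtype=float)
    n = (len(h) - 1) // 2
    H = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(H)
    Ur, sr, Vr = U[:, :order], np.sqrt(s[:order]), Vt[:order, :]
    Obs = Ur * sr                           # observability matrix
    Ctrl = (Vr.T * sr).T                    # controllability matrix
    A = np.linalg.pinv(Obs[:-1]) @ Obs[1:]  # shift-invariance of the observability matrix
    B = Ctrl[:, :1]
    C = Obs[:1, :]
    return A, B, C, h[0]

# Self-check on a synthetic decaying impulse response (not an HRIR).
h_true = 0.9 ** np.arange(64)
A, B, C, D = kung_realization(h_true, order=1)
h_model = [D] + [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(63)]
print(np.allclose(h_true, h_model, atol=1e-6))
```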

  9. Aircraft laser sensing of sound velocity in water - Brillouin scattering

    NASA Technical Reports Server (NTRS)

    Hickman, G. D.; Harding, John M.; Carnes, Michael; Pressman, AL; Kattawar, George W.; Fry, Edward S.

    1991-01-01

    A real-time data source for sound speed in the upper 100 m has been proposed for exploratory development. This data source is planned to be generated via a ship- or aircraft-mounted optical pulsed laser using the spontaneous Brillouin scattering technique. The system should be capable (from a single 10 ns 500 mJ pulse) of yielding range resolved sound speed profiles in water to depths of 75-100 m to an accuracy of 1 m/s. The 100 m profiles will provide the capability of rapidly monitoring the upper-ocean vertical structure. They will also provide an extensive, subsurface-data source for existing real-time, operational ocean nowcast/forecast systems.

  10. The Physiological Basis of Chinese Höömii Generation.

    PubMed

    Li, Gelin; Hou, Qian

    2017-01-01

    The study aimed to investigate the physiological basis of vibration mode of sound source of a variety of Mongolian höömii forms of singing in China. The participant is a Mongolian höömii performing artist who was recommended by the Chinese Medical Association of Art. He used three types of höömii, namely vibration höömii, whistle höömii, and overtone höömii, which were compared with general comfortable pronunciation of /i:/ as control. Phonation was observed during /i:/. A laryngostroboscope (Storz) was used to determine vibration source-mucosal wave in the throat. For vibration höömii, bilateral ventricular folds approximated to the midline and made contact at the midline during pronunciation. Ventricular and vocal folds oscillated together as a single unit to form a composite vibration (double oscillator) sound source. For whistle höömii, ventricular folds approximated to the midline to cover part of vocal folds, but did not contact each other. It did not produce mucosal wave. The vocal folds produced mucosal wave to form a single vibration sound source. For overtone höömii, the anterior two-thirds of ventricular folds touched each other during pronunciation. The last one-third produced the mucosal wave. The vocal folds produced mucosal wave at the same time, which was a composite vibration (double oscillator) sound source mode. The Höömii form of singing, including mixed voices and multivoice, was related to the presence of dual vibration sound sources. Its high overtone form of singing (whistle höömii) was related to stenosis at the resonance chambers' initiation site (ventricular folds level). Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  11. Reduction of interior sound fields in flexible cylinders by active vibration control

    NASA Technical Reports Server (NTRS)

    Jones, J. D.; Fuller, C. R.

    1988-01-01

    The mechanisms of interior sound reduction through active control of a thin flexible shell's vibrational response are presently evaluated in view of an analytical model. The noise source is a single exterior acoustic monopole. The active control model is evaluated for harmonic excitation; the results obtained indicate spatially-averaged noise reductions in excess of 20 dB over the source plane, for acoustic resonant conditions inside the cavity.

  12. Psychoacoustical evaluation of natural and urban sounds in soundscapes.

    PubMed

    Yang, Ming; Kang, Jian

    2013-07-01

    Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.

  13. Sound field reproduction as an equivalent acoustical scattering problem.

    PubMed

    Fazi, Filippo Maria; Nelson, Philip A

    2013-11-01

    Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.

  14. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants

    PubMed Central

    Zheng, Yi; Godar, Shelly P.; Litovsky, Ruth Y.

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs) sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users. PMID:26288142

  15. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    PubMed

    Zheng, Yi; Godar, Shelly P; Litovsky, Ruth Y

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs) sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  16. Speech Understanding and Sound Source Localization by Cochlear Implant Listeners Using a Pinna-Effect Imitating Microphone and an Adaptive Beamformer.

    PubMed

    Dorman, Michael F; Natale, Sarah; Loiselle, Louise

    2018-03-01

    Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet. American Academy of Audiology

  17. Design of laser monitoring and sound localization system

    NASA Astrophysics Data System (ADS)

    Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang

    2013-08-01

    In this paper, a novel design for a laser monitoring and sound localization system is proposed. It utilizes a laser to monitor and locate the position of indoor conversation. At present, most laser monitors in China, whether used in laboratories or in instruments, use a photodiode or phototransistor as the detector. At the laser receivers of those facilities, the light beams are adjusted so that only part of the photodiode or phototransistor window receives the beam. The reflection deviates from its original path because of the vibration of the monitored window, which shifts the imaging spot on the photodiode or phototransistor. However, this method is limited, not only because it admits considerable stray light into the receiver but also because only a single photocurrent output can be obtained. Therefore, a new method based on a quadrant detector is proposed. It utilizes the relation of the optical integrals among the quadrants to locate the position of the imaging spot. This method can eliminate background disturbance and acquire two-dimensional data on the spot's vibration. The principle of the whole system is as follows. Collimated laser beams are reflected from a window vibrating in response to the sound source, so the reflected beams are modulated by the vibration source. These optical signals are collected by quadrant detectors and then processed by photoelectric converters and the corresponding circuits. Speech signals are eventually reconstructed. In addition, sound source localization is implemented by detecting three different reflected light beams simultaneously. Indoor mathematical models based on the principle of time difference of arrival (TDOA) are established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system is able to monitor indoor sound sources beyond 15 meters with high-quality speech reconstruction and to locate the sound source position accurately.
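
    The TDOA-based localization step described above can be illustrated with a small numerical sketch. The sensing-point coordinates, sound speed, and solver below are assumptions chosen for illustration (the paper's indoor model is not specified in detail); the nonlinear TDOA equations are simply solved by least squares:

        import numpy as np
        from scipy.optimize import least_squares

        C = 343.0                                   # speed of sound in air, m/s

        # Hypothetical sensing points (e.g., three monitored reflection spots), metres
        sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
        true_src = np.array([2.5, 1.8])             # invented source position

        # Arrival-time differences relative to sensor 0: the data TDOA provides
        dists = np.linalg.norm(sensors - true_src, axis=1)
        tdoa = (dists - dists[0]) / C

        def residuals(xy):
            d = np.linalg.norm(sensors - xy, axis=1)
            return (d - d[0]) / C - tdoa

        est = least_squares(residuals, x0=np.array([1.0, 1.0])).x
        print("estimated source position:", est)    # recovers ~[2.5, 1.8]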

  18. Determination of Jet Noise Radiation Source Locations using a Dual Sideline Cross-Correlation/Spectrum Technique

    NASA Technical Reports Server (NTRS)

    Allen, C. S.; Jaeger, S. M.

    1999-01-01

    The goal of our efforts is to extrapolate nearfield jet noise measurements to the geometric far field, where the jet noise sources appear to radiate from a single point. To accomplish this, information about the location of noise sources in the jet plume, the radiation patterns of the noise sources, and the sound pressure level distribution of the radiated field must be obtained. Since source locations and radiation patterns cannot be found with simple single-microphone measurements, a more complicated method must be used.
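
    In the simplest free-field approximation, extrapolating a level measured near an apparent source location to the geometric far field amounts to applying a spherical-spreading loss of 20*log10(r2/r1) dB. The sketch below shows only that approximation with made-up numbers; it is not the dual-sideline cross-correlation technique itself:

        import numpy as np

        def extrapolate_spl(spl_near_db, r_near, r_far):
            """Free-field point-source extrapolation: level falls by 20*log10(r_far/r_near)."""
            return spl_near_db - 20.0 * np.log10(r_far / r_near)

        # Invented example: 110 dB measured 5 m from the apparent source location,
        # extrapolated to a 150 m far-field sideline.
        print(extrapolate_spl(110.0, 5.0, 150.0))   # about 80.5 dB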

  19. Perception of Animacy from the Motion of a Single Sound Object.

    PubMed

    Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel

    2015-02-01

    Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.

  20. Acoustic positioning for space processing experiments

    NASA Technical Reports Server (NTRS)

    Whymark, R. R.

    1974-01-01

    An acoustic positioning system is described that is adaptable to a range of processing chambers and furnace systems. Operation at temperatures exceeding 1000 C is demonstrated in experiments involving the levitation of liquid and solid glass materials up to several ounces in weight. The system consists of a single source of sound that is beamed at a reflecting surface placed a distance away. Stable levitation is achieved at a succession of discrete energy minima contained throughout the volume between the reflector and the sound source. Several specimens can be handled at one time. Metal discs up to 3 inches in diameter can be levitated, as can solid spheres of dense material up to 0.75 inch in diameter, and liquids can be freely suspended in 1-g in the form of near-spherical droplets up to 0.25 inch in diameter or flattened liquid discs up to 0.6 inch in diameter. Larger specimens may be handled by increasing the size of the sound source or by reducing the sound frequency.

  1. Object localization using a biosonar beam: how opening your mouth improves localization.

    PubMed

    Arditi, G; Weiss, A J; Yovel, Y

    2015-08-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.

  2. Object localization using a biosonar beam: how opening your mouth improves localization

    PubMed Central

    Arditi, G.; Weiss, A. J.; Yovel, Y.

    2015-01-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions. PMID:26361552

  3. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  4. The nocturnal acoustical intensity of the intensive care environment: an observational study.

    PubMed

    Delaney, Lori J; Currie, Marian J; Huang, Hsin-Chia Carol; Lopez, Violeta; Litton, Edward; Van Haren, Frank

    2017-01-01

    The intensive care unit (ICU) environment exposes patients to noise levels that may result in substantial sleep disruption. There is a need to accurately describe the intensity pattern and source of noise in the ICU in order to develop effective sound abatement strategies. The objectives of this study were to determine nocturnal noise levels and their variability and the related sources of noise within an Australian tertiary ICU. An observational cross-sectional study was conducted in a 24-bed open-plan ICU. Sound levels were recorded overnight during three nights at 5-s epochs using Extech (SDL 600) sound monitors. Noise sources were concurrently logged by two research assistants. The mean recorded ambient noise level in the ICU was 52.85 decibels (dB) (standard deviation (SD) 5.89), with a maximum noise recording of 98.3 dB(A). All recorded measurements exceeded the WHO recommendations. Noise variability per minute ranged from 9.9 to 44 dB(A), with peak noise levels >70 dB(A) occurring 10 times/hour (SD 11.4). Staff were identified as the most common source, accounting for 35% of all noise. Mean noise levels in single-patient rooms compared with open-bed areas were 53.5 vs 53 dB (p = 0.37), respectively. Mean noise levels exceeded those recommended by the WHO, corresponding to an acoustic intensity 193 times greater than recommended, and demonstrated a high degree of unpredictable variability, with the primary noise source being staff conversations. The lack of a protective effect of single rooms and the contribution of staff to noise levels are important factors when considering sound abatement strategies.
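
    The reported "193 times greater" figure follows from the decibel scale: assuming the WHO night-time guideline being compared against is 30 dB(A), the intensity ratio is 10^((52.85 - 30)/10), roughly 193. The sketch below shows that arithmetic, along with energy-based (rather than arithmetic) averaging of SPL samples; the epoch values are invented:

        import numpy as np

        def mean_spl(spl_samples_db):
            """Energy-average SPL samples: convert to intensity, average, convert back."""
            levels = np.asarray(spl_samples_db, dtype=float)
            return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

        samples = [48.2, 51.0, 55.3, 49.8, 62.1]          # invented 5-s epoch readings, dB(A)
        print(f"energy-averaged SPL: {mean_spl(samples):.1f} dB(A)")

        # Intensity ratio of the reported mean (52.85 dB) to an assumed 30 dB(A) guideline
        print(f"intensity ratio: {10 ** ((52.85 - 30.0) / 10.0):.0f}")   # ~193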

  5. Postflight analysis of the single-axis acoustic system on SPAR VI and recommendations for future flights

    NASA Technical Reports Server (NTRS)

    Naumann, R. J.; Oran, W. A.; Whymark, R. R.; Rey, C.

    1981-01-01

    The single axis acoustic levitator that was flown on SPAR VI malfunctioned. The results of a series of tests, analyses, and investigation of hypotheses that were undertaken to determine the probable cause of failure are presented, together with recommendations for future flights of the apparatus. The most probable causes of the SPAR VI failure were lower than expected sound intensity due to mechanical degradation of the sound source, and an unexpected external force that caused the experiment sample to move radially and eventually be lost from the acoustic energy well.

  6. Patch nearfield acoustic holography combined with sound field separation technique applied to a non-free field

    NASA Astrophysics Data System (ADS)

    Bi, ChuanXing; Jing, WenQian; Zhang, YongBin; Xu, Liang

    2015-02-01

    The conventional nearfield acoustic holography (NAH) is usually based on the assumption of free-field conditions, and it also requires that the measurement aperture be larger than the actual source. This paper focuses on the case in which neither of the above-mentioned requirements can be met, and examines the feasibility of reconstructing the sound field radiated by a partial source, based on double-layer pressure measurements made in a non-free field, by using patch NAH combined with a sound field separation technique. In addition, the sensitivity of the reconstructed result to measurement error is analyzed in detail. Two experiments, involving two speakers in an exterior space and one speaker inside a car cabin, are presented. The experimental results demonstrate that patch NAH based on single-layer pressure measurements cannot obtain a satisfactory result due to the influence of disturbing sources and reflections, while patch NAH based on double-layer pressure measurements can successfully remove these influences and reconstruct the patch sound field effectively.

  7. Short-Latency, Goal-Directed Movements of the Pinnae to Sounds That Produce Auditory Spatial Illusions

    PubMed Central

    McClaine, Elizabeth M.; Yin, Tom C. T.

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848

  8. Short-latency, goal-directed movements of the pinnae to sounds that produce auditory spatial illusions.

    PubMed

    Tollin, Daniel J; McClaine, Elizabeth M; Yin, Tom C T

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a "phantom" sound located between the sources. Consistent with localization dominance, for delays from 400 μs to approximately 10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (approximately 30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved.

  9. Sound field separation with sound pressure and particle velocity measurements.

    PubMed

    Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-12-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
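
    A drastically simplified, one-dimensional analogue conveys the idea behind such separation: with the complex pressure known at two points along the propagation axis, the outgoing and incoming plane-wave amplitudes follow from a 2x2 linear system at each frequency. This is only a plane-wave illustration with invented amplitudes, not the equivalent-source formulation of the paper:

        import numpy as np

        c = 343.0
        f = 500.0                              # frequency, Hz (invented)
        k = 2 * np.pi * f / c                  # wavenumber
        x1, x2 = 0.00, 0.05                    # two measurement positions, metres

        # Invented complex amplitudes of the outgoing (A) and incoming (B) plane waves
        A_true, B_true = 1.0 + 0.5j, 0.3 - 0.2j
        p = lambda x: A_true * np.exp(-1j * k * x) + B_true * np.exp(1j * k * x)

        # Separation: [p(x1), p(x2)] = M @ [A, B]
        M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                      [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
        A_est, B_est = np.linalg.solve(M, [p(x1), p(x2)])
        print(A_est, B_est)                    # recovers the invented amplitudes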

  10. Harmonic Hopping, and Both Punctuated and Gradual Evolution of Acoustic Characters in Selasphorus Hummingbird Tail-Feathers

    PubMed Central

    Clark, Christopher James

    2014-01-01

    Models of character evolution often assume a single mode of evolutionary change, such as continuous, or discrete. Here I provide an example in which a character exhibits both types of change. Hummingbirds in the genus Selasphorus produce sound with fluttering tail-feathers during courtship. The ancestral character state within Selasphorus is production of sound with an inner tail-feather, R2, in which the sound usually evolves gradually. Calliope and Allen's Hummingbirds have evolved autapomorphic acoustic mechanisms that involve feather-feather interactions. I develop a source-filter model of these interactions. The ‘source’ comprises feather(s) that are both necessary and sufficient for sound production, and are aerodynamically coupled to neighboring feathers, which act as filters. Filters are unnecessary or insufficient for sound production, but may evolve to become sources. Allen's Hummingbird has evolved to produce sound with two sources, one with feather R3, another frequency-modulated sound with R4, and their interaction frequencies. Allen's R2 retains the ancestral character state, a ∼1 kHz “ghost” fundamental frequency masked by R3, which is revealed when R3 is experimentally removed. In the ancestor to Allen's Hummingbird, the dominant frequency has ‘hopped’ to the second harmonic without passing through intermediate frequencies. This demonstrates that although the fundamental frequency of a communication sound may usually evolve gradually, occasional jumps from one character state to another can occur in a discrete fashion. Accordingly, mapping acoustic characters on a phylogeny may produce misleading results if the physical mechanism of production is not known. PMID:24722049

  11. Development of the mathematical model for design and verification of acoustic modal analysis methods

    NASA Astrophysics Data System (ADS)

    Siner, Alexander; Startseva, Maria

    2016-10-01

    To reduce turbofan noise, it is necessary to develop methods for analyzing the sound field generated by the blade machinery; such methods are called modal analysis. Because modal analysis methods are complex, and testing them against full-scale measurements is expensive and tedious, it is necessary to construct mathematical models that allow modal analysis algorithms to be tested quickly and cheaply. In this work, a model is presented that allows single modes to be set in the channel and the generated sound field to be analyzed. Modal analysis of the sound generated by a ring array of point sound sources is performed. A comparison of experimental and numerical modal analysis results is also presented.

  12. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    NASA Technical Reports Server (NTRS)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system's performance using recorded aircraft flyovers.

  13. Evaluation of the Acoustic Measurement Capability of the NASA Langley V/STOL Wind Tunnel Open Test Section with Acoustically Absorbent Ceiling and Floor Treatments

    NASA Technical Reports Server (NTRS)

    Theobald, M. A.

    1978-01-01

    The single source location used for helicopter model studies was utilized in a study to determine the distances and directions upstream of the model at which accurate measurements of the direct acoustic field could be obtained. The method used was to measure the decrease of sound pressure levels with distance from a noise source and thereby determine the Hall radius as a function of frequency and direction. Test arrangements and procedures are described. Graphs show the normalized sound pressure level versus distance curves for the glass fiber floor treatment and for the foam floor treatment.

  14. Accurate Sound Localization in Reverberant Environments is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    PubMed Central

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-01-01

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. PMID:19376072

  15. The effects of environmental variability and spatial sampling on the three-dimensional inversion problem.

    PubMed

    Bender, Christopher M; Ballard, Megan S; Wilson, Preston S

    2014-06-01

    The overall goal of this work is to quantify the effects of environmental variability and spatial sampling on the accuracy and uncertainty of estimates of the three-dimensional ocean sound-speed field. In this work, ocean sound speed estimates are obtained with acoustic data measured by a sparse autonomous observing system using a perturbative inversion scheme [Rajan, Lynch, and Frisk, J. Acoust. Soc. Am. 82, 998-1017 (1987)]. The vertical and horizontal resolution of the solution depends on the bandwidth of acoustic data and on the quantity of sources and receivers, respectively. Thus, for a simple, range-independent ocean sound speed profile, a single source-receiver pair is sufficient to estimate the water-column sound-speed field. On the other hand, an environment with significant variability may not be fully characterized by a large number of sources and receivers, resulting in uncertainty in the solution. This work explores the interrelated effects of environmental variability and spatial sampling on the accuracy and uncertainty of the inversion solution through a set of case studies. Synthetic data representative of the ocean variability on the New Jersey shelf are used.
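
    At its core, the perturbative inversion referenced above is a regularized linear inverse problem: small sound-speed perturbations are linearly related to the measured perturbations through a sensitivity kernel, and sparse sampling and noise are handled by regularization. The toy example below solves such a system with Tikhonov regularization; the kernel, noise level, and regularization weight are invented and do not reproduce the cited scheme:

        import numpy as np

        rng = np.random.default_rng(0)
        n_obs, n_layers = 6, 20                                    # few observations, many unknowns
        G = rng.random((n_obs, n_layers)) / n_layers               # synthetic sensitivity kernel
        dc_true = 5.0 * np.sin(np.linspace(0, np.pi, n_layers))    # sound-speed perturbation, m/s
        d = G @ dc_true + rng.normal(0, 1e-3, n_obs)               # noisy synthetic data

        # Tikhonov-regularized least squares: minimize ||G dc - d||^2 + mu^2 ||dc||^2
        mu = 0.05
        A = np.vstack([G, mu * np.eye(n_layers)])
        b = np.concatenate([d, np.zeros(n_layers)])
        dc_est, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(np.round(dc_est, 2))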

  16. Numerical Modelling of the Sound Fields in Urban Streets with Diffusely Reflecting Boundaries

    NASA Astrophysics Data System (ADS)

    KANG, J.

    2002-12-01

    A radiosity-based theoretical/computer model has been developed to study the fundamental characteristics of the sound fields in urban streets resulting from diffusely reflecting boundaries, and to investigate the effectiveness of architectural changes and urban design options on noise reduction. Comparison between the theoretical prediction and the measurement in a scale model of an urban street shows very good agreement. Computations using the model in hypothetical rectangular streets demonstrate that though the boundaries are diffusely reflective, the sound attenuation along the length is significant, typically at 20-30 dB/100 m. The sound distribution in a cross-section is generally even unless the cross-section is very close to the source. In terms of the effectiveness of architectural changes and urban design options, it has been shown that over 2-4 dB extra attenuation can be obtained either by increasing boundary absorption evenly or by adding absorbent patches on the façades or the ground. Reducing building height has a similar effect. A gap between buildings can provide about 2-3 dB extra sound attenuation, especially in the vicinity of the gap. The effectiveness of air absorption on increasing sound attenuation along the length could be 3-9 dB at high frequencies. If a treatment is effective with a single source, it is also effective with multiple sources. In addition, it has been demonstrated that if the façades in a street are diffusely reflective, the sound field of the street does not change significantly whether the ground is diffusely or geometrically reflective.

  17. Community Response to Multiple Sound Sources: Integrating Acoustic and Contextual Approaches in the Analysis

    PubMed Central

    Lercher, Peter; De Coensel, Bert; Dekonink, Luc; Botteldooren, Dick

    2017-01-01

    Sufficient data document the widespread prevalence of sound exposure from mixed traffic sources in many nations. Furthermore, consideration of the potential effects of combined sound exposure is required in legal procedures such as environmental health impact assessments. Nevertheless, current practice still uses single-source exposure-response functions. It is silently assumed that those standard exposure-response curves also accommodate mixed exposures, although some evidence from experimental and field studies casts doubt on this practice. The ALPNAP study population (N = 1641) contains sufficiently large subgroups with combinations of rail-highway, highway-main road and rail-highway-main road sound exposure. In this paper we apply several approaches suggested in the literature to investigate exposure-response curves and their major determinants in the case of exposure to multiple traffic sources. High/moderate annoyance and full-scale mean annoyance served as outcomes. The results show several limitations of the current approaches. Even given the inherent methodological limitations (energy-equivalent summation of sound, rating of overall annoyance), consideration of the main contextual factors jointly occurring with the sources (such as vibration and air pollution) or of coping activities and judgments of the wider-area soundscape increases the explained variance from up to 8% (bivariate) and up to 15% (base adjustments) to up to 55% (full contextual model). The added predictors vary significantly depending on the source combination (e.g., significant vibration effects with main road/railway, but not highway). Although no significant interactions were found, the observed additive effects are of public health importance. Especially in the case of a three-source exposure situation, the overall annoyance is already high at lower levels and the contribution of the acoustic indicators is small compared with the non-acoustic and contextual predictors. Noise mapping needs to go down to levels of 40 dBA Lden to ensure the protection of quiet areas and to prevent the silent "filling up" of these areas with new sound sources. Eventually, to better predict annoyance in the exposure range between 40 and 60 dBA and to support the protection of quiet areas in cities and rural areas in planning, sound indicators need to be oriented toward the noticeability of sound, and future studies and environmental impact assessments need to consider other traffic-related by-products (air quality, vibration, coping strain).

  18. Single-channel mixed signal blind source separation algorithm based on multiple ICA processing

    NASA Astrophysics Data System (ADS)

    Cheng, Xiefeng; Li, Ji

    2017-01-01

    Taking the separation of the fetal heart sound from the mixed signal obtained by an electronic stethoscope as the research background, this paper puts forward a single-channel mixed-signal blind source separation algorithm based on multiple ICA processing. First, empirical mode decomposition (EMD) decomposes the single-channel mixed signal into multiple orthogonal signal components, which are then processed by ICA. The resulting independent signal components are called independent sub-components of the mixed signal. Then, by combining the independent sub-components with the single-channel mixed signal, the single channel is expanded into multiple channels, which turns the under-determined blind source separation problem into a well-posed one. ICA processing is then applied to obtain an estimate of the source signal. Finally, if the separation is unsatisfactory, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. Simulation results show that the algorithm separates single-channel mixed physiological signals effectively.
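
    A rough sketch of the pipeline described, decomposing the single channel with EMD, stacking the resulting components as a pseudo-multichannel observation, and applying ICA, can be written with common open-source tools. The package names (PyEMD, scikit-learn), the synthetic signal, and the number of components are assumptions for illustration, not the authors' implementation:

        import numpy as np
        from PyEMD import EMD                      # pip install EMD-signal (assumed tool)
        from sklearn.decomposition import FastICA

        fs = 1000
        t = np.arange(0, 5, 1 / fs)
        # Invented stand-in for a stethoscope mixture (two periodic components plus noise)
        mixed = (np.sin(2 * np.pi * 1.2 * t)
                 + 0.4 * np.sin(2 * np.pi * 2.3 * t)
                 + 0.05 * np.random.randn(t.size))

        # Step 1: EMD turns the single channel into several intrinsic mode functions
        imfs = EMD().emd(mixed)                    # shape: (n_imfs, n_samples)

        # Step 2: stack the IMFs with the original channel as a pseudo-multichannel
        # observation and run ICA to estimate independent components
        X = np.vstack([imfs, mixed[None, :]])
        ica = FastICA(n_components=min(4, X.shape[0]), random_state=0)
        components = ica.fit_transform(X.T)        # columns are estimated sources
        print(components.shape)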

  19. Optimum sensor placement for microphone arrays

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.

    Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single-microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single-microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. The shape and gain advantage of the capture region for these techniques are described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
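
    The first step described, cross-power spectrum phase time-delay estimation for a microphone pair (widely known as GCC-PHAT), can be sketched as follows. This is a generic textbook implementation with an invented signal and delay, not the modified Omologo-Svaizer expression used in the thesis:

        import numpy as np

        def gcc_phat(x1, x2, fs):
            """Delay of x2 relative to x1 (seconds) via the cross-power spectrum phase."""
            n = len(x1) + len(x2)
            X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
            R = np.conj(X1) * X2
            R /= np.abs(R) + 1e-12                 # PHAT weighting: keep phase only
            cc = np.fft.irfft(R, n)
            cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
            return (np.argmax(np.abs(cc)) - n // 2) / fs

        fs = 16000
        sig = np.random.randn(fs)                  # invented source signal
        delay = 12                                 # samples
        x2 = np.concatenate((np.zeros(delay), sig[:-delay]))
        print(gcc_phat(sig, x2, fs) * fs)          # ~12: positive lag means x2 lags x1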

  20. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  1. Single-sensor multispeaker listening with acoustic metamaterials

    PubMed Central

    Xie, Yangbo; Tsai, Tsung-Han; Konneker, Adam; Popa, Bogdan-Ioan; Brady, David J.; Cummer, Steven A.

    2015-01-01

    Designing a “cocktail party listener” that functionally mimics the selective perception of a human auditory system has been pursued over the past decades. By exploiting acoustic metamaterials and compressive sensing, we present here a single-sensor listening device that separates simultaneous overlapping sounds from different sources. The device with a compact array of resonant metamaterials is demonstrated to distinguish three overlapping and independent sources with 96.67% correct audio recognition. Segregation of the audio signals is achieved using physical layer encoding without relying on source characteristics. This hardware approach to multichannel source separation can be applied to robust speech recognition and hearing aids and may be extended to other acoustic imaging and sensing applications. PMID:26261314

  2. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
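
    The bookkeeping implied by the hypothesis is compact: a world-centric source azimuth is the head-centric azimuth given by the acoustic cues plus the current head orientation supplied by vision and/or the vestibular system. A minimal sketch of that conversion, with invented angles and a sign convention chosen for illustration, is:

        def world_centric_azimuth(head_centric_deg, head_orientation_deg):
            """Combine an acoustically derived head-centric azimuth with head orientation."""
            return (head_centric_deg + head_orientation_deg + 180.0) % 360.0 - 180.0

        # Listener's head turned 30 degrees to the left (-30 in world coordinates);
        # acoustic cues place the source 40 degrees to the right of the head.
        print(world_centric_azimuth(40.0, -30.0))  # 10 degrees: the source itself has not moved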

  3. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging “periodicity-tagged” segregation of competing speech in rooms

    PubMed Central

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single-vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double-vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation; specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights to perceptual organization of complex acoustic scenes under realistically challenging listening conditions. PMID:25628545

  4. Study on sound-speed dispersion in a sandy sediment at frequency ranges of 0.5-3 kHz and 90-170 kHz

    NASA Astrophysics Data System (ADS)

    Yu, Sheng-qi; Liu, Bao-hua; Yu, Kai-ben; Kan, Guang-ming; Yang, Zhi-guo

    2017-03-01

    In order to study the properties of sound-speed dispersion in a sandy sediment, the sound speed was measured both at high frequency (90-170 kHz) and low frequency (0.5-3 kHz) in laboratory environments. At high frequency, a sampling measurement was conducted with boiled and uncooked sand samples collected from the bottom of a large water tank. The sound speed was directly obtained through transmission measurement using a single source and a single hydrophone. At low frequency, an in situ measurement was conducted in the water tank, where the sandy sediment had been homogeneously paved at the bottom for a long time. The sound speed was indirectly inverted from the traveling time of signals received by three buried hydrophones in the sandy sediment and the geometry of the experiment. The results show that the mean sound speed is approximately 1710-1713 m/s, with a weak positive gradient, in the sand sample after being boiled (as a method to eliminate bubbles as much as possible) at high frequency, which agrees well with the predictions of Biot theory, the effective density fluid model (EDFM), and Buckingham's theory. However, the sound speed in the uncooked sandy sediment decreases markedly (by about 80%) at both high frequency and low frequency due to the presence of numerous bubbles, and the sound-speed dispersion shows a weak negative gradient at high frequency. Finally, a water-unsaturated Biot model is presented to try to explain the decrease of sound speed in the sandy sediment containing numerous bubbles.

  5. Mathematically trivial control of sound using a parametric beam focusing source.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2011-01-01

    By exploiting a case usually regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). As with a dipole model in which one source acts as the primary sound source and the other as a control sound source, the control effect in minimizing the total acoustic power depends on the distance between the two. When the distance becomes zero, the total acoustic power becomes null, hence nothing less than a trivial case. Practical constraints, however, make it difficult to place a control source close enough to the primary source. By projecting the sound beam of a parametric array loudspeaker onto the target sound source (the primary source), a virtual sound source may be created on the target sound source, thereby enabling the collocation of the sources. In order to further ensure the feasibility of the trivial case, a PBFS is then introduced in an effort to match the sizes of the two sources. The reflected sound wave of the PBFS, which is tantamount to the virtual sound source output, suppresses the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology.
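
    The distance dependence described above is captured by the classical two-monopole result for minimum total radiated power: with an optimally driven control source a distance d from the primary source, the residual power fraction is 1 - sinc^2(kd), which goes to zero as kd approaches 0 (the "trivial" collocated case). The sketch below evaluates that textbook expression; it is illustrative only and is not the paper's PBFS analysis:

        import numpy as np

        def residual_power_fraction(kd):
            """W_min / W_primary for one optimally driven control monopole at distance d."""
            s = np.sinc(kd / np.pi)                # numpy's sinc(x) is sin(pi x)/(pi x)
            return 1.0 - s ** 2

        for kd in [0.01, 0.5, 1.0, np.pi]:
            print(f"kd = {kd:.2f}: residual power = {residual_power_fraction(kd):.3f} of primary alone")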

  6. A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene

    PubMed Central

    Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling

    2016-01-01

    A flexible sound source is essential in a fully flexible system, and it is difficult to integrate a conventional sound source based on a piezoelectric part into such a system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser-induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound over a full 360 degrees. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applied in consumer electronics, multi-media systems, and ultrasonic detection and imaging.

  7. Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences.

    PubMed

    Nilsson, Mats E; Schenkman, Bo N

    2016-02-01

    Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system's ability to suppress distracting sound reflections, whereas echolocation would benefit from "unsuppressing" these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted-age-matched (mean age = 54 y), and 42 sighted-young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group's superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also increased ability to unsuppress lagging clicks. This may be related to the blind person's experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people's experience may offset age-related decline in ITD sensitivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  8. A study of sound generation in subsonic rotors, volume 2

    NASA Technical Reports Server (NTRS)

    Chalupnik, J. D.; Clark, L. T.

    1975-01-01

    Computer programs were developed for use in the analysis of sound generation by subsonic rotors. Program AIRFOIL computes the spectrum of radiated sound from a single airfoil immersed in a laminar flow field. Program ROTOR extends this to a rotating frame, and provides a model for sound generation in subsonic rotors. The program also computes tone sound generation due to steady-state forces on the blades. Program TONE uses a moving source analysis to generate a time series for an array of forces moving in a circular path. The resultant time series are then Fourier transformed to render the results in spectral form. Program SDATA is a standard time series analysis package. It reads in two discrete time series, forms auto- and cross-covariances, and normalizes these to form correlations. The program then transforms the covariances to yield auto and cross power spectra by means of a Fourier transformation.
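
    The SDATA-style processing, forming auto- and cross-covariances from two discrete time series and Fourier transforming them into auto- and cross-power spectra, corresponds to estimators that scipy provides directly today. A small modern equivalent with invented signals might look like this (it is not the original program):

        import numpy as np
        from scipy import signal

        fs = 2048
        t = np.arange(0, 4, 1 / fs)
        x = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)   # invented series 1
        y = np.roll(x, 10) + 0.3 * np.random.randn(t.size)                # invented series 2

        f, Pxx = signal.welch(x, fs=fs, nperseg=1024)    # auto power spectrum of x
        _, Pyy = signal.welch(y, fs=fs, nperseg=1024)    # auto power spectrum of y
        _, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)   # cross power spectrum

        coherence = np.abs(Pxy) ** 2 / (Pxx * Pyy)       # normalized cross-spectrum
        print(f[np.argmax(Pxx)])                         # the 120 Hz tone dominates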

  9. Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources

    DOEpatents

    Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA

    2007-03-13

    A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as animate sound sources such as the human voice, or from machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  10. Auditory scene analysis in school-aged children with developmental language disorders

    PubMed Central

    Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.

    2014-01-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430

  11. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization that has wide applicability with regard to source type, acoustic environment, and time waveform.

  12. Modeling underwater noise propagation from marine hydrokinetic power devices through a time-domain, velocity-pressure solution

    DOE PAGES

    Hafla, Erin; Johnson, Erick; Johnson, C. Nathan; ...

    2018-06-01

    Marine hydrokinetic (MHK) devices generate electricity from the motion of tidal and ocean currents, as well as ocean waves, to provide an additional source of renewable energy available to the United States. These devices are a source of anthropogenic noise in the marine ecosystem and must meet regulatory guidelines that mandate a maximum amount of noise that may be generated. In the absence of measured levels from in situ deployments, a model for predicting the propagation of sound from an array of MHK sources in a real environment is essential. A set of coupled, linearized velocity-pressure equations in the time-domain are derived and presented in this paper, which are an alternative solution to the Helmholtz and wave equation methods traditionally employed. Discretizing these equations on a three-dimensional (3D), finite-difference grid ultimately permits a finite number of complex sources and spatially varying sound speeds, bathymetry, and bed composition. The solution to this system of equations has been parallelized in an acoustic-wave propagation package developed at Sandia National Labs, called Paracousti. This work presents the broadband sound pressure levels from a single source in two-dimensional (2D) ideal and Pekeris wave-guides and in a 3D domain with a sloping boundary. Furthermore, the paper concludes with a demonstration of Paracousti for an array of MHK sources in a simple wave-guide.
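
    The coupled, linearized velocity-pressure equations discretized on a staggered finite-difference grid can be illustrated in one dimension. The sketch below is a generic leapfrog acoustic FDTD scheme under simple assumptions (uniform medium, rigid ends, invented source); it is not the Paracousti implementation:

        import numpy as np

        # Assumed uniform medium and grid parameters
        c, rho = 1500.0, 1000.0            # sound speed (m/s) and density (kg/m^3) of sea water
        dx = 1.0                           # grid spacing, m
        dt = 0.5 * dx / c                  # time step satisfying the CFL condition
        nx, nt = 400, 800

        p = np.zeros(nx)                   # pressure at cell centres
        v = np.zeros(nx + 1)               # particle velocity at cell faces (staggered grid)

        for n in range(nt):
            # Momentum equation: rho * dv/dt = -dp/dx
            v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
            # Continuity equation: dp/dt = -rho * c^2 * dv/dx
            p -= dt * rho * c ** 2 / dx * (v[1:] - v[:-1])
            # Point source: a Gaussian pulse injected at the centre cell
            p[nx // 2] += np.exp(-((n * dt - 0.02) ** 2) / (2 * 0.004 ** 2))

        print(p.max())                     # a pulse is propagating on the grid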

  13. Modeling underwater noise propagation from marine hydrokinetic power devices through a time-domain, velocity-pressure solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafla, Erin; Johnson, Erick; Johnson, C. Nathan

    Marine hydrokinetic (MHK) devices generate electricity from the motion of tidal and ocean currents, as well as ocean waves, to provide an additional source of renewable energy available to the United States. These devices are a source of anthropogenic noise in the marine ecosystem and must meet regulatory guidelines that mandate a maximum amount of noise that may be generated. In the absence of measured levels from in situ deployments, a model for predicting the propagation of sound from an array of MHK sources in a real environment is essential. A set of coupled, linearized velocity-pressure equations in the time domain is derived and presented in this paper, which is an alternative solution to the Helmholtz and wave equation methods traditionally employed. Discretizing these equations on a three-dimensional (3D), finite-difference grid ultimately permits a finite number of complex sources and spatially varying sound speeds, bathymetry, and bed composition. The solution to this system of equations has been parallelized in an acoustic-wave propagation package developed at Sandia National Labs, called Paracousti. This work presents the broadband sound pressure levels from a single source in two-dimensional (2D) ideal and Pekeris waveguides and in a 3D domain with a sloping boundary. Furthermore, the paper concludes with a demonstration of Paracousti for an array of MHK sources in a simple waveguide.

  14. Low-frequency acoustic pressure, velocity, and intensity thresholds in a bottlenose dolphin (Tursiops truncatus) and white whale (Delphinapterus leucas)

    NASA Astrophysics Data System (ADS)

    Finneran, James J.; Carder, Donald A.; Ridgway, Sam H.

    2002-01-01

    The relative contributions of acoustic pressure and particle velocity to the low-frequency, underwater hearing abilities of the bottlenose dolphin (Tursiops truncatus) and white whale (Delphinapterus leucas) were investigated by measuring (masked) hearing thresholds while manipulating the relationship between the pressure and velocity. This was accomplished by varying the distance within the near field of a single underwater sound projector (experiment I) and using two underwater sound projectors and an active sound control system (experiment II). The results of experiment I showed no significant change in pressure thresholds as the distance between the subject and the sound source was changed. In contrast, velocity thresholds tended to increase and intensity thresholds tended to decrease as the source distance decreased. These data suggest that acoustic pressure is a better indicator of threshold, compared to particle velocity or mean active intensity, in the subjects tested. Interpretation of the results of experiment II (the active sound control system) was difficult because of complex acoustic conditions and the unknown effects of the subject on the generated acoustic field; however, these data also tend to support the results of experiment I and suggest that odontocete thresholds should be reported in units of acoustic pressure, rather than intensity.
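
    The distinct pressure and velocity behaviors near the projector follow from simple near-field acoustics. The sketch below, assuming an idealized monopole source in water (the actual projectors and frequencies of the study are not modeled), shows how the particle-velocity-to-pressure ratio grows as the measurement point moves inside the near field (small k·r), which is why the relationship between the two quantities can be manipulated by changing source distance.

```python
import numpy as np

# For a monopole, u = p/(rho*c) * (1 + 1/(j*k*r)), so |u|/|p| grows as k*r shrinks.
# Frequency and distances below are arbitrary example values.

rho, c = 1000.0, 1500.0          # water density [kg/m^3], sound speed [m/s]
f = 300.0                        # example low frequency [Hz]
k = 2 * np.pi * f / c            # wavenumber [rad/m]

for r in (0.25, 0.5, 1.0, 2.0, 4.0):                    # source-subject distances [m]
    ratio = np.abs(1 + 1 / (1j * k * r)) / (rho * c)    # |u|/|p| [(m/s)/Pa]
    excess_db = 20 * np.log10(ratio * rho * c)          # dB above the plane-wave value
    print(f"r = {r:4.2f} m  k*r = {k * r:5.2f}  velocity excess = {excess_db:4.1f} dB")
```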

  15. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
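
    A minimal sketch of the spherical-harmonic (SH) decomposition idea is given below: a sampled directivity pattern is projected onto a low-order SH basis by least squares, yielding the weights of the elementary SH sources. The cardioid pattern, sampling grid, and order are illustrative assumptions and have no connection to the paper's measured directivities.

```python
import numpy as np
from scipy.special import sph_harm

order = 3                                       # maximum SH order (assumption)
theta = np.linspace(0, 2 * np.pi, 36)           # azimuth samples
phi = np.linspace(0.05, np.pi - 0.05, 18)       # polar samples (avoid the poles)
T, P = np.meshgrid(theta, phi)

# Example directivity: a cardioid pointing along +z
directivity = (0.5 * (1 + np.cos(P))).ravel().astype(complex)

# Build the SH basis matrix, one column per (n, m)
cols, labels = [], []
for n in range(order + 1):
    for m in range(-n, n + 1):
        cols.append(sph_harm(m, n, T.ravel(), P.ravel()))
        labels.append((n, m))
Y = np.column_stack(cols)

# Least-squares SH weights of the sampled directivity
coeffs, *_ = np.linalg.lstsq(Y, directivity, rcond=None)
for (n, m), cw in zip(labels, coeffs):
    if abs(cw) > 1e-6:
        print(f"(n={n}, m={m}) weight magnitude {abs(cw):.3f}")
```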

  16. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    NASA Technical Reports Server (NTRS)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1996-01-01

    An active noise control system using a compact sound source is effective to reduce aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single- and multi-channel control systems are used to control the fan blade passage frequency (BPF) tone, and the BPF tone plus the first harmonic of the BPF tone, for a plane wave excitation. A multi-channel control system is used to control any spinning mode. The multi-channel control system is also used to control both fan tones and a high pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.
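
    The adaptive filtered-x LMS idea referred to above can be sketched in a few lines for a single channel and a single tone. The reference signal, secondary-path model, filter length, and step size below are illustrative assumptions, not values from the patent; multi-channel control of spinning modes would extend the same update to matrices of filters.

```python
import numpy as np

# Minimal single-channel filtered-x LMS sketch for one tone (example values only).
fs, f_tone = 8000, 2000                  # sample rate and tone frequency [Hz]
n = np.arange(40000)
x = np.cos(2 * np.pi * f_tone * n / fs)                 # reference signal
d = 0.8 * np.cos(2 * np.pi * f_tone * n / fs + 0.6)     # disturbance at error mic

s = np.array([0.0, 0.6, 0.3])            # assumed secondary path (actuator -> mic)
s_hat = s.copy()                         # identified model of that path

L, mu = 8, 0.005                         # control-filter length and step size
w = np.zeros(L)
x_buf = np.zeros(L)                      # reference history for the controller
y_buf = np.zeros(s.size)                 # controller-output history for the path
xs_buf = np.zeros(s_hat.size)            # reference history for the path model
xf_buf = np.zeros(L)                     # filtered-reference history
err = np.zeros(n.size)

for i in n:
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = x[i]
    y = w @ x_buf                        # control signal
    y_buf = np.roll(y_buf, 1)
    y_buf[0] = y
    e = d[i] + s @ y_buf                 # residual at the error sensor
    xs_buf = np.roll(xs_buf, 1)
    xs_buf[0] = x[i]
    xf_buf = np.roll(xf_buf, 1)
    xf_buf[0] = s_hat @ xs_buf           # filtered reference sample
    w -= mu * e * xf_buf                 # filtered-x LMS update
    err[i] = e

print("mean-square error, first vs last 1000 samples:",
      float(np.mean(err[:1000] ** 2)), float(np.mean(err[-1000:] ** 2)))
```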

  17. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    NASA Technical Reports Server (NTRS)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1994-01-01

    An active noise control system using a compact sound source is effective to reduce aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single- and multi-channel control systems are used to control the fan blade passage frequency (BPF) tone, and the BPF tone plus the first harmonic of the BPF tone, for a plane wave excitation. A multi-channel control system is used to control any spinning mode. The multi-channel control system is also used to control both fan tones and a high pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.

  18. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
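
    The core numerical step, solving for equivalent source strengths from measured pressures and then stabilizing the iteration by averaging, can be hedged into a generic sketch. The matrix G below is a random stand-in for the convective time-domain Green's-function operator of the paper, and the Landweber-style iteration with a running average is only one plausible reading of "solved iteratively ... with time averaging".

```python
import numpy as np

# Generic sketch: measured pressures p modelled as G @ q; q found iteratively,
# with the iterates averaged to damp noise-driven instability. G, p, and the
# step size are stand-ins, not the paper's convective Green's function.

rng = np.random.default_rng(0)
M, N = 64, 20                      # microphones, equivalent sources
G = rng.standard_normal((M, N))    # stand-in propagation matrix
q_true = rng.standard_normal(N)
p = G @ q_true + 0.05 * rng.standard_normal(M)   # noisy measurements

mu = 1.0 / np.linalg.norm(G, 2) ** 2   # Landweber step size
q = np.zeros(N)
q_avg = np.zeros(N)
for k in range(1, 201):
    q += mu * G.T @ (p - G @ q)        # gradient step toward the data
    q_avg += (q - q_avg) / k           # running average of the iterates

print("relative error, last iterate :", np.linalg.norm(q - q_true) / np.linalg.norm(q_true))
print("relative error, averaged sol.:", np.linalg.norm(q_avg - q_true) / np.linalg.norm(q_true))
```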

  19. Measurement of Correlation Between Flow Density, Velocity, and Density*velocity(sup 2) with Far Field Noise in High Speed Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.

    2002-01-01

    To locate noise sources in high-speed jets, the sound pressure fluctuations p', measured at far-field locations, were correlated with each of the radial velocity v, density ρ, and ρv² fluctuations measured at various points in the jet plumes. The experiments follow the cause-and-effect method of sound source identification, where the measured correlations are related, respectively, to the first and second source terms of Lighthill's equation. Three fully expanded, unheated plumes of Mach number 0.95, 1.4, and 1.8 were studied for this purpose. The velocity and density fluctuations were measured simultaneously using a recently developed, non-intrusive, point measurement technique based on molecular Rayleigh scattering. It was observed that along the jet centerline the density fluctuation spectra S_ρ have different shapes than the radial velocity spectra S_v, while data obtained from the peripheral shear layer show similarity between the two spectra. Density fluctuations in the jet showed significantly higher correlation than either ρv² or v fluctuations. It is found that a single-point correlation from the peak sound-emitting region at the end of the potential core can account for nearly 10% of all noise at 30° to the jet axis. The correlation representing the effectiveness of a longitudinal quadrupole in generating noise at 90° to the jet axis is found to be zero within experimental uncertainty. In contrast, ρv² fluctuations were better correlated with the sound pressure fluctuations at the 30° location. The strongest source of sound is found to lie on the centerline and beyond the end of the potential core.

  20. Behavioral responses of a harbor porpoise (Phocoena phocoena) to playbacks of broadband pile driving sounds.

    PubMed

    Kastelein, Ronald A; van Heerden, Dorianne; Gransier, Robin; Hoek, Lean

    2013-12-01

    The high underwater sound pressure levels (SPLs) produced during pile driving to build offshore wind turbines may affect harbor porpoises. To estimate the discomfort threshold of pile driving sounds, a porpoise in a quiet pool was exposed to playbacks (46 strikes/min) at five SPLs (6 dB steps: 130-154 dB re 1 μPa). The spectrum of the impulsive sound resembled the spectrum of pile driving sound at tens of kilometers from the pile driving location in shallow water such as that found in the North Sea. The animal's behavior during test and baseline periods was compared. At and above a received broadband SPL of 136 dB re 1 μPa [zero-to-peak sound pressure level: 151 dB re 1 μPa; t90: 126 ms; sound exposure level of a single strike (SELss): 127 dB re 1 μPa² s] the porpoise's respiration rate increased in response to the pile driving sounds. At higher levels, he also jumped out of the water more often. Wild porpoises are expected to move tens of kilometers away from offshore pile driving locations; response distances will vary with context, the sounds' source level, parameters influencing sound propagation, and background noise levels. Copyright © 2013 Elsevier Ltd. All rights reserved.
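
    For reference, the received-level metrics quoted above (broadband SPL, zero-to-peak SPL, and single-strike SEL) can be computed from a pressure time series as in the sketch below. The synthetic damped pulse stands in for a recorded strike; the paper's own analysis windows (e.g., the t90 window) are not reproduced.

```python
import numpy as np

# Received-level metrics from one (synthetic) pile-driving strike.
fs = 48000                                   # sample rate [Hz]
t = np.arange(0, 0.3, 1 / fs)
p = 40.0 * np.exp(-t / 0.05) * np.sin(2 * np.pi * 400 * t)   # pressure [Pa]

p_ref = 1e-6                                 # 1 micropascal reference
spl_rms = 20 * np.log10(np.sqrt(np.mean(p**2)) / p_ref)      # broadband SPL
spl_pk = 20 * np.log10(np.max(np.abs(p)) / p_ref)            # zero-to-peak SPL
sel_ss = 10 * np.log10(np.sum(p**2) / fs / p_ref**2)         # single-strike SEL

print(f"SPL(rms)  = {spl_rms:.1f} dB re 1 uPa")
print(f"SPL(0-pk) = {spl_pk:.1f} dB re 1 uPa")
print(f"SELss     = {sel_ss:.1f} dB re 1 uPa^2 s")
```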

  1. Decadal trends in Indian Ocean ambient sound.

    PubMed

    Miksis-Olds, Jennifer L; Bradley, David L; Niu, Xiaoyue Maggie

    2013-11-01

    The increase of ocean noise documented in the North Pacific has sparked concern on whether the observed increases are a global or regional phenomenon. This work provides evidence of low frequency sound increases in the Indian Ocean. A decade (2002-2012) of recordings made off the island of Diego Garcia, UK in the Indian Ocean was parsed into time series according to frequency band and sound level. Quarterly sound level comparisons between the first and last years were also performed. The combination of time series and temporal comparison analyses over multiple measurement parameters produced results beyond those obtainable from a single parameter analysis. The ocean sound floor has increased over the past decade in the Indian Ocean. Increases were most prominent in recordings made south of Diego Garcia in the 85-105 Hz band. The highest sound level trends differed between the two sides of the island; the highest sound levels decreased in the north and increased in the south. Rate, direction, and magnitude of changes among the multiple parameters supported interpretation of source functions driving the trends. The observed sound floor increases are consistent with concurrent increases in shipping, wind speed, wave height, and blue whale abundance in the Indian Ocean.

  2. How Do Honeybees Attract Nestmates Using Waggle Dances in Dark and Noisy Hives?

    PubMed Central

    Hasegawa, Yuji; Ikeno, Hidetoshi

    2011-01-01

    It is well known that honeybees share information related to food sources with nestmates using a dance language that is representative of symbolic communication among non-primates. Some honeybee species engage in visually apparent behavior, walking in a figure-eight pattern inside their dark hives. It has been suggested that sounds play an important role in this dance language, even though a variety of wing vibration sounds are produced by honeybee behaviors in hives. It has been shown that dances emit sounds primarily at about 250–300 Hz, which is in the same frequency range as honeybees' flight sounds. Thus the exact mechanism whereby honeybees attract nestmates using waggle dances in such a dark and noisy hive is as yet unclear. In this study, we used a flight simulator in which honeybees were attached to a torque meter in order to analyze the component of bees' orienting response caused only by sounds, and not by odor or by vibrations sensed by their legs. We showed using single sound localization that honeybees preferred sounds around 265 Hz. Furthermore, according to sound discrimination tests using sounds of the same frequency, honeybees preferred rhythmic sounds. Our results demonstrate that frequency and rhythmic components play a complementary role in localizing dance sounds. Dance sounds were presumably developed to share information in a dark and noisy environment. PMID:21603608

  3. Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology

    NASA Astrophysics Data System (ADS)

    Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya

    A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources, the spatial sound is constructed at the selected point by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposing algorithm as well as the virtual source representation is confirmed.

  4. A method for evaluating the relation between sound source segregation and masking

    PubMed Central

    Lutfi, Robert A.; Liu, Ching-Ju

    2011-01-01

    Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979

  5. A double-panel active segmented partition module using decoupled analog feedback controllers: numerical model.

    PubMed

    Sagers, Jason D; Leishman, Timothy W; Blotter, Jonathan D

    2009-06-01

    Low-frequency sound transmission has long plagued the sound isolation performance of lightweight partitions. Over the past 2 decades, researchers have investigated actively controlled structures to prevent sound transmission from a source space into a receiving space. An approach using active segmented partitions (ASPs) seeks to improve low-frequency sound isolation capabilities. An ASP is a partition which has been mechanically and acoustically segmented into a number of small individually controlled modules. This paper provides a theoretical and numerical development of a single ASP module configuration, wherein each panel of the double-panel structure is independently actuated and controlled by an analog feedback controller. A numerical model is developed to estimate frequency response functions for the purpose of controller design, to understand the effects of acoustic coupling between the panels, to predict the transmission loss of the module in both passive and active states, and to demonstrate that the proposed ASP module will produce bidirectional sound isolation.

  6. An avoidance behavior model for migrating whale populations

    NASA Astrophysics Data System (ADS)

    Buck, John R.; Tyack, Peter L.

    2003-04-01

    A new model is presented for the avoidance behavior of migrating marine mammals in the presence of a noise stimulus. This model assumes that each whale will adjust its movement pattern near a sound source to maintain its exposure below its own individually specific maximum received sound-pressure level, called its avoidance threshold. The probability distribution function (PDF) of this avoidance threshold across individuals characterizes the migrating population. The avoidance threshold PDF may be estimated by comparing the distribution of migrating whales during playback and control conditions at their closest point of approach to the sound source. The proposed model was applied to the January 1998 experiment which placed a single acoustic source from the U.S. Navy SURTASS-LFA system in the migration corridor of grey whales off the California coast. This analysis found that the median avoidance threshold for this migrating grey whale population was 135 dB, with 90% confidence that the median threshold was within +/-3 dB of this value. This value is less than the 141 dB value for 50% avoidance obtained when Malme et al.'s 1984 "Probability of Avoidance" model was applied to the same data. [Work supported by ONR.]
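
    A toy numerical reading of the avoidance-threshold model is sketched below: each whale's closest point of approach (CPA) is set by where the received level, under a simple spherical-spreading law, reaches its individual threshold, and the median threshold can then be read back from the CPA distribution. The source level, spreading law, and threshold distribution are invented example values; the actual study estimated the PDF by comparing playback and control distributions.

```python
import numpy as np

# Toy illustration of the avoidance-threshold model (invented example values).
rng = np.random.default_rng(1)
SL = 190.0                                  # source level [dB re 1 uPa @ 1 m]
thresholds = rng.normal(135.0, 5.0, 5000)   # individual avoidance thresholds [dB]

# Spherical spreading: RL(r) = SL - 20*log10(r)  ->  CPA where RL equals the threshold
cpa = 10 ** ((SL - thresholds) / 20.0)      # metres

# Recover the median threshold from the simulated CPA distribution
median_threshold = SL - 20 * np.log10(np.median(cpa))
print(f"median CPA = {np.median(cpa):.0f} m, implied median threshold = {median_threshold:.1f} dB")
```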

  7. Audio reproduction for personal ambient home assistance: concepts and evaluations for normal-hearing and hearing-impaired persons.

    PubMed

    Huber, Rainer; Meis, Markus; Klink, Karin; Bartsch, Christian; Bitzer, Joerg

    2014-01-01

    Within the Lower Saxony Research Network Design of Environments for Ageing (GAL), a personal activity and household assistant (PAHA), an ambient reminder system, has been developed. One of its central output modalities for interacting with the user is sound. The study presented here evaluated three different system technologies for sound reproduction using up to five loudspeakers, including the "phantom source" concept. Moreover, a technology for hearing loss compensation for the mostly older users of the PAHA was implemented and evaluated. Evaluation experiments with 21 normal-hearing and hearing-impaired test subjects were carried out. The results show that in direct comparison of the sound presentation concepts, presentation by the single TV speaker was most preferred, whereas the phantom source concept received the highest acceptance ratings as far as the general concept is concerned. The localization accuracy of the phantom source concept was good as long as the exact listening position was known to the algorithm and speech stimuli were used. Most subjects preferred the original signals over the pre-processed, dynamic-compressed signals, although processed speech was often described as being clearer.

  8. Monaural Sound Localization Based on Structure-Induced Acoustic Resonance

    PubMed Central

    Kim, Keonwook; Kim, Youngwoong

    2015-01-01

    A physical structure such as a cylindrical pipe controls the propagated sound spectrum in a predictable way that can be used to localize the sound source. This paper designs a monaural sound localization system based on multiple pyramidal horns around a single microphone. The acoustic resonance within the horn provides a periodicity in the spectral domain known as the fundamental frequency which is inversely proportional to the radial horn length. Once the system accurately estimates the fundamental frequency, the horn length and corresponding angle can be derived by the relationship. The modified Cepstrum algorithm is employed to evaluate the fundamental frequency. In an anechoic chamber, localization experiments over azimuthal configuration show that up to 61% of the proper signal is recognized correctly with 30% misfire. With a speculated detection threshold, the system estimates direction 52% in positive-to-positive and 34% in negative-to-positive decision rate, on average. PMID:25668214
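
    The cepstral step described above, estimating the fundamental spectral ripple imposed by a resonant horn, can be sketched as follows. The comb-like test signal, search range, and sample rate are assumptions for illustration; the modified Cepstrum algorithm of the paper and the actual horn geometry are not reproduced.

```python
import numpy as np

# Cepstrum-based estimate of the fundamental spectral ripple (example values only).
fs = 48000
t = np.arange(0, 0.1, 1 / fs)
f0_true = 1200.0                        # ripple spacing imposed by the "horn" [Hz]
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)

# Crude comb-like colouring: add a delayed copy of the noise (delay = 1/f0)
delay = int(round(fs / f0_true))
x = noise.copy()
x[delay:] += 0.9 * noise[:-delay]

spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))

qmin, qmax = int(fs / 4000), int(fs / 200)     # search 200 Hz - 4 kHz
peak_q = qmin + np.argmax(cepstrum[qmin:qmax])
print(f"estimated fundamental: {fs / peak_q:.0f} Hz (true ripple spacing {f0_true:.0f} Hz)")
```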

  9. System and method for characterizing synthesizing and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.

    2003-01-01

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
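
    A minimal sketch of the transfer-function idea, relating a monitored excitation signal to the measured acoustical output through an H1 spectral estimate, is given below. The band-pass filter standing in for a sound-producing structure, the sample rate, and the segment length are arbitrary assumptions; the patents' electromagnetic-sensor measurements are not modeled.

```python
import numpy as np
from scipy import signal

# H1 transfer-function estimate H(f) = Sxy(f)/Sxx(f) between an excitation x
# and the acoustic output y of a stand-in "structure" (an arbitrary filter).
fs = 8000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 10)                     # excitation signal (10 s)
b, a = signal.butter(2, [0.05, 0.3], btype="band")   # stand-in structure
y = signal.lfilter(b, a, x) + 0.01 * rng.standard_normal(x.size)

f, Sxx = signal.welch(x, fs=fs, nperseg=1024)
_, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)
H = Sxy / Sxx                                        # H1 estimate

# With H in hand the acoustic output for a new excitation can be synthesised
# (multiply its spectrum by H) or cancelled (drive an actuator to produce -H*x).
print("peak |H| =", np.abs(H).max().round(3), "at", f[np.abs(H).argmax()].round(1), "Hz")
```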

  10. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C

    2013-05-21

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  11. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.

    2007-10-16

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  12. Satellite sound broadcasting system, portable reception

    NASA Technical Reports Server (NTRS)

    Golshan, Nasser; Vaisnys, Arvydas

    1990-01-01

    Studies are underway at JPL in the emerging area of Satellite Sound Broadcast Service (SSBS) for direct reception by low cost portable, semi portable, mobile and fixed radio receivers. This paper addresses the portable reception of digital broadcasting of monophonic audio with source material band limited to 5 KHz (source audio comparable to commercial AM broadcasting). The proposed system provides transmission robustness, uniformity of performance over the coverage area and excellent frequency reuse. Propagation problems associated with indoor portable reception are considered in detail and innovative antenna concepts are suggested to mitigate these problems. It is shown that, with the marriage of proper technologies a single medium power satellite can provide substantial direct satellite audio broadcast capability to CONUS in UHF or L Bands, for high quality portable indoor reception by low cost radio receivers.

  13. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m² open field. PMID:24463431
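
    As an illustration of the array-processing step behind such a sensor, the sketch below runs a plain delay-and-sum scan over azimuth for a small circular microphone array and picks the steering direction with the highest output power. The array geometry, sample rate, and simulated source are generic assumptions, not the SoundCompass hardware or firmware.

```python
import numpy as np

# Generic delay-and-sum azimuth scan for a small circular array (example values only).
c, fs = 343.0, 16000
n_mics, radius = 8, 0.05                       # 8 microphones on a 5 cm ring
mic_ang = 2 * np.pi * np.arange(n_mics) / n_mics
mic_xy = radius * np.column_stack([np.cos(mic_ang), np.sin(mic_ang)])

rng = np.random.default_rng(0)
s = rng.standard_normal(fs)                    # 1 s of broadband source signal
t = np.arange(fs) / fs

# Far-field source at 60 deg: mics nearer the source hear the signal earlier
src = np.array([np.cos(np.deg2rad(60)), np.sin(np.deg2rad(60))])
adv = mic_xy @ src / c                         # arrival-time advance per mic [s]
x = np.stack([np.interp(t + a, t, s, left=0.0, right=0.0) for a in adv])

best_az, best_pow = 0, -np.inf
for az in range(0, 360, 2):
    u = np.array([np.cos(np.deg2rad(az)), np.sin(np.deg2rad(az))])
    hyp = mic_xy @ u / c
    # undo the hypothesised advances and sum coherently
    aligned = np.stack([np.interp(t - h, t, xi, left=0.0, right=0.0)
                        for h, xi in zip(hyp, x)])
    p = np.mean(aligned.sum(axis=0) ** 2)
    if p > best_pow:
        best_az, best_pow = az, p

print("estimated source azimuth:", best_az, "degrees")
```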

  14. Investigation of orifice aeroacoustics by means of multi-port methods

    NASA Astrophysics Data System (ADS)

    Sack, Stefan; Åbom, Mats

    2017-10-01

    Comprehensive methods to cascade active multi-ports, e.g., for acoustic network prediction, have until now only been available for plane waves. This paper presents procedures to combine multi-ports with an arbitrary number of considered duct modes. A multi-port method is used to extract complex mode amplitudes from experimental data of single and tandem in-duct orifice plates for Helmholtz numbers up to around 4 and, hence, beyond the cut-on of several higher order modes. The theory of connecting single multi-ports to linear cascades is derived for the passive properties (the scattering of the system) and the active properties (the source cross-spectrum matrix of the system). One aim of this paper is to investigate the influence of the hydrodynamic near field on the accuracy of both the passive and the active predictions in multi-port cascades. The scattering and the source cross-spectrum matrix of tandem orifice configurations are measured for three cases, namely, with a distance between the plates of 10 duct diameters, for which the downstream orifice is outside the jet of the upstream orifice, 4 duct diameters, and 2 duct diameters (both inside the jet). The results are compared with predictions from single orifice measurements. It is shown that the scattering is only sensitive to disturbed inflow in certain frequency ranges where coupling between the flow and sound field exists, whereas the source cross-spectrum matrix is very sensitive to disturbed inflow at all frequencies. An important part of the analysis is based on an eigenvalue analysis of the scattering matrix and the source cross-spectrum matrix to evaluate the potential of sound amplification and dominant source mechanisms.

  15. Effect of Blast Injury on Auditory Localization in Military Service Members.

    PubMed

    Kubli, Lina R; Brungart, Douglas; Northern, Jerry

    Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.

  16. Spherical loudspeaker array for local active control of sound.

    PubMed

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.

  17. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597

  18. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.

  19. Aeroacoustics of Flight Vehicles: Theory and Practice. Volume 2. Noise Control

    DTIC Science & Technology

    1991-08-01

    noisiness, Localization and Precedence: The ability to determine the location of sound sources is one of the major benefits of having binaural hearing... binaural hearing is commonly called the Haas, or precedence, effect (ref. 16). This refers to the ability to hear as a single acoustic event the... propellers are operated at slightly different rpm values, beating interference between the two sources occurs, and the noise level in the cabin rises and

  20. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing the possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.

  1. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in the presence of airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing sound source positions more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and frequency, along with its generation mechanism, is determined and interpreted.

  2. Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls.

    PubMed

    Mouterde, Solveig C; Elie, Julie E; Mathevon, Nicolas; Theunissen, Frédéric E

    2017-03-29

    One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging. SIGNIFICANCE STATEMENT Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons, in the auditory cortex of zebra finches, are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to intensity changes, signals' quality, and decreases in the signal-to-noise ratio. Copyright © 2017 Mouterde et al.

  3. Promoting the perception of two and three concurrent sound objects: An event-related potential study.

    PubMed

    Kocsis, Zsuzsanna; Winkler, István; Bendixen, Alexandra; Alain, Claude

    2016-09-01

    The auditory environment typically comprises several simultaneously active sound sources. In contrast to the perceptual segregation of two concurrent sounds, the perception of three simultaneous sound objects has not yet been studied systematically. We conducted two experiments in which participants were presented with complex sounds containing sound segregation cues (mistuning, onset asynchrony, differences in frequency or amplitude modulation, or in sound location), which were set up to promote the perceptual organization of the tonal elements into one, two, or three concurrent sounds. In Experiment 1, listeners indicated whether they heard one, two, or three concurrent sounds. In Experiment 2, participants watched a silent subtitled movie while EEG was recorded to extract the object-related negativity (ORN) component of the event-related potential. Listeners predominantly reported hearing two sounds when the segregation-promoting manipulations were applied to the same tonal element. When two different tonal elements received manipulations promoting them to be heard as separate auditory objects, participants reported hearing two and three concurrent sound objects with equal probability. The ORN was elicited in most conditions; sounds that included the amplitude- or the frequency-modulation cue generated the smallest ORN amplitudes. Manipulating two different tonal elements yielded numerically and often significantly smaller ORNs than the sum of the ORNs elicited when the same cues were applied to a single tonal element. These results suggest that the ORN reflects the presence of multiple concurrent sounds, but not their number. The ORN results are compatible with the horse-race principle of combining different cues of concurrent sound segregation. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Sound radiation from railway sleepers

    NASA Astrophysics Data System (ADS)

    Zhang, Xianying; Thompson, David J.; Squicciarini, Giacomo

    2016-05-01

    The sleepers supporting the rails of a railway track are an important source of noise at low frequencies. The sound radiation from the sleepers has been calculated using a three-dimensional boundary element model including the effect of both reflective and partially absorptive ground. When the sleeper flexibility and support stiffness are taken into account, it is found that the radiation ratio of the sleeper can be approximated by that of a rigid half-sleeper. When multiple sleepers are excited through the rail, their sound radiation is increased. This effect has been calculated for cases where the sleeper is embedded in a rigid or partially absorptive ground. It is shown that it is sufficient to consider only three sleepers in determining their radiation ratio when installed in track. At low frequencies the vibration of the track is localised to the three sleepers nearest the excitation point whereas at higher frequencies the distance between the sleepers is large enough for them to be treated independently. Consequently the sound radiation increases by up to 5 dB below 100 Hz compared with the result for a single sleeper whereas above 300 Hz the result can be approximated by that for a single sleeper. Measurements on a 1/5 scale model railway track are used to verify the numerical predictions with good agreement being found for all configurations.

  5. Characterization of the acoustic field generated by a horn shaped ultrasonic transducer

    NASA Astrophysics Data System (ADS)

    Hu, B.; Lerch, J. E.; Chavan, A. H.; Weber, J. K. R.; Tamalonis, A.; Suthar, K. J.; DiChiara, A. D.

    2017-09-01

    A horn shaped Langevin ultrasonic transducer used in a single axis levitator was characterized to better understand the role of the acoustic profile in establishing stable traps. The method of characterization included acoustic beam profiling performed by raster scanning an ultrasonic microphone as well as finite element analysis of the horn and its interface with the surrounding air volume. The results of the model are in good agreement with measurements and demonstrate the validity of the approach for both near and far field analyses. Our results show that this style of transducer produces a strong acoustic beam with a total divergence angle of 10°, a near-field point close to the transducer surface and a virtual sound source. These are desirable characteristics for a sound source used for acoustic trapping experiments.

  6. Characterization of the acoustic field generated by a horn shaped ultrasonic transducer

    DOE PAGES

    Hu, B.; Lerch, J. E.; Chavan, A. H.; ...

    2017-09-04

    A horn shaped Langevin ultrasonic transducer used in a single axis levitator was characterized to better understand the role of the acoustic profile in establishing stable traps. The method of characterization included acoustic beam profiling performed by raster scanning an ultrasonic microphone as well as finite element analysis of the horn and its interface with the surrounding air volume. The results of the model are in good agreement with measurements and demonstrate the validity of the approach for both near and far field analyses. Our results show that this style of transducer produces a strong acoustic beam with a total divergence angle of 10 degrees, a near-field point close to the transducer surface and a virtual sound source. These are desirable characteristics for a sound source used for acoustic trapping experiments.

  7. Characterization of the acoustic field generated by a horn shaped ultrasonic transducer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, B.; Lerch, J. E.; Chavan, A. H.

    A horn shaped Langevin ultrasonic transducer used in a single axis levitator was characterized to better understand the role of the acoustic profile in establishing stable traps. The method of characterization included acoustic beam profiling performed by raster scanning an ultrasonic microphone as well as finite element analysis of the horn and its interface with the surrounding air volume. The results of the model are in good agreement with measurements and demonstrate the validity of the approach for both near and far field analyses. Our results show that this style of transducer produces a strong acoustic beam with a total divergence angle of 10 degrees, a near-field point close to the transducer surface and a virtual sound source. These are desirable characteristics for a sound source used for acoustic trapping experiments.

  8. Characterization of the acoustic field generated by a horn shaped ultrasonic transducer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, B.; Lerch, J. E.; Chavan, A. H.

    A horn shaped Langevin ultrasonic transducer used in a single axis levitator was characterized to better understand the role of the acoustic profile in establishing stable traps. The method of characterization included acoustic beam profiling performed by raster scanning an ultrasonic microphone as well as finite element analysis of the horn and its interface with the surrounding air volume. The results of the model are in good agreement with measurements and demonstrate the validity of the approach for both near and far field analyses. Our results show that this style of transducer produces a strong acoustic beam with a total divergence angle of 10 degrees, a near-field point close to the transducer surface and a virtual sound source. These are desirable characteristics for a sound source used for acoustic trapping experiments.

  9. The effects of spatially separated call components on phonotaxis in túngara frogs: evidence for auditory grouping.

    PubMed

    Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J

    2002-01-01

    Numerous animals across disparate taxa must identify and locate complex acoustic signals imbedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in which auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that once grouped the separate call components are weighted differently in recognizing and locating the call, so called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel

  10. ''1/f noise'' in music: Music from 1/f noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voss, R.F.; Clarke, J.

    1978-01-01

    The spectral density of fluctuations in the audio power of many musical selections and of English speech varies approximately as 1/f (f is the frequency) down to a frequency of 5 × 10⁻⁴ Hz. This result implies that the audio-power fluctuations are correlated over all times in the same manner as "1/f noise" in electronic components. The frequency fluctuations of music also have a 1/f spectral density at frequencies down to the inverse of the length of the piece of music. The frequency fluctuations of English speech have a quite different behavior, with a single characteristic time of about 0.1 s, the average length of a syllable. The observations on music suggest that 1/f noise is a good choice for stochastic composition. Compositions in which the frequency and duration of each note were determined by 1/f noise sources sounded pleasing. Those generated by white-noise sources sounded too random, while those generated by 1/f² noise sounded too correlated.
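
    The compositional recipe reported above, using a 1/f source to choose notes, can be sketched as follows. The spectral-shaping generator and the mapping onto a C-major scale are arbitrary choices for illustration; Voss and Clarke's original noise sources and mapping are not reproduced.

```python
import numpy as np

# Draw a sequence with an approximately 1/f power spectrum and map it to pitches.
rng = np.random.default_rng(0)
n = 256                                   # number of notes

# Shape white noise so its power spectrum falls off as 1/f
spec = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n)
freqs[0] = freqs[1]                       # avoid division by zero at DC
pink = np.fft.irfft(spec / np.sqrt(freqs), n)

# Map the 1/f sequence onto two octaves of a C-major scale
scale = [0, 2, 4, 5, 7, 9, 11, 12, 14, 16, 17, 19, 21, 23]   # semitones above C4
idx = np.interp(pink, (pink.min(), pink.max()), (0, len(scale) - 1)).round().astype(int)
midi_notes = 60 + np.array(scale)[idx]    # 60 = MIDI note number of C4

print("first 16 MIDI pitches:", midi_notes[:16])
```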

  11. Tracking of Pacific walruses in the Chukchi Sea using a single hydrophone.

    PubMed

    Mouy, Xavier; Hannay, David; Zykov, Mikhail; Martin, Bruce

    2012-02-01

    The vocal repertoire of Pacific walruses includes underwater sound pulses referred to as knocks and bell-like calls. An extended acoustic monitoring program was performed in summer 2007 over a large region of the eastern Chukchi Sea using autonomous seabed-mounted acoustic recorders. Walrus knocks were identified in many of the recordings, and most of these sounds included multiple bottom- and surface-reflected signals. This paper investigates the use of a localization technique based on relative multipath arrival times (RMATs) for potential behavior studies. First, knocks are detected using a semi-automated kurtosis-based algorithm. Then RMATs are matched to values predicted by a ray-tracing model. Walrus tracks with vertical and horizontal movements were obtained. The tracks included repeated dives between 4.0 m and 15.5 m depth and a deep dive to the sea bottom (53 m). The depths at which bell-like sounds are produced, the average knock production rate, and source level estimates of the knocks were determined. Bell sounds were produced at all depths throughout the dives. Average knock production rates varied from 59 to 75 knocks/min. The average source level of the knocks was estimated at 177.6 ± 7.5 dB re 1 μPa peak @ 1 m. © 2012 Acoustical Society of America
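
    The knock-detection step, a kurtosis-based screen for impulsive events, can be sketched as below. The synthetic background, pulse shape, window length, and threshold are illustrative assumptions; the semi-automated algorithm and the multipath matching of the paper are not reproduced.

```python
import numpy as np
from scipy.stats import kurtosis

# Sliding-window kurtosis screen for impulsive pulses (example values only).
fs = 16000
rng = np.random.default_rng(0)
x = 0.05 * rng.standard_normal(fs * 5)            # 5 s of background noise
for t0 in (1.2, 2.7, 4.1):                        # three synthetic "knocks"
    i = int(t0 * fs)
    x[i:i + 200] += np.exp(-np.arange(200) / 40.0) * np.sin(2 * np.pi * 900 * np.arange(200) / fs)

win, hop = 1024, 512
threshold = 10.0                                  # excess-kurtosis threshold
hits = []
for start in range(0, x.size - win, hop):
    k = kurtosis(x[start:start + win])            # Fisher (excess) kurtosis
    if k > threshold:
        hits.append(start / fs)

print("candidate knock times [s]:", [round(t, 2) for t in hits])
```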

  12. Speech segregation based-on binaural cue: interaural time difference (itd) and interaural level difference (ild)

    NASA Astrophysics Data System (ADS)

    Nur Farid, Mifta; Arifianto, Dhany

    2016-11-01

    A person suffering from hearing loss can be helped by using hearing aids, and the best-performing hearing aids are binaural hearing aids because they are similar to the human auditory system. In a conversation at a cocktail party, a person can focus on a single conversation even though the background sound and other people's conversations are quite loud. This phenomenon is known as the cocktail party effect. Earlier studies have explained that binaural hearing makes an important contribution to the cocktail party effect. In this study, therefore, separation of two sound sources is performed on binaural input captured by two microphone sensors, based on both binaural cues, the interaural time difference (ITD) and the interaural level difference (ILD), using a binary mask. To estimate the ITD, a cross-correlation method is used in which the ITD is represented as the time delay of the peak shift in each time-frequency unit. The binary mask is estimated by relating the pattern of ITD and ILD to the relative strength of the target, computed statistically using probability density estimation. The sound source separation performs well, with speech intelligibility (percent correct words) of 86% and an SNR of 3 dB.
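
    A minimal sketch of the two binaural cues is given below: the ITD is taken from the lag of the interaural cross-correlation peak and the ILD from the interaural energy ratio, here for one broadband frame. In the study these cues are computed per time-frequency unit and fed into a statistical binary-mask estimator; the frame length, true ITD/ILD, and sample rate below are invented values.

```python
import numpy as np

# ITD from the cross-correlation peak, ILD from the energy ratio (example values).
fs = 16000
rng = np.random.default_rng(0)
s = rng.standard_normal(fs // 4)                 # 250 ms source frame

true_itd = 12                                    # samples (~0.75 ms)
true_ild_db = 6.0
left = s
right = np.zeros_like(s)
right[true_itd:] = 10 ** (-true_ild_db / 20) * s[:-true_itd]

# ITD: lag of the cross-correlation maximum within +/- 1 ms
max_lag = int(0.001 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(left[max(0, -l):len(s) - max(0, l)],
                right[max(0, l):len(s) - max(0, -l)]) for l in lags]
itd_est = lags[int(np.argmax(xcorr))]

# ILD: energy ratio between the two ears
ild_est = 10 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))

print(f"estimated ITD = {itd_est} samples, ILD = {ild_est:.1f} dB")
```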

  13. Active room compensation for sound reinforcement using sound field separation techniques.

    PubMed

    Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena

    2018-03-01

    This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.

  14. 24 CFR 5.703 - Physical condition standards for HUD housing that is decent, safe, sanitary and in good repair...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... components, such as fencing and retaining walls, grounds, lighting, mailboxes/project signs, parking lots... exterior. Each building on the site must be structurally sound, secure, habitable, and in good repair. Each... source of potable water (note for example that single room occupancy units need not contain water...

  15. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech for a hands-free speech interface with high quality. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be classified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR, and MEXT of Japan.]
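
    The CSP coefficient for a microphone pair is essentially a phase-only (GCC-PHAT) cross-correlation; the hedged sketch below computes it for one pair and converts the peak lag to a far-field arrival angle. The addition over pairs and the GMM identification stage from the abstract are not reproduced, and the microphone spacing and test signals are illustrative.

        import numpy as np

        def csp_doa(x1, x2, fs, mic_distance, c=343.0):
            # Cross-power spectrum phase (GCC-PHAT) between one microphone pair;
            # the peak lag is converted to an angle assuming a far-field source.
            n = len(x1) + len(x2)
            X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
            cross = X1 * np.conj(X2)
            csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
            max_lag = int(np.ceil(mic_distance / c * fs))       # physically possible delays
            lags = np.concatenate((np.arange(max_lag + 1), np.arange(-max_lag, 0)))
            values = np.concatenate((csp[:max_lag + 1], csp[-max_lag:]))
            tau = lags[np.argmax(values)] / fs
            return np.degrees(np.arcsin(np.clip(tau * c / mic_distance, -1.0, 1.0)))

        fs, d = 16000, 0.2
        src = np.random.randn(fs)
        delay = 3                                               # samples of inter-mic delay
        x1, x2 = src[delay:], src[:-delay]
        print(csp_doa(x1, x2, fs, d))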

  16. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  17. Application of acoustic radiosity methods to noise propagation within buildings

    NASA Astrophysics Data System (ADS)

    Muehleisen, Ralph T.; Beamer, C. Walter

    2005-09-01

    The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.

  18. Global Bathymetry: Machine Learning for Data Editing

    NASA Astrophysics Data System (ADS)

    Sandwell, D. T.; Tea, B.; Freund, Y.

    2017-12-01

    The accuracy of global bathymetry depends primarily on the coverage and accuracy of the sounding data and secondarily on the depth predicted from gravity. A main focus of our research is to add newly-available data to the global compilation. Most data sources have 1-12% of erroneous soundings caused by a wide array of blunders and measurement errors. Over the years we have hand-edited this data using undergraduate employees at UCSD (440 million soundings at 500 m resolution). We are developing a machine learning approach to refine the flagging of the older soundings and provide automated editing of newly-acquired soundings. The approach has three main steps: 1) Combine the sounding data with additional information that may inform the machine learning algorithm. The additional parameters include: depth predicted from gravity; distance to the nearest sounding from other cruises; seafloor age; spreading rate; sediment thickness; and vertical gravity gradient. 2) Use available edit decisions as training data sets for a boosted tree algorithm with a binary logistic objective function and L2 regularization. Initial results with poor quality single beam soundings show that the automated algorithm matches the hand-edited data 89% of the time. The results show that most of the information for detecting outliers comes from predicted depth with secondary contributions from distance to the nearest sounding and longitude. A similar analysis using very high quality multibeam data shows that the automated algorithm matches the hand-edited data 93% of the time. Again, most of the information for detecting outliers comes from predicted depth with secondary contributions from distance to the nearest sounding and longitude. 3) The third step in the process is to use the machine learning parameters, derived from the training data, to edit 12 million newly acquired single beam sounding data provided by the National Geospatial-Intelligence Agency. The output of the learning algorithm will be confidence rated, indicating which edits the algorithm is confident about and which it is not. We expect the majority (about 90%) of edits to be confident and not to require human intervention. Human intervention will be required only on the roughly 10% of unconfident decisions, thus reducing the amount of human work by a factor of 10 or more.
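
    As a hedged sketch of step 2, the snippet below trains a boosted-tree classifier with a binary logistic objective and L2 regularization; xgboost is assumed only as a convenient library, and the feature matrix is a random placeholder standing in for the parameters listed in the abstract.

        import numpy as np
        import xgboost as xgb   # assumed library; any boosted-tree package would do

        # Placeholder features standing in for: predicted depth from gravity, distance
        # to the nearest sounding from another cruise, seafloor age, spreading rate,
        # sediment thickness, and vertical gravity gradient.
        rng = np.random.default_rng(0)
        X_train = rng.random((1000, 6))
        y_train = (rng.random(1000) > 0.9).astype(int)   # 1 = flagged (bad) sounding

        model = xgb.XGBClassifier(
            objective="binary:logistic",   # binary logistic objective, as in the abstract
            reg_lambda=1.0,                # L2 regularization
            n_estimators=200,
            max_depth=4,
        )
        model.fit(X_train, y_train)

        p_bad = model.predict_proba(rng.random((10, 6)))[:, 1]
        needs_human = (p_bad > 0.1) & (p_bad < 0.9)      # only unconfident edits reviewed
        print(p_bad.round(2), needs_human)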

  19. Ejectable underwater sound source recovery assembly

    NASA Technical Reports Server (NTRS)

    Irick, S. C. (Inventor)

    1974-01-01

    An underwater sound source is described that may be ejectably mounted on any mobile device that travels over water, to facilitate the location and recovery of the device when submerged. A length of flexible line maintains a connection between the mobile device and the sound source. During recovery, the submerged device is located by homing in on the sound source. The assembly is described as particularly useful in the recovery of spent rocket motors that bury in the ocean floor upon impact.

  20. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  1. Newborn infants detect cues of concurrent sound segregation.

    PubMed

    Bendixen, Alexandra; Háden, Gábor P; Németh, Renáta; Farkas, Dávid; Török, Miklós; Winkler, István

    2015-01-01

    Separating concurrent sounds is fundamental for a veridical perception of one's auditory surroundings. Sound components that are harmonically related and start at the same time are usually grouped into a common perceptual object, whereas components that are not in harmonic relation or have different onset times are more likely to be perceived in terms of separate objects. Here we tested whether neonates are able to pick up the cues supporting this sound organization principle. We presented newborn infants with a series of complex tones with their harmonics in tune (creating the percept of a unitary sound object) and with manipulated variants, which gave the impression of two concurrently active sound sources. The manipulated variant had either one mistuned partial (single-cue condition) or the onset of this mistuned partial was also delayed (double-cue condition). Tuned and manipulated sounds were presented in random order with equal probabilities. Recording the neonates' electroencephalographic responses allowed us to evaluate their processing of the sounds. Results show that, in both conditions, mistuned sounds elicited a negative displacement of the event-related potential (ERP) relative to tuned sounds from 360 to 400 ms after sound onset. The mistuning-related ERP component resembles the object-related negativity (ORN) component in adults, which is associated with concurrent sound segregation. Delayed onset additionally led to a negative displacement from 160 to 200 ms, which was probably more related to the physical parameters of the sounds than to their perceptual segregation. The elicitation of an ORN-like response in newborn infants suggests that neonates possess the basic capabilities of segregating concurrent sounds by detecting inharmonic relations between the co-occurring sounds. © 2015 S. Karger AG, Basel.

  2. Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).

    PubMed

    Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M

    2013-07-01

    The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide baseline five hydrophone array and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels compared to other group compositions.

  3. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delay. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
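
    The Extended Kalman Filter used in the manikin experiments is only named here; as a hedged, generic illustration of recursive localization, the sketch below runs a bearing-only EKF for a static 2D source observed from known (moving) sensor positions. The sensor trajectory, noise level, and initial guess are purely illustrative assumptions, not the study's setup.

        import numpy as np

        def ekf_bearing_only(bearings, sensor_xy, r_var=np.radians(2.0) ** 2):
            # Recursive EKF estimate of a static 2D source from noisy bearings
            # measured at known sensor positions.
            x = np.array([1.0, 1.0])                  # initial source guess (m)
            P = np.eye(2) * 25.0                      # initial covariance
            for z, (sx, sy) in zip(bearings, sensor_xy):
                dx, dy = x[0] - sx, x[1] - sy
                r2 = dx ** 2 + dy ** 2
                h = np.arctan2(dy, dx)                # predicted bearing
                H = np.array([-dy / r2, dx / r2])     # Jacobian of h w.r.t. source position
                innov = np.arctan2(np.sin(z - h), np.cos(z - h))  # wrap to [-pi, pi]
                S = H @ P @ H + r_var
                K = P @ H / S
                x = x + K * innov
                P = (np.eye(2) - np.outer(K, H)) @ P
            return x

        true = np.array([3.0, 4.0])
        ang = np.linspace(0, 2 * np.pi, 200)
        sensor_xy = np.stack([0.5 * np.cos(ang), 0.5 * np.sin(ang)], axis=1)
        z = np.arctan2(true[1] - sensor_xy[:, 1], true[0] - sensor_xy[:, 0])
        z += np.radians(1.0) * np.random.randn(len(z))
        print(ekf_bearing_only(z, sensor_xy))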

  4. Wave field synthesis of moving virtual sound sources with complex radiation properties.

    PubMed

    Ahrens, Jens; Spors, Sascha

    2011-11-01

    An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.

  5. On the suitability of ISO 16717-1 reference spectra for rating airborne sound insulation.

    PubMed

    Mašović, Draško B; Pavlović, Dragana S Šumarac; Mijić, Miomir M

    2013-11-01

    A standard proposal for rating airborne sound insulation in buildings [ISO 16717-1 (2012)] defines the reference noise spectra. Since their shapes influence the calculated values of single-number descriptors, reference spectra should approximate well typical noise spectra in buildings. There is, however, very little data in the existing literature on a typical noise spectrum in dwellings. A spectral analysis of common noise sources in dwellings is presented in this paper, as a result of an extensive monitoring of various noisy household activities. Apart from music with strong bass content, the proposed "living" reference spectrum overestimates noise levels at low frequencies.

  6. Recording and Calculating Gunshot Sound—Change of the Volume in Reference to the Distance

    NASA Astrophysics Data System (ADS)

    Nikolaos, Tsiatis E.

    2010-01-01

    An experiment was conducted in an open practice ground (shooting range) regarding the recording of the sound of gunshots. Shots were fired using various types of firearms (seven pistols, five revolvers, two submachine guns, one rifle, and one shotgun) in different calibers, from several distances with reference to the recording sources. Both a conventional sound level meter (SLM) and a measurement microphone were used, placed at a fixed point behind the shooting line. The sound of each shot was recorded by the SLM. At the same time the signal received by the microphone was transferred to a connected computer through an appropriate audio interface with a pre-amplifier. Each sound wave was stored and depicted as a wave function. After the physico-mathematical analysis of these depictions, the volume was calculated in the accepted engineering units (decibels, dB) of sound pressure level (SPL). The distances from the recording sources were 9.60 m, 14.40 m, 19.20 m, and 38.40 m. The experiment was carried out using the following calibers: .22 LR, 6.35 mm (.25 AUTO), 7.62 mm Tokarev (7.62×25), 7.65 mm (.32 AUTO), 9 mm Parabellum (9×19), 9 mm Short (9×17), 9 mm Makarov (9×18), .45 AUTO, .32 S&W, .38 S&W, .38 SPECIAL, .357 Magnum, 7.62 mm Kalashnikov (7.62×39), and 12 GA. Tables are given for the environmental conditions (temperature, humidity, altitude, and barometric pressure), the barrel length of each gun, the technical characteristics of the ammunition used, as well as for the volume taken from the SLM. The data for the sound intensity were collected from 168 gunshots (158 single shots and 10 bursts). According to the results, the volume decreased as the distance increased, as expected. The values appear to follow the inverse square law: for every doubling of the distance from the sound source, the sound intensity diminishes by 5.9904 ± 0.2325 decibels on average. In addition, this provides a means of determining the volume of the gunshot sound produced by a certain type of weapon. A further application could be the calculation of the distance to a firing weapon when the recorded volume is known.
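
    As a quick check of the reported distance dependence, the snippet below applies the free-field spherical-spreading relation, which predicts a drop of about 6 dB per doubling of distance; the reference level used here is an arbitrary placeholder, not a measured value.

        import math

        def spl_at_distance(spl_ref_db, d_ref, d):
            # Free-field spherical spreading: SPL(d) = SPL(d_ref) - 20*log10(d / d_ref).
            return spl_ref_db - 20.0 * math.log10(d / d_ref)

        # Doubling the distance from 9.60 m to 19.20 m predicts a 6.02 dB drop,
        # close to the ~6 dB per doubling reported in the experiment.
        print(spl_at_distance(spl_ref_db=140.0, d_ref=9.60, d=19.20))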

  7. Temporal coherence for pure tones in budgerigars (Melopsittacus undulatus) and humans (Homo sapiens).

    PubMed

    Neilans, Erikson G; Dent, Micheal L

    2015-02-01

    Auditory scene analysis has been suggested as a universal process that exists across all animals. Relative to humans, however, little work has been devoted to how animals perceptually isolate different sound sources. Frequency separation of sounds is arguably the most common parameter studied in auditory streaming, but it is not the only factor contributing to how the auditory scene is perceived. Researchers have found that in humans, even at large frequency separations, synchronous tones are heard as a single auditory stream, whereas asynchronous tones with the same frequency separations are perceived as 2 distinct sounds. These findings demonstrate how both the timing and frequency separation of sounds are important for auditory scene analysis. It is unclear how animals, such as budgerigars (Melopsittacus undulatus), perceive synchronous and asynchronous sounds. In this study, budgerigars and humans (Homo sapiens) were tested on their perception of synchronous, asynchronous, and partially overlapping pure tones using the same psychophysical procedures. Species differences were found between budgerigars and humans in how partially overlapping sounds were perceived, with budgerigars more likely to segregate overlapping sounds and humans more apt to fuse the 2 sounds together. The results also illustrated that temporal cues are particularly important for stream segregation of overlapping sounds. Lastly, budgerigars were found to segregate partially overlapping sounds in a manner predicted by computational models of streaming, whereas humans were not. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  8. Assessing sound exposure from shipping in coastal waters using a single hydrophone and Automatic Identification System (AIS) data.

    PubMed

    Merchant, Nathan D; Witt, Matthew J; Blondel, Philippe; Godley, Brendan J; Smith, George H

    2012-07-01

    Underwater noise from shipping is a growing presence throughout the world's oceans, and may be subjecting marine fauna to chronic noise exposure with potentially severe long-term consequences. The coincidence of dense shipping activity and sensitive marine ecosystems in coastal environments is of particular concern, and noise assessment methodologies which describe the high temporal variability of sound exposure in these areas are needed. We present a method of characterising sound exposure from shipping using continuous passive acoustic monitoring combined with Automatic Identification System (AIS) shipping data. The method is applied to data recorded in Falmouth Bay, UK. Absolute and relative levels of intermittent ship noise contributions to the 24-h sound exposure level are determined using an adaptive threshold, and the spatial distribution of potential ship sources is then analysed using AIS data. This technique can be used to prioritize shipping noise mitigation strategies in coastal marine environments. Copyright © 2012 Elsevier Ltd. All rights reserved.
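
    A hedged sketch of the basic exposure metric behind such assessments: cumulative sound exposure level computed from calibrated pressure samples, with a simple fixed amplitude threshold standing in for the paper's adaptive threshold. The reference pressure follows the underwater convention (1 μPa); signal levels and durations are placeholders.

        import numpy as np

        def sel_db(p, fs, p_ref=1e-6):
            # Cumulative sound exposure level, dB re 1 uPa^2 s, from calibrated
            # pressure samples p (in pascals) sampled at fs Hz.
            exposure = np.sum(p ** 2) / fs             # Pa^2 * s
            return 10.0 * np.log10(exposure / p_ref ** 2)

        def ship_noise_sel(p, fs, threshold_pa):
            # SEL of the samples exceeding a fixed threshold, a crude stand-in for
            # the adaptive threshold used to isolate intermittent ship passages.
            mask = np.abs(p) > threshold_pa
            return sel_db(p[mask], fs) if mask.any() else float("-inf")

        fs = 8000
        p = 0.05 * np.random.randn(fs * 60)                     # one minute of background (Pa)
        p[10 * fs:20 * fs] += 0.5 * np.random.randn(10 * fs)    # a louder ship passage
        print(sel_db(p, fs), ship_noise_sel(p, fs, threshold_pa=0.2))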

  9. Wind-instrument reflection function measurements in the time domain.

    PubMed

    Keefe, D H

    1996-04-01

    Theoretical and computational analyses of wind-instrument sound production in the time domain have emerged as useful tools for understanding musical instrument acoustics, yet there exist few experimental measurements of the air-column response directly in the time domain. A new experimental, time-domain technique is proposed to measure the reflection function response of woodwind and brass-instrument air columns. This response is defined at the location of sound regeneration in the mouthpiece or double reed. A probe assembly comprised of an acoustic source and microphone is inserted directly into the air column entryway using a foam plug to ensure a leak-free fit. An initial calibration phase involves measurements on a single cylindrical tube of known dimensions. Measurements are presented on an alto saxophone and euphonium. The technique has promise for testing any musical instrument air columns using a single probe assembly and foam plugs over a range of diameters typical of air-column entryways.

  10. Flight parameter estimation using instantaneous frequency and direction of arrival measurements from a single acoustic sensor node.

    PubMed

    Lo, Kam W

    2017-03-01

    When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line at constant altitude and constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the flight parameters can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described and their performances are evaluated using both simulated data and real data.
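
    A hedged sketch of the retardation effect the abstract relies on: the frequency received at time t is the source tone Doppler-shifted according to the range rate at the earlier emission time, found here by fixed-point iteration. The tone frequency, speed, and altitude are illustrative values only, not the paper's parameters or algorithms.

        import numpy as np

        def flyover_if(t_rec, f0=100.0, v=70.0, h=300.0, c=343.0, t_cpa=0.0):
            # Instantaneous frequency received from a level, constant-speed flyover.
            # The emission (retarded) time satisfies t_e = t_rec - R(t_e)/c and is
            # found by fixed-point iteration (converges quickly for v << c).
            t_rec = np.asarray(t_rec, dtype=float)
            t_e = t_rec.copy()
            for _ in range(50):
                x = v * (t_e - t_cpa)
                t_e = t_rec - np.sqrt(x ** 2 + h ** 2) / c
            x = v * (t_e - t_cpa)
            r = np.sqrt(x ** 2 + h ** 2)
            r_dot = v * x / r                          # range rate at emission time
            return f0 / (1.0 + r_dot / c)

        print(flyover_if([-10.0, 0.0, 10.0]))          # above f0 approaching, below f0 receding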

  11. Noise Source Identification in a Reverberant Field Using Spherical Beamforming

    NASA Astrophysics Data System (ADS)

    Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang

    Identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of that coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to the exterior sound field, reflections are added to the sound field. Therefore, the source location estimated by conventional methods may contain unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.

  12. Slow Temporal Integration Enables Robust Neural Coding and Perception of a Cue to Sound Source Location.

    PubMed

    Brown, Andrew D; Tollin, Daniel J

    2016-09-21

    In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation), or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded with even complete decorrelation of left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable. In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of both ITD and ILD cues. Here we demonstrate that behavioral ILD sensitivity (in humans) and neural ILD sensitivity (in single neurons of the chinchilla auditory midbrain) remain robust under stimulus conditions that render ITD cues undetectable. This result can be explained by "slow" temporal integration arising from several-millisecond-long windows of excitatory-inhibitory interaction evident in midbrain, but not brainstem, neurons. Such integrative coding can account for the preservation of ILD sensitivity despite even extreme temporal degradations in ecological acoustic stimuli. Copyright © 2016 the authors 0270-6474/16/369908-14$15.00/0.

  13. Reproduction following small group cuttings in virgin Douglas-fir.

    Treesearch

    Norman P. Worthington

    1953-01-01

    Quick and adequate regeneration of Douglas-fir forests as they are harvested is a major forest management problem in the Puget Sound region. Clear-cutting by staggered settings has not always resulted in adequate regeneration even where no part of the area is more than one-fourth mile from a seed source. Single tree selection, experimented with extensively, has many...

  14. A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)

    1996-01-01

    The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic analogy based calculation which regards the surface shear as an acoustically compact dipole source of sound.

  15. CLIVAR Mode Water Dynamics Experiment (CLIMODE), Fall 2006 R/V Oceanus Voyage 434, November 16, 2006-December 3, 2006

    DTIC Science & Technology

    2007-12-01

    except for the dive zero time which needed to be programmed during the cruise when the deployment schedule dates were confirmed. _ ACM - Aanderaa ACM...guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally...battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November

  16. Human annoyance and reactions to hotel room specific noises

    NASA Astrophysics Data System (ADS)

    Everhard, Ian L.

    2004-05-01

    A new formula is presented in which multiple annoyance sources and transmission loss values of any partition are combined to produce a new single number rating of annoyance. The explanation of the formula is based on theoretical psychoacoustics and on survey testing used to create the variables that weight the results. An imaginary hotel room is processed through the new formula and is rated based on theoretical survey results that would be taken by guests of the hotel. The new single number rating compares the multiple sources of annoyance to a single imaginary unbiased source where absolute level is the only factor in stimulating a linear rise in annoyance [Fidell et al., J. Acoust. Soc. Am. 66, 1427 (1979); D. M. Jones and D. E. Broadbent, ``Human performance and noise,'' in Handbook of Noise Control, 3rd ed., edited by C. M. Harris (ASA, New York, 1998), Chap. 24; J. P. Conroy and J. S. Roland, ``STC Field Testing and Results,'' in Sound and Vibration Magazine, Acoustical Publications, pp. 10-15 (July 2003)].

  17. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.

  18. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

    A fundamental role of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of the gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed, as well as mean path angle to the speaker, will also be presented. Results suggest strong localization of the round goby to a sound source, with some differential sound specificity.

  19. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, needs for invisible sound sources and very specific acoustical environment make the use of open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.

  20. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two ways. For sparse-source scenarios, a one-stage algorithm based on a compressive sensing algorithm is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and the Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using the pressure matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
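
    A hedged sketch of the pressure-matching step mentioned in the SFS phase: at a single frequency, loudspeaker weights are obtained by Tikhonov-regularized least squares against a target pressure vector. The transfer matrix here is random, standing in for measured or modeled room responses, and the array sizes merely echo those in the abstract.

        import numpy as np

        def pressure_matching_weights(H, p_target, reg=1e-2):
            # Minimize ||H q - p_target||^2 + reg * ||q||^2 for the loudspeaker
            # weights q at one frequency (H: microphones x loudspeakers).
            n = H.shape[1]
            A = H.conj().T @ H + reg * np.eye(n)
            return np.linalg.solve(A, H.conj().T @ p_target)

        rng = np.random.default_rng(0)
        H = rng.standard_normal((24, 32)) + 1j * rng.standard_normal((24, 32))
        p_target = rng.standard_normal(24) + 1j * rng.standard_normal(24)
        q = pressure_matching_weights(H, p_target, reg=0.1)
        print(np.linalg.norm(H @ q - p_target))        # residual of the reproduced field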

  1. Experiments on the applicability of MAE techniques for predicting sound diffraction by irregular terrains. [Matched Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Berthelot, Yves H.; Pierce, Allan D.; Kearns, James A.

    1987-01-01

    The sound field diffracted by a single smooth hill of finite impedance is studied both analytically, within the context of the theory of Matched Asymptotic Expansions (MAE), and experimentally, under laboratory scale modeling conditions. Special attention is given to the sound field on the diffracting surface and throughout the transition region between the illuminated and the shadow zones. The MAE theory yields integral equations that are amenable to numerical computations. Experimental results are obtained with a spark source producing a pulse of 42 microsec duration and about 130 Pa at 1 m. The insertion loss of the hill is inferred from measurements of the acoustic signals at two locations in the field, with subsequent Fourier analysis on an IBM PC/AT. In general, experimental results support the predictions of the MAE theory, and provide a basis for the analysis of more complicated geometries.

  2. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)

    PubMed Central

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-01-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574

  3. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera).

    PubMed

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-06-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear.

  4. Computational study of the interaction between a shock and a near-wall vortex using a weighted compact nonlinear scheme

    NASA Astrophysics Data System (ADS)

    Zuo, Zhifeng; Maekawa, Hiroshi

    2014-02-01

    The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The generation mechanism of the newly discovered third sound, due to the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.

  5. Effects of exposure to pile-driving sounds on the lake sturgeon, Nile tilapia and hogchoker

    PubMed Central

    Halvorsen, Michele B.; Casper, Brandon M.; Matthews, Frazer; Carlson, Thomas J.; Popper, Arthur N.

    2012-01-01

    Pile-driving and other impulsive sound sources have the potential to injure or kill fishes. One mechanism that produces injuries is the rapid motion of the walls of the swim bladder as it repeatedly contacts nearby tissues. To further understand the involvement of the swim bladder in tissue damage, a specially designed wave tube was used to expose three species to pile-driving sounds. Species included lake sturgeon (Acipenser fulvescens)—with an open (physostomous) swim bladder, Nile tilapia (Oreochromis niloticus)—with a closed (physoclistous) swim bladder and the hogchoker (Trinectes maculatus)—a flatfish without a swim bladder. There were no visible injuries in any of the exposed hogchokers, whereas a variety of injuries were observed in the lake sturgeon and Nile tilapia. At the loudest cumulative and single-strike sound exposure levels (SELcum and SELss respectively), the Nile tilapia had the highest total injuries and the most severe injuries per fish. As exposure levels decreased, the number and severity of injuries were more similar between the two species. These results suggest that the presence and type of swim bladder correlated with injury at higher sound levels, while the extent of injury at lower sound levels was similar for both kinds of swim bladders. PMID:23055066

  6. Effects of exposure to pile-driving sounds on the lake sturgeon, Nile tilapia and hogchoker.

    PubMed

    Halvorsen, Michele B; Casper, Brandon M; Matthews, Frazer; Carlson, Thomas J; Popper, Arthur N

    2012-12-07

    Pile-driving and other impulsive sound sources have the potential to injure or kill fishes. One mechanism that produces injuries is the rapid motion of the walls of the swim bladder as it repeatedly contacts nearby tissues. To further understand the involvement of the swim bladder in tissue damage, a specially designed wave tube was used to expose three species to pile-driving sounds. Species included lake sturgeon (Acipenser fulvescens)--with an open (physostomous) swim bladder, Nile tilapia (Oreochromis niloticus)--with a closed (physoclistous) swim bladder and the hogchoker (Trinectes maculatus)--a flatfish without a swim bladder. There were no visible injuries in any of the exposed hogchokers, whereas a variety of injuries were observed in the lake sturgeon and Nile tilapia. At the loudest cumulative and single-strike sound exposure levels (SEL(cum) and SEL(ss) respectively), the Nile tilapia had the highest total injuries and the most severe injuries per fish. As exposure levels decreased, the number and severity of injuries were more similar between the two species. These results suggest that the presence and type of swim bladder correlated with injury at higher sound levels, while the extent of injury at lower sound levels was similar for both kinds of swim bladders.

  7. Sampling Singular and Aggregate Point Sources of Carbon Dioxide from Space Using OCO-2

    NASA Astrophysics Data System (ADS)

    Schwandner, F. M.; Gunson, M. R.; Eldering, A.; Miller, C. E.; Nguyen, H.; Osterman, G. B.; Taylor, T.; O'Dell, C.; Carn, S. A.; Kahn, B. H.; Verhulst, K. R.; Crisp, D.; Pieri, D. C.; Linick, J.; Yuen, K.; Sanchez, R. M.; Ashok, M.

    2016-12-01

    Anthropogenic carbon dioxide (CO2) sources increasingly tip the natural balance between natural carbon sources and sinks. Space-borne measurements offer opportunities to detect and analyze point source emission signals anywhere on Earth. Singular continuous point source plumes from power plants or volcanoes turbulently mix into their proximal background fields. In contrast, plumes of aggregate point sources such as cities, and transportation or fossil fuel distribution networks, mix into each other and may therefore result in broader and more persistent excess signals of total column averaged CO2 (XCO2). NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory-2 (OCO-2), launched in July 2014 and now leads the afternoon constellation of satellites (A-Train). While continuously collecting measurements in eight footprints across a narrow (<10 km wide) swath, it occasionally cross-cuts coincident emission plumes. For singular point sources like volcanoes and coal fired power plants, we have developed OCO-2 data discovery tools and a proxy detection method for plumes using SO2-sensitive TIR imaging data (ASTER). This approach offers a path toward automating plume detections with subsequent matching and mining of OCO-2 data. We found several distinct singular source CO2 signals. For aggregate point sources, we investigated whether OCO-2's multi-sounding swath observing geometry can reveal intra-urban spatial emission structures in the observed variability of XCO2 data. OCO-2 data demonstrate that we can detect localized excess XCO2 signals of 2 to 6 ppm against suburban and rural backgrounds. Compared to single-shot GOSAT soundings which detected urban/rural XCO2 differences in megacities (Kort et al., 2012), the OCO-2 swath geometry opens up the path to future capabilities enabling urban characterization of greenhouse gases using hundreds of soundings over a city at each satellite overpass. California Institute of Technology

  8. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding the corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then supplied to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct the sound source signal in 3D space in an environment with airflow, as an alternative to numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between the AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  9. Prediction of break-out sound from a rectangular cavity via an elastically mounted panel.

    PubMed

    Wang, Gang; Li, Wen L; Du, Jingtao; Li, Wanyou

    2016-02-01

    The break-out sound from a cavity via an elastically mounted panel is predicted in this paper. The vibroacoustic system model is derived based on the so-called spectro-geometric method in which the solution over each sub-domain is invariably expressed as a modified Fourier series expansion. Unlike the traditional modal superposition methods, the continuity of the normal velocities is faithfully enforced on the interfaces between the flexible panel and the (interior and exterior) acoustic media. A fully coupled vibro-acoustic system is obtained by taking into account the strong coupling between the vibration of the elastic panel and the sound fields on both sides. The typically time-consuming calculation of the quadruple integrals encountered in determining the sound power radiation from a panel is effectively avoided by reducing them, via the discrete cosine transform, to a number of single integrals which are subsequently calculated analytically in closed form. Several numerical examples are presented to validate the system model, understand the effects of panel mounting conditions on sound transmission, and demonstrate the dependence of the "measured" transmission loss on the size of the source room.

  10. Localization of sound sources in a room with one microphone

    NASA Astrophysics Data System (ADS)

    Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre

    2017-08-01

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the difference between the received signals among different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. Design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels of the room that are occupied by a source. What is especially interesting about our solution is that it provides localization of the sound sources not only in the horizontal plane, but in terms of the 3D coordinates inside the room.
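
    A hedged sketch of the sparsity idea: if the single microphone's response to a source in each candidate voxel is known, the occupied voxel can be picked out by a greedy sparse solver such as orthogonal matching pursuit. The dictionary below is a random placeholder, not an actual room transfer function model, and the sizes are illustrative.

        import numpy as np

        def omp(D, y, n_sources=1):
            # Orthogonal matching pursuit: greedily pick the dictionary columns
            # (candidate voxels) that best explain the observation y.
            residual, support = y.copy(), []
            coeffs = np.array([])
            for _ in range(n_sources):
                corr = np.abs(D.T @ residual)
                corr[support] = 0.0
                support.append(int(np.argmax(corr)))
                coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coeffs
            return support, coeffs

        rng = np.random.default_rng(1)
        D = rng.standard_normal((512, 200))            # column j: response for voxel j
        D /= np.linalg.norm(D, axis=0)
        y = 1.5 * D[:, 42] + 0.01 * rng.standard_normal(512)   # source in voxel 42
        print(omp(D, y, n_sources=1))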

  11. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630

  12. Converting a Monopole Emission into a Dipole Using a Subwavelength Structure

    NASA Astrophysics Data System (ADS)

    Fan, Xu-Dong; Zhu, Yi-Fan; Liang, Bin; Cheng, Jian-chun; Zhang, Likun

    2018-03-01

    High-efficiency emission of multipoles is unachievable by a source much smaller than the wavelength, preventing compact acoustic devices for generating directional sound beams. Here, we present a primary scheme towards solving this problem by numerically and experimentally enclosing a monopole sound source in a structure with a dimension of around 1/10 of the sound wavelength to emit a dipolar field. The radiated sound power is found to be more than twice that of a bare dipole. Our study of efficient emission of directional low-frequency sound from a monopole source in a subwavelength space may have applications such as focused ultrasound for imaging, directional underwater sound beams, miniaturized sonar, etc.

  13. Broadband Processing in a Noisy Shallow Ocean Environment: A Particle Filtering Approach

    DOE PAGES

    Candy, J. V.

    2016-04-14

    Here we report that when a broadband source propagates sound in a shallow ocean, the received data can become quite complicated due to temperature-related sound-speed variations and therefore a highly dispersive environment. Noise and uncertainties disrupt this already chaotic environment even further because disturbances propagate through the same inherent acoustic channel. The broadband (signal) estimation/detection problem can be decomposed into a set of narrowband solutions that are processed separately and then combined to achieve more enhancement of signal levels than is available from a single frequency, thereby allowing more information to be extracted and leading to more reliable source detection. A Bayesian solution to the broadband modal function tracking, pressure-field enhancement, and source detection problem is developed that leads to nonparametric estimates of the desired posterior distributions, enabling the estimation of useful statistics and an improved processor/detector. In conclusion, to investigate the processor capabilities, we synthesize an ensemble of noisy, broadband, shallow-ocean measurements to evaluate its overall performance, using an information theoretical metric for the preprocessor and the receiver operating characteristic curve for the detector.
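
    As a rough illustration of the Bayesian tracking stage, the following is a generic bootstrap particle filter applied to a scalar toy problem; it is not the paper's modal-function processor, and the state and noise models are invented for the sketch.

    ```python
    import numpy as np

    # Generic bootstrap particle filter on a scalar toy problem: track a slowly
    # drifting "modal amplitude" from noisy measurements.
    rng = np.random.default_rng(1)
    T, N = 100, 500                                   # time steps, particles
    truth = np.cumsum(0.05 * rng.standard_normal(T))  # hidden random-walk state
    meas = truth + 0.2 * rng.standard_normal(T)       # noisy measurements

    particles = rng.standard_normal(N)
    weights = np.full(N, 1.0 / N)
    estimates = []
    for z in meas:
        particles += 0.05 * rng.standard_normal(N)                  # propagate state model
        weights *= np.exp(-0.5 * ((z - particles) / 0.2) ** 2)      # measurement likelihood
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))               # posterior-mean estimate
        idx = rng.choice(N, size=N, p=weights)                      # resample against degeneracy
        particles, weights = particles[idx], np.full(N, 1.0 / N)

    rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
    print("RMS tracking error:", round(float(rmse), 3))
    ```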

  14. Effect of the spectrum of a high-intensity sound source on the sound-absorbing properties of a resonance-type acoustic lining

    NASA Astrophysics Data System (ADS)

    Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.

    2012-07-01

    Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from the results of interferometric measurements, the total sound pressure level of white noise or the maximal sound pressure level of a pure tone (at every oscillation frequency) needs to be identical to the total sound pressure level of the actual source at the site of acoustic lining on the channel wall.

  15. 3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment

    NASA Astrophysics Data System (ADS)

    Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil

    In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant environment. In addition, spectral notch filtering and directional band boosting techniques are also included to increase the capability for elevation perception. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources such as white noise, sound effects, speech, and music samples. It is shown from the tests that the degrees of perceived elevation achieved by the proposed method are around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
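
    The spectral-notch component of such a scheme can be illustrated with an ordinary IIR notch filter; the 7 kHz centre frequency and Q below are placeholders rather than values from the paper.

    ```python
    import numpy as np
    from scipy.signal import iirnotch, lfilter

    # Rough illustration of the spectral-notch idea behind elevation cues:
    # carve a narrow notch at a (hypothetical) pinna-related frequency.
    fs = 44100
    f_notch, Q = 7000.0, 8.0
    b, a = iirnotch(f_notch, Q, fs=fs)

    rng = np.random.default_rng(4)
    x = rng.standard_normal(fs)                      # one second of white noise
    y = lfilter(b, a, x)
    print(f"broadband RMS before/after notch: {x.std():.3f} / {y.std():.3f}")
    ```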

  16. Techniques and instrumentation for the measurement of transient sound energy flux

    NASA Astrophysics Data System (ADS)

    Watkinson, P. S.; Fahy, F. J.

    1983-12-01

    The evaluation of sound intensity distributions, and sound powers, of essentially continuous sources such as automotive engines, electric motors, production line machinery, furnaces, earth moving machinery and various types of process plant was studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install such machines in quiet, anechoic environments. The potential for sound intensity measurement to provide a means of overcoming these difficulties has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.

  17. Perceptual constancy in auditory perception of distance to railway tracks.

    PubMed

    De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L

    2013-07-01

    Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative, because prior information/experience of the sound source (its source power, its spectrum, and the typical speed at which it moves) is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners, viz., whether they are indoors or outdoors. Asked to estimate the distance to the track of a railway, it is shown that listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.

  18. Recent paleoseismicity record in Prince William Sound, Alaska, USA

    NASA Astrophysics Data System (ADS)

    Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.

    2017-12-01

    Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily the Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year⁻¹), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinarily high-resolution record of paleoseismicity in the region.

  19. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    In order to investigate multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of broadband MUSIC based on ordinary auditory filtering, and then propose a new broadband MUSIC algorithm that uses gammatone auditory filtering with frequency component selection control and detection of the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest at the multichannel bandpass filtering stage. Detection of the direct sound component of the source is also proposed to suppress room reverberation interference; its merits are fast calculation and avoidance of more complex de-reverberation processing algorithms. Besides, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. The proposed method performs well in both simulations and experiments in a real reverberant room. Dynamic multiple sound source localization experiments indicate that the average absolute azimuth error of the proposed algorithm is smaller and that the histogram result has higher angular resolution.
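
    For context, a plain narrowband MUSIC estimator on a uniform line array is sketched below; it omits the gammatone filtering, frequency selection control, and direct-sound detection that distinguish the proposed method, and all geometry and noise values are arbitrary.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    # Plain narrowband MUSIC on a uniform line array.
    rng = np.random.default_rng(2)
    M, d, f, c = 8, 0.04, 2000.0, 343.0              # mics, spacing (m), frequency (Hz), sound speed
    angles_true = np.deg2rad([-20.0, 35.0])          # two sources
    snapshots = 200

    def steering(theta):
        return np.exp(-2j * np.pi * f * d * np.arange(M) * np.sin(theta) / c)

    A = np.stack([steering(t) for t in angles_true], axis=1)
    S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
    noise = 0.1 * (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots)))
    X = A @ S + noise

    R = X @ X.conj().T / snapshots                   # spatial covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = eigvecs[:, :M - 2]                          # noise subspace (drop 2 source eigenvectors)

    grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
    pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
    peaks, _ = find_peaks(pseudo)
    best = peaks[np.argsort(pseudo[peaks])[-2:]]     # two highest peaks of the pseudospectrum
    print("estimated DOAs (deg):", np.sort(np.round(np.rad2deg(grid[best]), 1)))
    ```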

  20. Interior sound field control using generalized singular value decomposition in the frequency domain.

    PubMed

    Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane

    2017-01-01

    The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and, by approximating the sources as monopole and radial dipole transducers, avoids modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effects of the control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.

  1. Acoustic Monitoring of the Arctic Ice Cap

    NASA Astrophysics Data System (ADS)

    Porter, D. L.; Goemmer, S. A.; Chayes, D. N.

    2012-12-01

    Introduction: The monitoring of the Arctic Ice Cap is important economically, tactically, and strategically. In the scenario of ice cap retreat, new paths of commerce open, e.g. waterways from Northern Europe to the Far East. Where ship-going commerce is conducted, the U.S. Navy and U.S. Coast Guard have always stood guard and been prepared to assist from acts of nature and of man. It is imperative that, in addition to measuring the ice from satellites (e.g. ICESat), we have an ability to measure the ice extent, its thickness, and its roughness. These parameters play an important part in the modeling of the ice and of the processes that control its growth or shrinkage and its thickness. The proposed system consists of three subsystems. The first subsystem is an acoustic source, the second is an array of geophones, and the third is a system to supply energy and transmit the results back to the analysis laboratory. The subsystems are described below. We conclude with a plan on how to tackle this project and the payoff to the ice cap modeler and hence the users, i.e. commerce and defense. System: Two historically tested methods to generate a large-amplitude, multi-frequency sound source are explosives and air guns. A newer method, developed and tested by the University of Texas Applied Research Laboratories, is the combustive sound source [Wilson et al., 1995]. The combustive sound source is a submerged combustion chamber that is filled with the byproducts of the electrolysis of sea water, i.e. hydrogen and oxygen, an explosive mixture which is ignited via a spark. Thus, no additional compressors, gases, or explosives need to be transported to the Arctic to generate an acoustic pulse capable of penetrating the sediment and the ice. The second subsystem would be geophones capable of listening in the O(10 Hz) range and transmitting the data back to the laboratory. Thus two arrays of geophones arranged orthogonal to each other, with a range of thousands of kilometers, and a combustive sound source where the two arrays intersect would comprise an ice cap monitoring system. The third subsystem is the energy and telemetry required to run the systems. The geophones are low energy compared to the combustive sound source and might be supplied by batteries and a solar panel (at least for half the year). The combustive sound source needs a large, continuous energy supply. Two energy harvesting ideas, which need further investigation, are a wind turbine and a Stirling engine that runs off the temperature difference between the ocean and the atmosphere. Analysis: It is expected that the recording of the acoustic energy, as it travels through the ice and is detected by the geophones, will provide estimates of ice anisotropy and coherence. These give estimates of the ice roughness and thickness, respectively, and are key parameters for modeling the changes in the ice cap cover in the Arctic. Reference: P. S. Wilson, T. G. Muir, J. A. Behrens, and J. L. Elizey, "Applications of the combustive sound source," J. Acoust. Soc. Am. 97, 3298(A) (1995).

  2. Series expansions of rotating two and three dimensional sound fields.

    PubMed

    Poletti, M A

    2010-12-01

    The cylindrical and spherical harmonic expansions of oscillating sound fields rotating at a constant rate are derived. These expansions are a generalized form of the stationary sound field expansions. The derivations are based on the representation of interior and exterior sound fields using the simple source approach and determination of the simple source solutions with uniform rotation. Numerical simulations of rotating sound fields are presented to verify the theory.

  3. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines.

    PubMed

    Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.
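
    The two spreading laws compared in the study differ only in their range exponent; a back-of-the-envelope comparison (with an invented source level) looks like this:

    ```python
    import numpy as np

    # Spherical vs. cylindrical spreading, the two attenuation models compared
    # in the study; the 1 m reference source level is invented for illustration.
    r = np.array([10.0, 50.0, 100.0, 500.0])        # receiver ranges (m)
    SL = 160.0                                      # hypothetical source level, dB re 1 uPa @ 1 m
    spherical = SL - 20.0 * np.log10(r)             # 20 log10(r): about -6 dB per distance doubling
    cylindrical = SL - 10.0 * np.log10(r)           # 10 log10(r): about -3 dB per distance doubling
    for ri, s_lvl, c_lvl in zip(r, spherical, cylindrical):
        print(f"{ri:6.0f} m   spherical {s_lvl:6.1f} dB   cylindrical {c_lvl:6.1f} dB")
    ```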

  4. Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization

    PubMed Central

    2018-01-01

    Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it. PMID:29440556

  5. Sound Source Localization Using Non-Conformal Surface Sound Field Transformation Based on Spherical Harmonic Wave Decomposition

    PubMed Central

    Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie

    2017-01-01

    Spherical microphone arrays have been paid increasing attention for their ability to locate a sound source with arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. The reconstruction surface and holography surface are conformal surfaces in the conventional sound field transformation based on generalized Fourier transform. When the sound source is on the cylindrical surface, it is difficult to locate by using spherical surface conformal transform. The non-conformal sound field transformation by making a transfer matrix based on spherical harmonic wave decomposition is proposed in this paper, which can achieve the transformation of a spherical surface into a cylindrical surface by using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, the experiment of sound source localization by using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by using the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065

  6. New insights into insect's silent flight. Part II: sound source and noise control

    NASA Astrophysics Data System (ADS)

    Xue, Qian; Geng, Biao; Zheng, Xudong; Liu, Geng; Dong, Haibo

    2016-11-01

    The flapping flight of aerial animals has excellent aerodynamic performance but meanwhile generates low noise. In this study, the unsteady flow and acoustic characteristics of the flapping wing are numerically investigated for three-dimensional (3D) models of the Tibicen linnei cicada at free forward flight conditions. A single cicada wing is modelled as a membrane with prescribed motion reconstructed by Wan et al. (2015). The flow field and acoustic field around the flapping wing are solved with an immersed-boundary-method based incompressible flow solver and a linearized-perturbed-compressible-equations based acoustic solver. The 3D simulation allows examination of both the directivity and the frequency composition of the produced sound in a full space. The mechanism of sound generation of the flapping wing is analyzed through correlations between acoustic signals and flow features. Along with a flexible wing model, a rigid wing model is also simulated. The results from these two cases will be compared to investigate the effects of wing flexibility on sound generation. This study is supported by NSF CBET-1313217 and AFOSR FA9550-12-1-0071.

  7. Sound reduction by metamaterial-based acoustic enclosure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Shanshan; Li, Pei; Zhou, Xiaoming

    In many practical systems, acoustic radiation control on noise sources contained within a finite volume by an acoustic enclosure is of great importance, but difficult to be accomplished at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties by either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for blockage of sound in low frequencies.

  8. Cross-correlation, triangulation, and curved-wavefront focusing of coral reef sound using a bi-linear hydrophone array.

    PubMed

    Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L

    2015-01-01

    A seven element, bi-linear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
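
    The pair-wise cross-correlation stage can be sketched in a few lines; the synthetic snap, sampling rate, and delay below are placeholders, and no array geometry or focusing is included.

    ```python
    import numpy as np

    # Time-difference-of-arrival between two hydrophones via cross-correlation,
    # the first stage of the rough triangulation described above.
    rng = np.random.default_rng(3)
    fs = 48000
    snap = rng.standard_normal(64) * np.hanning(64)           # stand-in for a shrimp snap
    true_delay = 37                                           # delay in samples between hydrophones
    x1 = np.concatenate([snap, np.zeros(1000)]) + 0.05 * rng.standard_normal(1064)
    x2 = np.concatenate([np.zeros(true_delay), snap, np.zeros(1000 - true_delay)]) \
         + 0.05 * rng.standard_normal(1064)

    xc = np.correlate(x2, x1, mode="full")                    # cross-correlation
    lag = np.argmax(xc) - (len(x1) - 1)                       # lag of the correlation peak
    print(f"estimated delay: {lag / fs * 1e3:.3f} ms, true: {true_delay / fs * 1e3:.3f} ms")
    ```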

  9. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers: Comparative study.

    PubMed

    Cambi, Jacopo; Livi, Ludovica; Livi, Walter

    2017-05-01

    Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions.

  10. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    PubMed

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  11. Detecting small scale CO2 emission structures using OCO-2

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Eldering, Annmarie; Verhulst, Kristal R.; Miller, Charles E.; Nguyen, Hai M.; Oda, Tomohiro; O'Dell, Christopher; Rao, Preeti; Kahn, Brian; Crisp, David; Gunson, Michael R.; Sanchez, Robert M.; Ashok, Manasa; Pieri, David; Linick, Justin P.; Yuen, Karen

    2016-04-01

    Localized carbon dioxide (CO2) emission structures cover spatial domains of less than 50 km diameter and include cities and transportation networks, as well as fossil fuel production, upgrading and distribution infrastructure. Anthropogenic sources increasingly upset the natural balance between natural carbon sources and sinks. Mitigation of resulting climate change impacts requires management of emissions, and emissions management requires monitoring, reporting and verification. Space-borne measurements provide a unique opportunity to detect, quantify, and analyze small scale and point source emissions on a global scale. NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory (OCO-2), launched in July 2014, now leads the afternoon constellation of satellites (A-Train). Its continuous swath of 2 to 10 km in width and eight footprints across can slice through coincident emission plumes and may provide momentary cross sections. First OCO-2 results demonstrate that we can detect localized source signals in the form of urban total column averaged CO2 enhancements of ~2 ppm against suburban and rural backgrounds. OCO-2's multi-sounding swath observing geometry reveals intra-urban spatial structures reflected in XCO2 data, previously unobserved from space. The transition from single-shot GOSAT soundings detecting urban/rural differences (Kort et al., 2012) to hundreds of soundings per OCO-2 swath opens up the path to future capabilities enabling urban tomography of greenhouse gases. For singular point sources like coal-fired power plants, we have developed proxy detections of plumes using bands of imaging spectrometers with sensitivity to SO2 in the thermal infrared (ASTER). This approach provides a means to automate plume detection with subsequent matching and mining of OCO-2 data for enhanced detection efficiency and validation.

  12. Experimental study using Nearfield Acoustical Holography of sound transmission through fuselage sidewall structures

    NASA Technical Reports Server (NTRS)

    Maynard, J. D.

    1983-01-01

    This project involves the development of the Nearfield Acoustic Holography (NAH) technique (in particular its extension from single frequency to wideband noise measurement) and its application in a detailed study of the noise radiation characteristics of several samples of aircraft sidewall panels. With the extensive amount of information provided by the NAH technique, the properties of the sound field radiated by the panels may be correlated with their structure, mounting, and excitation (single frequency or wideband, spatially correlated or uncorrelated, structure-borne). The work accomplished at the beginning of this grant period included: (1) calibration of the 256 microphone array and testing of its accuracy; (2) extension of the facility to permit measurements on wideband noise sources, including the addition of high-speed data acquisition hardware and an array processor, and the development of new software; (3) installation of motion picture graphics for correlating panel motion with structure, mounting, radiation, etc.; and (4) development of new holographic data processing techniques.

  13. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines

    DOE PAGES

    Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin C.

    2016-01-06

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. As a result, a comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.

  14. Personal sound zone reproduction with room reflections

    NASA Astrophysics Data System (ADS)

    Olik, Marek

    Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
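
    One standard formulation underlying acoustic contrast control is a generalized eigenvalue problem; the sketch below uses random placeholder transfer matrices rather than any of the room geometries studied in the thesis.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    # Acoustic contrast control in its standard algebraic form: the source
    # weight vector q maximising bright-zone over dark-zone energy is the
    # leading generalized eigenvector of (Gb^H Gb, Gd^H Gd + reg*I).
    rng = np.random.default_rng(5)
    L, Mb, Md = 8, 12, 12                            # sources, bright-zone mics, dark-zone mics
    Gb = rng.standard_normal((Mb, L)) + 1j * rng.standard_normal((Mb, L))
    Gd = rng.standard_normal((Md, L)) + 1j * rng.standard_normal((Md, L))

    A = Gb.conj().T @ Gb                             # bright-zone energy matrix
    B = Gd.conj().T @ Gd + 1e-3 * np.eye(L)          # dark-zone energy matrix (regularised)
    vals, vecs = eigh(A, B)                          # generalized Hermitian eigenproblem
    q = vecs[:, -1]                                  # eigenvector of the largest eigenvalue

    contrast = np.real(q.conj() @ A @ q) / np.real(q.conj() @ B @ q)
    print("achieved acoustic contrast:", round(float(10.0 * np.log10(contrast)), 1), "dB")
    ```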

  15. Marine mammal audibility of selected shallow-water survey sources.

    PubMed

    MacGillivray, Alexander O; Racca, Roberto; Li, Zizheng

    2014-01-01

    Most attention about the acoustic effects of marine survey sound sources on marine mammals has focused on airgun arrays, with other common sources receiving less scrutiny. Sound levels above hearing threshold (sensation levels) were modeled for six marine mammal species and seven different survey sources in shallow water. The model indicated that odontocetes were most likely to hear sounds from mid-frequency sources (fishery, communication, and hydrographic systems), mysticetes from low-frequency sources (sub-bottom profiler and airguns), and pinnipeds from both mid- and low-frequency sources. High-frequency sources (side-scan and multibeam) generated the lowest estimated sensation levels for all marine mammal species groups.

  16. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    PubMed

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
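
    A useful reference when judging how "ideal" a synthesized diffuse field is: in a perfectly diffuse field the magnitude-squared coherence between two pressure sensors spaced d apart follows sinc²(kd). The spacing and frequencies below are illustrative.

    ```python
    import numpy as np

    # Ideal-diffuse-field reference (a standard textbook result, not taken from
    # the paper): coherence between two pressure sensors is (sin(kd)/(kd))^2.
    c = 343.0
    d = 0.2                                          # sensor spacing (m)
    freqs = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0])
    k = 2.0 * np.pi * freqs / c
    gamma2 = np.sinc(k * d / np.pi) ** 2             # np.sinc(x) = sin(pi*x)/(pi*x)
    for f, g in zip(freqs, gamma2):
        print(f"{f:6.0f} Hz   gamma^2 = {g:.3f}")
    ```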

  17. The influence of underwater data transmission sounds on the displacement behaviour of captive harbour seals (Phoca vitulina).

    PubMed

    Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V

    2006-02-01

    To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 microPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 microPa) of the sounds were calculated, using a guideline sound propagation model for shallow water. The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.

  18. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach.

    PubMed

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-03-22

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.

  19. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach †

    PubMed Central

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-01-01

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches. PMID:27011187

  20. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers

    PubMed Central

    Cambi, Jacopo; Livi, Ludovica; Livi, Walter

    2017-01-01

    Objectives Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. Methods This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. Results The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Conclusion Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions. PMID:28690888

  1. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl

    PubMed Central

    Baxter, Caitlin S.; Takahashi, Terry T.

    2013-01-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801

  2. Comparison of sound reproduction using higher order loudspeakers and equivalent line arrays in free-field conditions.

    PubMed

    Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D

    2014-07-01

    Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to that of a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
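
    A compact way to state the phase-mode picture (standard two-dimensional cylindrical-harmonic form, with arbitrary coefficients a_n; the Hankel-function kind depends on the time convention in use):

    ```latex
    % Phase-mode form of an N-th order source in two dimensions:
    \[
      p(r,\phi,k) \;=\; \sum_{n=-N}^{N} a_n\, H_n(kr)\, e^{\mathrm{i} n \phi},
    \]
    % i.e. 2N+1 orthogonal patterns, each term carrying a spiral constant-phase front.
    ```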

  3. Thermoacoustic sound projector: exceeding the fundamental efficiency of carbon nanotubes.

    PubMed

    Aliev, Ali E; Codoluto, Daniel; Baughman, Ray H; Ovalle-Robles, Raquel; Inoue, Kanzan; Romanov, Stepan A; Nasibulin, Albert G; Kumar, Prashant; Priya, Shashank; Mayo, Nathanael K; Blottman, John B

    2018-08-10

    The combination of the smooth, continuous sound spectra produced by a sound source having no vibrating parts, the nanoscale thickness of a flexible active layer, and the feasibility of creating large, conformal projectors provokes interest in thermoacoustic phenomena. However, at low frequencies, the sound pressure level (SPL) and the sound generation efficiency of an open carbon nanotube sheet (CNTS) are low. In addition, the nanoscale thickness of the fragile heating elements, their high sensitivity to the environment and the high surface temperatures practical for thermoacoustic sound generation necessitate protective encapsulation of a freestanding CNTS in inert gases. Encapsulation provides the desired increase of sound pressure towards low frequencies. However, the protective enclosure restricts heat dissipation from the resistively heated CNTS and the interior of the encapsulated device. Here, the heat dissipation issue is addressed by short pulse excitations of the CNTS. An overall increase of energy conversion efficiency by more than four orders of magnitude (from 10⁻⁵ to 0.1) and an SPL of 120 dB re 20 μPa @ 1 m in air and 170 dB re 1 μPa @ 1 m in water were demonstrated. The short pulse excitation provides a stable linear increase of output sound pressure with substantially increased input power density (>2.5 W cm⁻²). We provide an extensive experimental study of pulse excitations in different thermodynamic regimes for freestanding CNTSs with varying thermal inertias (single-walled and multiwalled with varying diameters and numbers of superimposed sheet layers) in vacuum and in air. The acoustical and geometrical parameters providing further enhancement of energy conversion efficiency are discussed.

  4. Structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.

    1994-01-01

    The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.
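
    The far-field step rests on Lighthill's acoustic analogy, quoted here in its standard form; the LES supplies the time-dependent Lighthill stress tensor on the right-hand side.

    ```latex
    % Lighthill's acoustic analogy:
    \[
      \frac{\partial^{2}\rho'}{\partial t^{2}} \;-\; c_{0}^{2}\,\nabla^{2}\rho'
      \;=\; \frac{\partial^{2} T_{ij}}{\partial x_{i}\,\partial x_{j}},
      \qquad
      T_{ij} \;=\; \rho u_{i} u_{j} \;+\; \bigl(p' - c_{0}^{2}\rho'\bigr)\delta_{ij} \;-\; \tau_{ij}.
    \]
    ```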

  5. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-sheng R.; Allen, Christopher S.

    2009-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources and later with experimental data. In FY09, the physical mockup developed in FY08, with an interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match the reverberation levels of the ISS US Lab interior for the speech frequency bands, i.e., 0.5k, 1k, 2k, and 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of the Thinsulate was modeled using three methods: the Sabine equation with the measured mockup interior reverberation time T60, a layup model based on past impedance tube testing, and the layup model plus an air absorption correction. The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, despite its more complicated shape. Additionally in FY09, a background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
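
    Two of the bookkeeping steps mentioned above, sound power from an enclosing intensity scan and Sabine absorption from a measured T60, can be sketched with made-up numbers:

    ```python
    import numpy as np

    # (1) Source sound power from intensity integrated over the enclosing
    #     measurement surfaces (the bottom surface assumed blocked), then
    # (2) equivalent Sabine absorption area from a measured reverberation time.
    intensity = {"top": 2.1e-4, "front": 1.5e-4, "back": 1.4e-4,
                 "left": 1.2e-4, "right": 1.3e-4}                        # W/m^2, illustrative
    areas = {"top": 0.25, "front": 0.20, "back": 0.20, "left": 0.20, "right": 0.20}  # m^2
    W = sum(intensity[s] * areas[s] for s in intensity)
    print("sound power level:", round(10.0 * np.log10(W / 1e-12), 1), "dB re 1 pW")

    V, T60 = 17.0, 0.45                              # mockup volume (m^3) and T60 (s), illustrative
    print("equivalent absorption area:", round(0.161 * V / T60, 2), "m^2 Sabine")
    ```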

  6. Underwater sound of rigid-hulled inflatable boats.

    PubMed

    Erbe, Christine; Liong, Syafrin; Koessler, Matthew Walter; Duncan, Alec J; Gourlay, Tim

    2016-06-01

    Underwater sound of rigid-hulled inflatable boats was recorded 142 times in total, over 3 sites: 2 in southern British Columbia, Canada, and 1 off Western Australia. Underwater sound peaked between 70 and 400 Hz, exhibiting strong tones in this frequency range related to engine and propeller rotation. Sound propagation models were applied to compute monopole source levels, with the source assumed 1 m below the sea surface. Broadband source levels (10-48 000 Hz) increased from 134 to 171 dB re 1 μPa @ 1 m with speed from 3 to 16 m/s (10-56 km/h). Source power spectral density percentile levels and 1/3 octave band levels are given for use in predictive modeling of underwater sound of these boats as part of environmental impact assessments.
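
    The back-propagation step described above amounts to adding the modelled propagation loss to the received level; a minimal sketch assuming simple spherical spreading plus a linear absorption term (the study itself used proper propagation models, not this simplification):

      import math

      def source_level(received_level_db, range_m, alpha_db_per_m=0.0):
          """Monopole source level re 1 uPa @ 1 m, assuming spherical spreading plus absorption."""
          propagation_loss = 20.0 * math.log10(range_m) + alpha_db_per_m * range_m
          return received_level_db + propagation_loss

      # Example: a hypothetical received level of 110 dB re 1 uPa at 500 m range.
      print(f"estimated source level: {source_level(110.0, 500.0):.1f} dB re 1 uPa @ 1 m")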

  7. Control of Toxic Chemicals in Puget Sound, Phase 3: Study Of Atmospheric Deposition of Air Toxics to the Surface of Puget Sound

    DTIC Science & Technology

    2007-01-01

    Atmospheric deposition directly to Puget Sound was an important source of PAHs, polybrominated diphenyl ethers (PBDEs), and heavy metals. The effect of meteorological conditions (e.g., temperature inversions) on air quality during the wet season was also considered, and a semi-quantitative apportionment study permitted a first-order characterization of sources.

  8. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).

  9. Acoustic signatures of sound source-tract coupling.

    PubMed

    Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B

    2011-04-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated to the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society
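
    The abstract does not give the model equations, but the general structure it describes (a self-sustained source whose output is fed back after a tract propagation delay, giving a delay differential equation) can be sketched numerically. In the toy sketch below, the oscillator form, feedback gain, and delay are illustrative placeholders and not the authors' model.

      import numpy as np

      # Illustrative source-tract model: a self-sustained oscillator (source) with delayed
      # tract feedback, x''(t) = mu*(1 - x^2)*x'(t) - w0^2*x(t) + g*x(t - tau).
      mu, w0, g = 2000.0, 2 * np.pi * 1000.0, 2.0e5   # placeholder parameters
      tau, dt, T = 0.5e-3, 1e-6, 20e-3                # delay, time step, duration (s)

      n, d = int(T / dt), int(tau / dt)
      x, v = np.zeros(n), np.zeros(n)
      x[0] = 1e-3                                      # small initial perturbation

      for i in range(n - 1):
          x_delayed = x[i - d] if i >= d else 0.0      # history assumed zero before t = 0
          a = mu * (1.0 - x[i] ** 2) * v[i] - w0 ** 2 * x[i] + g * x_delayed
          v[i + 1] = v[i] + dt * a                     # semi-implicit Euler step
          x[i + 1] = x[i] + dt * v[i + 1]

      print("peak amplitude of the simulated source oscillation:", np.max(np.abs(x)))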

  10. Acoustic signatures of sound source-tract coupling

    PubMed Central

    Arneodo, Ezequiel M.; Perl, Yonatan Sanz; Mindlin, Gabriel B.

    2014-01-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated to the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced “frequency jumps,” enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. PMID:21599213

  11. Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse

    PubMed Central

    Moser, Tobias; Neef, Andreas; Khimich, Darina

    2006-01-01

    Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948

  12. Occupational noise exposure during endourologic procedures.

    PubMed

    Soucy, Frédéric; Ko, Raymond; Denstedt, John D; Razvi, Hassan

    2008-08-01

    Long-term noise exposure in the workplace is a known cause of hearing loss. There has been limited study on the potential harm related to shock wave lithotripsy (SWL) or intracorporeal devices to patients and operating room personnel. We used a digital sound meter to measure decibel levels in the operating room during several endourologic procedures. The decibel levels were recorded during SWL (Storz SLX-F2), percutaneous nephrolithotomy using single- and dual-probe ultrasonic lithotripters (Olympus LUS-2, CyberWand), and during ureteroscopy using the Versa Pulse Holmium:YAG laser. Findings were compared with the U.S. Department of Labor Occupational Safety and Health Administration and Canadian Centre for Occupational Health recommendations on permissible noise levels in the workplace. The background sound level in the operating room prior to endourologic procedures ranged between 58 and 60 dB. In the SWL control room, 5 m from the source, the mean sound level was 68 dB (range 64-75) during treatment. The mean corresponding decibel level recorded at the patient's head during SWL was 77 dB (range 73-83). Noises produced by intracorporeal lithotripters were recorded at the surgeon's head, 2 m from the source. Measurements of the CyberWand (dual-probe) device revealed a higher mean decibel reading of 93 dB (range 85-102). Noise levels recorded for the Olympus LUS-2 (single-probe) ultrasound and the holmium laser were 65 dB (62-68) and 60 dB (58-62), respectively. Although we noted that patients and urologists may be exposed to significant noise levels during endourologic procedures, the duration of exposure is short. This risk appears to be minimal, based on current occupational guidelines, for most operating personnel.
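
    For context, the occupational guidelines mentioned above can be expressed as a permissible exposure duration; a minimal sketch using the general OSHA 5-dB exchange rate and 90-dBA criterion (the standard rule of thumb, not figures taken from this study):

      def osha_permissible_hours(level_dba, criterion=90.0, exchange_rate=5.0):
          """Permissible exposure time in hours under the OSHA 5-dB exchange rate."""
          return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

      # Example: the mean levels reported above for the dual-probe lithotripter,
      # the SWL patient position, and the single-probe ultrasound.
      for level in (93.0, 77.0, 65.0):
          print(f"{level:.0f} dBA -> {osha_permissible_hours(level):.1f} h permissible")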

  13. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
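
    A minimal frequency-domain analogue of the separation idea (not the time-domain ITDESM itself) is sketched below: the mixed pressures at the microphones are written through Green's-function matrices from the equivalent sources of both physical sources, the combined strengths are solved in a least-squares sense, and the field of one source alone is re-synthesised from its own strengths. Geometry, frequency, and strengths are illustrative placeholders.

      import numpy as np

      def greens(src, rec, k):
          """Free-field Green's functions between source points and receiver points."""
          r = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=-1)
          return np.exp(-1j * k * r) / (4 * np.pi * r)

      rng = np.random.default_rng(0)
      k = 2 * np.pi * 1500.0 / 343.0                 # wavenumber at 1500 Hz (illustrative)
      eq1 = rng.uniform(0.0, 0.1, (5, 3))            # equivalent sources placed on source 1
      eq2 = rng.uniform(0.9, 1.0, (5, 3))            # equivalent sources placed on source 2
      mics = rng.uniform(0.3, 0.7, (24, 3))          # measurement microphones between the sources

      # Hypothetical true strengths and the resulting mixed pressure at the microphones.
      q_true = rng.standard_normal(10) + 1j * rng.standard_normal(10)
      G = np.hstack([greens(eq1, mics, k), greens(eq2, mics, k)])
      p_mixed = G @ q_true

      # Solve for all equivalent source strengths at once, then keep only those belonging
      # to source 1 to reconstruct the pressure that source 1 generates alone.
      q_est, *_ = np.linalg.lstsq(G, p_mixed, rcond=None)
      p_source1 = greens(eq1, mics, k) @ q_est[:5]
      error = np.linalg.norm(p_source1 - greens(eq1, mics, k) @ q_true[:5]) / np.linalg.norm(p_mixed)
      print(f"relative error of the separated source-1 field: {error:.2e}")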

  14. Intensity-invariant coding in the auditory system.

    PubMed

    Barbour, Dennis L

    2011-11-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Measures against mechanical noise from large wind turbines: A design guide

    NASA Astrophysics Data System (ADS)

    Ljunggren, Sten; Johansson, Melker

    1991-06-01

    The noise generated by the machinery of the two Swedish prototypes contains pure tones which are very important with respect to the environmental impact. A discussion of the results of noise measurements carried out at these turbines is presented; it is meant to serve as a guide to predicting and controlling the noise around a large wind turbine during the design stage. The design targets are discussed, stressing the importance of the audibility of pure tones and not only the annoyance; a simple criterion is cited. The main noise source is the gearbox, and a simple empirical expression for the sound power level is shown to give good agreement with the measurement results. The influence of the gearbox design on the noise is discussed in some detail. Formulas for the prediction of the airborne sound transmission to the ground outside the nacelle are presented, together with a number of empirical data on the sound reduction indices for single and double constructions. The structure-borne noise transmission is discussed.

  16. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources for river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weakness of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5 %. For cases with multiple point sources and multiple variables, the results contain some errors because many possible combinations of the pollution sources exist. However, when previous experience is used to narrow the search scope, the relative errors of the identification results are less than 5 %, which shows that the established source identification model can be used to direct emergency responses.
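
    As a hedged illustration of how such an objective function can be constructed, the sketch below uses the standard one-dimensional instantaneous point-source advection-dispersion solution as the analytic forward model; the river parameters, station locations, and observations are invented placeholders rather than the paper's data, and the genetic search itself is omitted.

      import numpy as np

      def concentration(x, t, mass, x0, u=0.5, D=5.0, A=20.0):
          """Standard 1-D advection-dispersion solution for an instantaneous point source."""
          return (mass / (A * np.sqrt(4 * np.pi * D * t))) * \
                 np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

      # Hypothetical observations at several stations and times (would come from monitoring data).
      stations = np.array([200.0, 400.0, 600.0])      # m downstream
      times = np.array([600.0, 1200.0, 1800.0])       # s after the spill
      true_mass, true_x0 = 50.0, 120.0                # kg and m, the unknowns to be identified
      obs = np.array([[concentration(x, t, true_mass, true_x0) for x in stations] for t in times])

      def objective(candidate):
          """Sum-of-squares misfit that the genetic algorithm would minimise."""
          mass, x0 = candidate
          pred = np.array([[concentration(x, t, mass, x0) for x in stations] for t in times])
          return np.sum((pred - obs) ** 2)

      print("misfit at the true source parameters:", objective((true_mass, true_x0)))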

  17. Numerical Models for Sound Propagation in Long Spaces

    NASA Astrophysics Data System (ADS)

    Lai, Chenly Yuen Cheung

    Both reverberation time and steady-state sound field are the key elements for assessing the acoustic condition in an enclosed space. They affect the noise propagation, speech intelligibility, clarity index, and definition. Since the sound field in a long space is non-diffuse, classical room acoustics theory does not apply in this situation. The ray tracing technique and the image source method are two common models for determining both reverberation time and steady-state sound field in long enclosures. Although both models can give an accurate estimate of reverberation times and steady-state sound fields directly or indirectly, they often involve time-consuming calculations. In order to simplify the acoustic consideration, a theoretical formulation has been developed for predicting both steady-state sound fields and reverberation times in street canyons. The prediction model is further developed to predict the steady-state sound field in a long enclosure. Apart from the straight long enclosure, there are other variations such as a cross junction, a long enclosure with a T-intersection, and a U-turn long enclosure. In the present study, theoretical and experimental investigations were conducted to develop formulae for predicting reverberation times and steady-state sound fields in a junction of a street canyon and in a long enclosure with a T-intersection. The theoretical models are validated by comparing the numerical predictions with published experimental results. The theoretical results are also compared with precise indoor measurements and large-scale outdoor experimental results. Most previous acoustical studies of long enclosures have focused on monopole sound sources. Besides non-directional noise sources, however, many noise sources in long enclosures are dipole-like, such as train noise and fan noise. In order to study the characteristics of directional noise sources, a review of available dipole sources was conducted, and a dipole source was constructed and subsequently used for experimental studies. In addition, a theoretical model was developed for predicting dipole sound fields. The theoretical model can be used to study the effect of a dipole source on speech intelligibility in long enclosures.

  18. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between various channels of a microphone array of directional shotgun microphones. The amplitude differences will be used to locate multiple performers and reproduce their voices, which were recorded at close distance with lavalier microphones, spatially corrected using a loudspeaker rendering system. In order to track multiple sound sources in parallel the information gained from the lavalier microphones will be utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.

  19. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668

  20. Quantitative measurement of pass-by noise radiated by vehicles running at high speeds

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin

    2011-03-01

    It has been a challenge in the past to accurately locate and quantify the pass-by noise radiated by running vehicles. A system based on a microphone array is developed in the current work to address this problem. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves a high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of the vehicle running at different speeds are successfully identified by this method.

  1. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

    Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.

  2. Evolutionary trends in directional hearing

    PubMed Central

    Carr, Catherine E.; Christensen-Dalsgaard, Jakob

    2016-01-01

    Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850

  3. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.

    PubMed

    Kidd, Gerald

    2017-10-17

    Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.

  4. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    PubMed Central

    2017-01-01

    Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601621 PMID:29049603

  5. Relation of sound intensity and accuracy of localization.

    PubMed

    Farrimond, T

    1989-08-01

    Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

  6. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
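
    A minimal sketch of the comparison is given below, treating the reproduction problem as real-valued for simplicity and using scikit-learn's lasso and elastic-net solvers; the plant matrix and target pressures are random placeholders, not a measured loudspeaker/microphone setup.

      import numpy as np
      from sklearn.linear_model import Lasso, ElasticNet

      rng = np.random.default_rng(1)
      n_mics, n_sources = 40, 64                         # control microphones, candidate loudspeakers
      G = rng.standard_normal((n_mics, n_sources))       # placeholder plant (propagation) matrix
      p_target = rng.standard_normal(n_mics)             # placeholder target pressures at the microphones

      # Lasso: pure l1 penalty; elastic-net: mix of l1 and l2 penalties set by l1_ratio.
      lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=50000).fit(G, p_target)
      enet = ElasticNet(alpha=0.05, l1_ratio=0.7, fit_intercept=False, max_iter=50000).fit(G, p_target)

      for name, model in (("lasso", lasso), ("elastic-net", enet)):
          active = np.count_nonzero(model.coef_)
          err = np.linalg.norm(G @ model.coef_ - p_target) / np.linalg.norm(p_target)
          print(f"{name}: {active} active sources, relative reproduction error {err:.2f}")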

  7. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    PubMed

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross validation of the quality models extracted from Paris data was carried out by conducting the same survey in Milan. The proposed sound quality general model is correlated with the real perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed by Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance on source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.

  8. Analysis of impact/impulse noise for predicting noise induced hearing loss

    NASA Astrophysics Data System (ADS)

    Vipperman, Jeffrey S.; Prince, Mary M.; Flamm, Angela M.

    2003-04-01

    Studies indicate that the statistical properties and temporal structure of the sound signal are important in determining the extent of hearing hazard. As part of a pilot study to examine hearing conservation program effectiveness, NIOSH collected noise samples of impact noise sources in an automobile stamping plant, focusing on jobs with peak sound levels (Lpk) of greater than 120 dB. Digital tape recordings of sounds were collected using a Type I Precision Sound Level Meter and microphone connected to a DAT tape recorder. The events were archived and processed as .wav files to extract single events of interest on CD-R media and CD audio media. A preliminary analysis of sample waveform files was conducted to characterize each event using metrics such as the number of impulses per unit time, the repetition rate or temporal pattern of these impulses, index of peakedness, crest factor, kurtosis, coefficient of kurtosis, rise time, fall time, and peak time. The spectrum, duration, and inverse of duration for each waveform were also computed. Finally, the data were evaluated with the Auditory Hazard Assessment Algorithm (AHAAH). Improvements to data collection for a future study examining different strategies for evaluating industrial noise exposure will be discussed.
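
    A small sketch of how a few of the listed metrics can be computed from a single extracted event follows; the signal here is synthesized as a stand-in for one of the archived .wav events, and scipy's kurtosis is used with fisher=False so that a Gaussian signal gives a value of 3.

      import numpy as np
      from scipy.stats import kurtosis

      # Synthetic stand-in for one extracted impact event (in practice this would be
      # read from one of the archived .wav files, e.g. with scipy.io.wavfile.read).
      rate = 48000
      t = np.arange(0, 0.25, 1.0 / rate)
      rng = np.random.default_rng(4)
      x = 0.02 * rng.standard_normal(t.size)                     # background noise
      x += np.exp(-t / 0.01) * np.sin(2 * np.pi * 3000 * t)      # decaying impact transient

      peak = np.max(np.abs(x))
      rms = np.sqrt(np.mean(x ** 2))
      crest_factor_db = 20 * np.log10(peak / rms)                # peak-to-RMS ratio in dB
      kurt = kurtosis(x, fisher=False)                           # "peakedness" of the amplitude distribution
      duration_ms = 1000.0 * t.size / rate

      print(f"duration {duration_ms:.0f} ms, crest factor {crest_factor_db:.1f} dB, kurtosis {kurt:.1f}")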

  9. Response of cat cerebellar vermis induced by sound. I. Influence of drugs on responses of single units.

    PubMed

    Jastreboff, P J; Tarnecki, R

    1975-01-01

    Experiments were done on cats under Chloralose and/or Nembutal anesthesia. A click was used as a standard acoustic stimulus. The types of responses of single units from cerebellar vermis lobuli V-VII were analyzed. At least four different types of single unit reactions were observed, and one of these - oscillatory - was produced by the presence of Flaxedil simultaneously with Chloralose. The system that controls the activity of the middle-ear muscles is suspected to be the source of the oscillatory pattern of the cerebellar response. Latencies were found to be constant, independent of anesthesia, but it was necessary to have a low level of Nembutal anesthesia because of the overriding inhibitory influence of Nembutal.

  10. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds II: single-neuron recordings

    PubMed Central

    Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David

    2014-01-01

    Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782

  11. Airborne sound transmission loss characteristics of wood-frame construction

    NASA Astrophysics Data System (ADS)

    Rudder, F. F., Jr.

    1985-03-01

    This report summarizes the available data on the airborne sound transmission loss properties of wood-frame construction and evaluates the methods for predicting the airborne sound transmission loss. The first part of the report comprises a summary of sound transmission loss data for wood-frame interior walls and floor-ceiling construction. Data bases describing the sound transmission loss characteristics of other building components, such as windows and doors, are discussed. The second part of the report presents the prediction of the sound transmission loss of wood-frame construction. Appropriate calculation methods are described both for single-panel and for double-panel construction with sound absorption material in the cavity. With available methods, single-panel construction and double-panel construction with the panels connected by studs may be adequately characterized. Technical appendices are included that summarize laboratory measurements, compare measurement with theory, describe details of the prediction methods, and present sound transmission loss data for common building materials.
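
    One of the standard single-panel prediction tools referred to above is the mass law; a common field-incidence approximation (a general textbook relation, not a formula quoted from this report) is

      TL \approx 20 \log_{10}\!\left(m'' f\right) - 47\ \text{dB},

    where m'' is the panel surface mass in kg/m^2 and f the frequency in Hz, valid well below the panel's critical (coincidence) frequency.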

  12. Development of an ICT-Based Air Column Resonance Learning Media

    NASA Astrophysics Data System (ADS)

    Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut

    2016-08-01

    Commonly, the sound source used in the air column resonance experiment is a tuning fork, which has the disadvantage of suboptimal resonance results because the sound it produces quickly becomes weaker. In this study, tones of varying frequency were made using the Audacity software and then stored in a mobile phone, which served as the sound source. One advantage of this sound source is the stability of the resulting sound, enabling it to produce an equally strong sound throughout the experiment. The movement of water in a glass tube mounted on the resonance apparatus and the tone emitted by the mobile phone were recorded using a video camera. The first, second, and third resonances were recorded for each tone frequency mentioned. The resulting sound lasts longer, so it can be used for the first, second, third, and subsequent resonance experiments. This study aimed to (1) explain how to create tones that can substitute for the tuning fork sound used in air column resonance experiments, (2) illustrate the sound wave that occurred in the first, second, and third resonance in the experiment, and (3) determine the speed of sound in air. This study used an experimental method. It was concluded that: (1) substitute tones for a tuning fork sound can be made using the Audacity software; (2) the form of the sound waves that occurred in the first, second, and third resonance in the air column can be drawn based on the results of the video recording of the air column resonance; and (3) based on the experimental results, the speed of sound in air is 346.5 m/s, while based on the chart analysis with the Logger Pro software, the speed of sound in air is 343.9 ± 0.3171 m/s.
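
    The speed-of-sound computation described above follows from the closed-tube resonance condition: successive resonance lengths are separated by half a wavelength. A minimal sketch with made-up resonance lengths rather than the study's measurements:

      # Closed-at-one-end air column: resonances at L_n ~ (2n - 1) * wavelength / 4,
      # so consecutive resonance lengths differ by wavelength / 2.
      frequency = 500.0                       # Hz, tone played from the mobile phone (placeholder)
      lengths = [0.17, 0.515, 0.86]           # m, first/second/third resonance lengths (placeholders)

      spacings = [b - a for a, b in zip(lengths, lengths[1:])]
      wavelength = 2 * sum(spacings) / len(spacings)
      speed_of_sound = frequency * wavelength
      print(f"estimated speed of sound: {speed_of_sound:.1f} m/s")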

  13. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796

  14. Improved Mirror Source Method in Roomacoustics

    NASA Astrophysics Data System (ADS)

    Mechel, F. P.

    2002-10-01

    Most authors in room acoustics qualify the mirror source method (MS-method) as the only exact method to evaluate sound fields in auditoria. But evidently nobody applies it. The reason for this discrepancy is the abundantly high numbers of needed mirror sources reported in the literature, although such estimations of needed numbers of mirror sources are mostly used to justify more or less heuristic modifications of the MS-method. The present, intentionally tutorial article accentuates the analytical foundations of the MS-method, whereby the number of needed mirror sources is already reduced. Further, the task of field evaluation in three-dimensional spaces is reduced to a sequence of tasks in two-dimensional room edges. This not only allows the use of easier geometrical computations in two dimensions, but also the sound field in corner areas can be represented by a single (directional) source sitting on the corner line, so that only this "corner source" must be mirror-reflected in the further process. This procedure gives a drastic reduction of the number of needed equivalent sources. Finally, the traditional MS-method is not applicable in rooms with convex corners (the angle between the corner flanks, measured on the room side, exceeds 180°). In such cases, the MS-method is combined below with the second principle of superposition (PSP). It reduces the scattering task at convex corners to two sub-tasks between one flank and the median plane of the room wedge, i.e., always in concave corner areas where the MS-method can be applied.
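
    To make the basic idea concrete, the sketch below sums the direct sound and the six first-order mirror (image) sources of a shoebox room, which is the elementary step that the improved method then reorganises; the room size, positions, reflection coefficient, and frequency are illustrative only.

      import numpy as np

      room = np.array([8.0, 6.0, 3.5])        # shoebox room dimensions (m), illustrative
      src = np.array([2.0, 3.0, 1.5])         # source position (m)
      rec = np.array([6.0, 2.0, 1.2])         # receiver position (m)
      beta = 0.85                              # wall reflection coefficient (illustrative)
      k = 2 * np.pi * 250.0 / 343.0            # wavenumber at 250 Hz

      def mirror(point, axis, wall):
          """Mirror a point across the wall at coordinate 0 (wall=0) or at room[axis] (wall=1)."""
          img = point.copy()
          img[axis] = -point[axis] if wall == 0 else 2 * room[axis] - point[axis]
          return img

      # Direct source plus the six first-order images (one per wall).
      sources = [(src, 1.0)] + [(mirror(src, ax, w), beta) for ax in range(3) for w in (0, 1)]

      pressure = 0.0 + 0.0j
      for pos, amp in sources:
          r = np.linalg.norm(rec - pos)
          pressure += amp * np.exp(-1j * k * r) / (4 * np.pi * r)

      print("first-order image-source pressure estimate:", pressure)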

  15. Assessment of Hydroacoustic Propagation Using Autonomous Hydrophones in the Scotia Sea

    DTIC Science & Technology

    2010-09-01

    The remote area of the Atlantic Ocean near the Antarctic Peninsula and the South... hydroacoustic blind spot. To investigate the sound propagation and interferences affected by these landmasses in the vicinity of the Antarctic polar... from large icebergs (near-surface sources) were utilized as natural sound sources. Surface sound sources, e.g., ice-related events, tend to suffer less

  16. Active control of noise on the source side of a partition to increase its sound isolation

    NASA Astrophysics Data System (ADS)

    Tarabini, Marco; Roure, Alain; Pinhede, Cedric

    2009-03-01

    This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performances and stability were tested with different array configurations, loudspeaker directivities and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with the factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.
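
    The decentralized controller itself is not specified in detail in the abstract; as a hedged single-channel illustration of the FxLMS family it builds on, the toy loop below adapts a control filter using the reference signal filtered by an (assumed known) secondary-path estimate. All paths and parameters are placeholders.

      import numpy as np

      rng = np.random.default_rng(2)
      N = 20000
      x = rng.standard_normal(N)                            # reference signal (noise near the wall)
      primary = np.array([0.0, 0.0, 0.0, 0.5, 0.3, -0.2])   # toy primary path to the error microphone
      secondary = np.array([0.0, 0.0, 0.6, 0.2])            # toy secondary path (loudspeaker -> error mic)
      d = np.convolve(x, primary)[:N]                       # disturbance at the error microphone

      L, mu = 16, 0.01                                      # control filter length and step size
      w = np.zeros(L)                                       # adaptive control filter
      xbuf = np.zeros(L)                                    # recent reference samples
      fxbuf = np.zeros(L)                                   # recent filtered-reference samples
      ybuf = np.zeros(len(secondary))                       # recent control outputs
      e = np.zeros(N)

      for n in range(N):
          xbuf = np.roll(xbuf, 1)
          xbuf[0] = x[n]
          y = w @ xbuf                                      # control signal for the secondary source
          ybuf = np.roll(ybuf, 1)
          ybuf[0] = y
          e[n] = d[n] + secondary @ ybuf                    # residual at the error microphone
          fxbuf = np.roll(fxbuf, 1)
          fxbuf[0] = secondary @ xbuf[:len(secondary)]      # reference filtered by the secondary-path estimate
          w -= mu * e[n] * fxbuf                            # FxLMS update

      print("mean squared error, first vs. last 1000 samples:",
            np.mean(e[:1000] ** 2), np.mean(e[-1000:] ** 2))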

  17. The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing

    2018-03-01

    In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term based on previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the deviation of the radiated sound power level is found to be less than 3 dB for the narrowband spectrum and less than 1 dB for the 1/3-octave spectrum. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also for measurement of radiated sound power from complicated sources in non-anechoic tanks.

  18. Modeling and analysis of secondary sources coupling for active sound field reduction in confined spaces

    NASA Astrophysics Data System (ADS)

    Montazeri, Allahyar; Taylor, C. James

    2017-10-01

    This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior are solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops the analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of loudspeakers through acoustic modes of the enclosure will change the strength and hence the driving voltage signal applied to the secondary loudspeakers. The practical significance of this model is to provide a better insight on the performance of the sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones are placed in a fraction of wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance depending on the electromechanical properties of the loudspeakers.

  19. Reduced audiovisual recalibration in the elderly.

    PubMed

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.

  20. Reduced audiovisual recalibration in the elderly

    PubMed Central

    Chan, Yu Man; Pianta, Michael J.; McKendrick, Allison M.

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22–32 years old) and 15 older (64–74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age. PMID:25221508

  1. New Global Bathymetry and Topography Model Grids

    NASA Astrophysics Data System (ADS)

    Smith, W. H.; Sandwell, D. T.; Marks, K. M.

    2008-12-01

    A new version of the "Smith and Sandwell" global marine topography model is available in two formats. A one-arc-minute Mercator projected grid covering latitudes to +/- 80.738 degrees is available in the "img" file format. Also available is a 30-arc-second version in latitude and longitude coordinates from pole to pole, supplied as tiles covering the same areas as the SRTM30 land topography data set. The new effort follows the Smith and Sandwell recipe, using publicly available and quality controlled single- and multi-beam echo soundings where possible and filling the gaps in the oceans with estimates derived from marine gravity anomalies observed by satellite altimetry. The altimeter data have been reprocessed to reduce the noise level and improve the spatial resolution [see Sandwell and Smith, this meeting]. The echo soundings database has grown enormously with new infusions of data from the U.S. Naval Oceanographic Office (NAVO), the National Geospatial-intelligence Agency (NGA), hydrographic offices around the world volunteering through the International Hydrographic Organization (IHO), and many other agencies and academic sources worldwide. These new data contributions have filled many holes: 50% of ocean grid points are within 8 km of a sounding point, 75% are within 24 km, and 90% are within 57 km. However, in the remote ocean basins some gaps still remain: 5% of the ocean grid points are more than 85 km from the nearest sounding control, and 1% are more than 173 km away. Both versions of the grid include a companion grid of source file numbers, so that control points may be mapped and traced to sources. We have compared the new model to multi-beam data not used in the compilation and find that 50% of differences are less than 25 m, 95% of differences are less than 130 m, but a few large differences remain in areas of poor sounding control and large-amplitude gravity anomalies. Land values in the solution are taken from SRTM30v2, GTOPO30 and ICESAT data. GEBCO has agreed to adopt this model and begin updating it in 2009. Ongoing tasks include building an uncertainty model and including information from the latest IBCAO map of the Arctic Ocean.

  2. Consistent modelling of wind turbine noise propagation from source to receiver.

    PubMed

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  3. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE PAGES

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; ...

    2017-11-28

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  4. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  5. Binaural room simulation

    NASA Technical Reports Server (NTRS)

    Lehnert, H.; Blauert, Jens; Pompetzki, W.

    1991-01-01

    In every-day listening the auditory event perceived by a listener is determined not only by the sound signal that a sound emits but also by a variety of environmental parameters. These parameters are the position, orientation and directional characteristics of the sound source, the listener's position and orientation, the geometrical and acoustical properties of surfaces which affect the sound field and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving himself physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of the virtual sound sources. Each of the virtual sources emits a certain signal which is correlated but not necessarily identical with the signal emitted by the direct sound source. If source and receiver are non moving, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener' s eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and representing the results via headphones.

  6. Broad band sound from wind turbine generators

    NASA Technical Reports Server (NTRS)

    Hubbard, H. H.; Shepherd, K. P.; Grosveld, F. W.

    1981-01-01

    Brief descriptions are given of the various types of large wind turbines and their sound characteristics. Candidate sources of broadband sound are identified and are rank ordered for a large upwind configuration wind turbine generator for which data are available. The rotor is noted to be the main source of broadband sound which arises from inflow turbulence and from the interactions of the turbulent boundary layer on the blade with its trailing edge. Sound is radiated about equally in all directions but the refraction effects of the wind produce an elongated contour pattern in the downwind direction.

  7. Effects of sound source directivity on auralizations

    NASA Astrophysics Data System (ADS)

    Sheets, Nathan W.; Wang, Lily M.

    2002-05-01

    Auralization, the process of rendering audible the sound field in a simulated space, is a useful tool in the design of acoustically sensitive spaces. The auralization depends on the calculation of an impulse response between a source and a receiver which have certain directional behavior. Many auralizations created to date have used omnidirectional sources; the effects of source directivity on auralizations are a relatively unexplored area. To examine if and how the directivity of a sound source affects the acoustical results obtained from a room, we used directivity data for three sources in a room acoustic modeling program called Odeon. The three sources are: violin, piano, and human voice. The results from using directional data are compared to those obtained using omnidirectional source behavior, both through objective measure calculations and subjective listening tests.

  8. Development of a directivity-controlled piezoelectric transducer for sound reproduction

    NASA Astrophysics Data System (ADS)

    Bédard, Magella; Berry, Alain

    2008-04-01

    Present sound reproduction systems do not attempt to simulate the spatial radiation of musical instruments, or sound sources in general, even though the spatial directivity has a strong impact on the psychoacoustic experience. A transducer consisting of 4 piezoelectric elemental sources made from curved PVDF films is used to generate a target directivity pattern in the horizontal plane, in the frequency range of 5-20 kHz. The vibratory and acoustical response of an elemental source is addressed, both theoretically and experimentally. Two approaches to synthesize the input signals to apply to each elemental source are developed in order to create a prescribed, frequency-dependent acoustic directivity. The circumferential Fourier decomposition of the target directivity provides a compromise between the magnitude and the phase reconstruction, whereas the minimization of a quadratic error criterion provides a best magnitude reconstruction. This transducer can improve sound reproduction by introducing the spatial radiation aspect of the original source at high frequency.

  9. Callback response of dugongs to conspecific chirp playbacks.

    PubMed

    Ichikawa, Kotaro; Akamatsu, Tomonari; Shinke, Tomio; Adulyanukosol, Kanjana; Arai, Nobuaki

    2011-06-01

    Dugongs (Dugong dugon) produce bird-like calls such as chirps and trills. The vocal responses of dugongs to playbacks of several acoustic stimuli were investigated. Animals were exposed to four different playback stimuli: a recorded chirp from a wild dugong, a synthesized down-sweep sound, a synthesized constant-frequency sound, and silence. Wild dugongs vocalized more frequently after playback of broadcast chirps than after constant-frequency sounds or silence. The down-sweep sound also elicited more vocal responses than did silence. No significant difference was found between the broadcast chirps and the down-sweep sound. The ratio of wild dugong chirps to all calls and the dominant frequencies of the wild dugong calls were significantly higher during playbacks of broadcast chirps, down-sweep sounds, and constant-frequency sounds than during those of silence. The source level and duration of dugong chirps increased significantly as signaling distance increased. No significant correlation was found between signaling distance and the source level of trills. These results show that dugongs vocalize in response to playbacks of frequency-modulated signals and suggest that the source level of dugong chirps may be manipulated to compensate for transmission loss between the source and receiver. This study provides the first behavioral observations revealing the function of dugong chirps. © 2011 Acoustical Society of America

  10. The inferior colliculus encodes the Franssen auditory spatial illusion

    PubMed Central

    Rajala, Abigail Z.; Yan, Yonghe; Dent, Micheal L.; Populin, Luis C.

    2014-01-01

    Illusions are effective tools for the study of the neural mechanisms underlying perception because neural responses can be correlated to the physical properties of stimuli and the subject’s perceptions. The Franssen illusion (FI) is an auditory spatial illusion evoked by presenting a transient, abrupt tone and a slowly rising, sustained tone of the same frequency simultaneously on opposite sides of the subject. Perception of the FI consists of hearing a single sound, the sustained tone, on the side that the transient was presented. Both subcortical and cortical mechanisms for the FI have been proposed, but, to date, there is no direct evidence for either. The data show that humans and rhesus monkeys perceive the FI similarly. Recordings were taken from single units of the inferior colliculus in monkeys while they indicated the perceived location of sound sources with their gaze. The results show that the transient component of the Franssen stimulus, with a shorter first spike latency and higher discharge rate than the sustained tone, encodes the perception of sound location. Furthermore, the persistent erroneous perception of the sustained stimulus location is due to continued excitation of the same neurons, first activated by the transient, by the sustained stimulus without location information. These results demonstrate for the first time, on a trial-by-trial basis, a correlation between perception of an auditory spatial illusion and a subcortical physiological substrate. PMID:23899307

  11. Preliminary Analysis of Acoustic Measurements from the NASA-Gulfstream Airframe Noise Flight Test

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Lockhard, David D.; Humphreys, Willliam M.; Choudhari, Meelan M.; Van De Ven, Thomas

    2008-01-01

    The NASA-Gulfstream joint Airframe Noise Flight Test program was conducted at the NASA Wallops Flight Facility during October, 2006. The primary objective of the AFN flight test was to acquire baseline airframe noise data on a regional jet class of transport in order to determine noise source strengths and distributions for model validation. To accomplish this task, two measuring systems were used: a ground-based microphone array and individual microphones. Acoustic data for a Gulfstream G550 aircraft were acquired over the course of ten days. Over twenty-four test conditions were flown. The test matrix was designed to provide an acoustic characterization of both the full aircraft and individual airframe components and included cruise to landing configurations. Noise sources were isolated by selectively deploying individual components (flaps, main landing gear, nose gear, spoilers, etc.) and altering the airspeed, glide path, and engine settings. The AFN flight test program confirmed that the airframe is a major contributor to the noise from regional jets during landing operations. Sound pressure levels from the individual microphones on the ground revealed the flap system to be the dominant airframe noise source for the G550 aircraft. The corresponding array beamform maps showed that most of the radiated sound from the flaps originates from the side edges. Using velocity to the sixth power and Strouhal scaling of the sound pressure spectra obtained at different speeds failed to collapse the data into a single spectrum. The best data collapse was obtained when the frequencies were left unscaled.
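
    The velocity-power and Strouhal scaling mentioned above can be illustrated with a short sketch of how flyover spectra measured at different airspeeds are commonly normalized: the level is reduced by 60·log10(U/Uref) and frequency is expressed as a Strouhal number f·L/U. The reference speed, length scale, and spectra below are illustrative assumptions, not values from the flight test.

```python
import numpy as np

def scale_spectrum(freq_hz, spl_db, speed_ms, ref_speed_ms=70.0, length_m=0.5):
    """Apply U^6 amplitude scaling and Strouhal frequency scaling to one spectrum.

    ref_speed_ms and length_m are illustrative placeholders, not flight-test values.
    """
    strouhal = freq_hz * length_m / speed_ms                         # St = f L / U
    spl_scaled = spl_db - 60.0 * np.log10(speed_ms / ref_speed_ms)   # removes a U^6 dependence
    return strouhal, spl_scaled

# Hypothetical spectra measured at two airspeeds; ideal U^6 sources would collapse.
f = np.linspace(100.0, 5000.0, 50)
st_60, spl_60 = scale_spectrum(f, 80.0 + 10.0 * np.log10(f / 1000.0), speed_ms=60.0)
st_80, spl_80 = scale_spectrum(f, 85.0 + 10.0 * np.log10(f / 1000.0), speed_ms=80.0)
```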

  12. Atmospheric Propagation

    NASA Technical Reports Server (NTRS)

    Embleton, Tony F. W.; Daigle, Gilles A.

    1991-01-01

    Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
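
    The two deterministic mechanisms described above can be combined in a simple received-level estimate: spherical spreading, which is frequency independent, plus molecular absorption, which depends strongly on frequency. The sketch below is a minimal illustration of that combination; the absorption coefficients are rough placeholders, not the standardized atmospheric-absorption computation.

```python
import numpy as np

def received_level(source_level_db, distance_m, alpha_db_per_km, ref_distance_m=1.0):
    """Free-field level after spherical spreading plus molecular absorption.

    alpha_db_per_km is a frequency-dependent absorption coefficient; the values
    used below are rough placeholders, not a standardized atmospheric model.
    """
    spreading = 20.0 * np.log10(distance_m / ref_distance_m)   # no frequency dependence
    absorption = alpha_db_per_km * distance_m / 1000.0         # strongly frequency dependent
    return source_level_db - spreading - absorption

# Example: 100 dB source level referenced to 1 m, received at 500 m
for freq_hz, alpha in [(500.0, 2.0), (4000.0, 25.0)]:          # placeholder coefficients
    print(freq_hz, round(received_level(100.0, 500.0, alpha), 1))
```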

  13. How effectively do horizontal and vertical response strategies of long-finned pilot whales reduce sound exposure from naval sonar?

    PubMed

    Wensveen, Paul J; von Benda-Beckmann, Alexander M; Ainslie, Michael A; Lam, Frans-Peter A; Kvadsheim, Petter H; Tyack, Peter L; Miller, Patrick J O

    2015-05-01

    The behaviour of a marine mammal near a noise source can modulate the sound exposure it receives. We demonstrate that two long-finned pilot whales both surfaced in synchrony with consecutive arrivals of multiple sonar pulses. We then assess the effect of surfacing and other behavioural response strategies on the received cumulative sound exposure levels and maximum sound pressure levels (SPLs) by modelling realistic spatiotemporal interactions of a pilot whale with an approaching source. Under the propagation conditions of our model, some response strategies observed in the wild were effective in reducing received levels (e.g. movement perpendicular to the source's line of approach), but others were not (e.g. switching from deep to shallow diving; synchronous surfacing after maximum SPLs). Our study exemplifies how simulations of source-whale interactions guided by detailed observational data can improve our understanding of the motivations behind behavioural responses observed in the wild (e.g., reducing sound exposure, prey movement). Copyright © 2015 Elsevier Ltd. All rights reserved.
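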

  14. Simulated seal scarer sounds scare porpoises, but not seals: species-specific responses to 12 kHz deterrence sounds

    PubMed Central

    Hermannsen, Line; Beedholm, Kristian

    2017-01-01

    Acoustic harassment devices (AHD) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating that of an AHD (Lofitech), but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted to porpoises and 13 exposures to seals. Porpoises were found to exhibit avoidance reactions out to ranges of 525 m from the sound source. Contrary to this, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for application of AHDs in multi-species habitats, as sound levels required to deter less sensitive species (seals) can lead to excessive and unwanted large deterrence ranges on more sensitive species (porpoises). PMID:28791155

  15. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels of different sizes and other underwater sound sources in both static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where the sound of flowing water is included in background measurements. The size of vessels measured ranged from a small fishing boat with a 60 HP outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, and when compared to the sound created by an operating HK turbine were many times greater. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed values.
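
    The spherical versus cylindrical comparison above amounts to extrapolating a level measured at one range to another range with either a 20·log10 or a 10·log10 distance law. The sketch below is a minimal illustration of that comparison; the level and distances are made-up numbers, not measurements from the study.

```python
import math

def extrapolate_level(level_db, r_measured_m, r_target_m, model="spherical"):
    """Extrapolate a received level to another range under a simple spreading model."""
    n = 20.0 if model == "spherical" else 10.0   # cylindrical loses only 3 dB per doubling
    return level_db - n * math.log10(r_target_m / r_measured_m)

# Illustrative numbers only: a level of 120 dB measured at 10 m, predicted at 100 m
print(extrapolate_level(120.0, 10.0, 100.0, "spherical"))    # 100.0 dB
print(extrapolate_level(120.0, 10.0, 100.0, "cylindrical"))  # 110.0 dB
```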

  16. Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section

    NASA Technical Reports Server (NTRS)

    Brooks, T. F.; Scheiman, J.; Silcox, R. J.

    1976-01-01

    Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.

  17. Varying sediment sources (Hudson Strait, Cumberland Sound, Baffin Bay) to the NW Labrador Sea slope between and during Heinrich events 0 to 4

    USGS Publications Warehouse

    Andrews, John T.; Barber, D.C.; Jennings, A.E.; Eberl, D.D.; Maclean, B.; Kirby, M.E.; Stoner, J.S.

    2012-01-01

    Core HU97048-007PC was recovered from the continental Labrador Sea slope at a water depth of 945 m, 250 km seaward from the mouth of Cumberland Sound, and 400 km north of Hudson Strait. Cumberland Sound is a structural trough partly floored by Cretaceous mudstones and Paleozoic carbonates. The record extends from ∼10 to 58 ka. On-board logging revealed a complex series of lithofacies, including buff-colored detrital carbonate-rich sediments [Heinrich (H)-events] frequently bracketed by black facies. We investigate the provenance of these facies using quantitative X-ray diffraction on drill-core samples from Paleozoic and Cretaceous bedrock from the SE Baffin Island Shelf, and on the < 2-mm sediment fraction in a transect of five cores from Cumberland Sound to the NW Labrador Sea. A sediment unmixing program was used to discriminate between sediment sources, which included dolomite-rich sediments from Baffin Bay, calcite-rich sediments from Hudson Strait and discrete sources from Cumberland Sound. Results indicated that the bulk of the sediment was derived from Cumberland Sound, but Baffin Bay contributed to sediments coeval with H-0 (Younger Dryas), whereas Hudson Strait was the source during H-events 1–4. Contributions from the Cretaceous outcrops within Cumberland Sound bracket H-events, thus both leading and lagging Hudson Strait-sourced H-events.

  18. Peripheral mechanisms for vocal production in birds - differences and similarities to human speech and singing.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-10-01

    Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. All rights reserved.

  19. The auditory P50 component to onset and offset of sound

    PubMed Central

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi

    2008-01-01

    Objective: The auditory Event-Related Potential (ERP) P50 components to sound onset and offset have been reported to be similar, but their magnetic homologue has been reported absent to sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of silent intervals of 0.5 s (gaps) appearing randomly in otherwise continuous white noise and compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates for the clicks and sound onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks, but was absent to noise offset. Latency of P50 was similar to noise onset (56 msec) and to clicks (53 msec). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher to noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: P50 to sound offset is absent compared to the distinct P50 to sound onset and to clicks, at different intracranial sources. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene. Sound offset does not involve a new sound, hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent to offset. PMID:18055255

  20. Blind separation of incoherent and spatially disjoint sound sources

    NASA Astrophysics Data System (ADS)

    Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter

    2016-11-01

    Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
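
    The decorrelation step described above, diagonalizing the cross-spectral matrix with PCA to obtain "virtual" sources, can be sketched in a few lines. The microphone count, snapshot count, and synthetic mixing below are illustrative assumptions; the sketch shows only the eigendecomposition, not the spatial-orthogonality criterion proposed in the paper.

```python
import numpy as np

def virtual_sources_from_csm(snapshots):
    """Decorrelate simultaneous measurements at one frequency bin via PCA.

    snapshots: complex array of shape (n_snapshots, n_microphones).
    Returns eigenvalues (virtual-source powers) and eigenvectors (spatial signatures).
    """
    csm = snapshots.conj().T @ snapshots / snapshots.shape[0]   # cross-spectral matrix
    eigvals, eigvecs = np.linalg.eigh(csm)                      # diagonalization
    order = np.argsort(eigvals)[::-1]                           # strongest component first
    return eigvals[order], eigvecs[:, order]

# Illustrative data: two incoherent sources observed by 8 microphones
rng = np.random.default_rng(0)
mixing = rng.standard_normal((2, 8)) + 1j * rng.standard_normal((2, 8))
sources = rng.standard_normal((200, 2)) + 1j * rng.standard_normal((200, 2))
powers, signatures = virtual_sources_from_csm(sources @ mixing)
```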

  1. Progress Toward Improving Jet Noise Predictions in Hot Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Kenzakowski, Donald C.

    2007-01-01

    An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.

  2. A New Mechanism of Sound Generation in Songbirds

    NASA Astrophysics Data System (ADS)

    Goller, Franz; Larsen, Ole N.

    1997-12-01

    Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.

  3. Modal sound transmission loss of a single leaf panel: Asymptotic solutions.

    PubMed

    Wang, Chong

    2015-12-01

    In a previously published paper [C. Wang, J. Acoust. Soc. Am. 137(6), 3514-3522 (2015)], the modal sound transmission coefficients of a single leaf panel were discussed with regard to the inter-modal coupling effects. By incorporating such effect into the equivalent modal radiation impedance, which is directly related to the modal sound transmission coefficient of each mode, the overall sound transmission loss for both normal and randomized sound incidences was computed through a simple modal superposition. Benefiting from the analytical expressions of the equivalent modal impedance and modal transmission coefficients, in this paper, behaviors of modal sound transmission coefficients in several typical frequency ranges are discussed in detail. Asymptotic solutions are also given for the panels with relatively low bending stiffnesses, for which the sound transmission loss has been assumed to follow the mass law of a limp panel. Results are also compared to numerical analysis and the renowned mass law theories.
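
    For context, one commonly quoted form of the normal-incidence mass law that such asymptotic solutions are compared against is given below (a generic textbook form, not the paper's result; the constant assumes air with a characteristic impedance of about 415 rayl, and random-incidence values are often taken roughly 5 dB lower):

        \mathrm{TL}_0(f) \;=\; 10\log_{10}\!\left[1+\left(\frac{\pi f m''}{\rho_0 c}\right)^{2}\right] \;\approx\; 20\log_{10}\!\left(f\,m''\right) - 42\ \mathrm{dB}, \qquad \pi f m'' \gg \rho_0 c,

    where m'' is the panel surface mass density. This is the limp-panel behaviour referred to above for panels with relatively low bending stiffness.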

  4. Acoustic processing method for MS/MS experiments

    NASA Technical Reports Server (NTRS)

    Whymark, R. R.

    1973-01-01

    Acoustical methods in which intense sound beams can be used to control the position of objects are considered. The position control arises from the radiation force experienced when a body is placed in a sound field. A description of the special properties of intense sound fields useful for position control is followed by a discussion of the more obvious methods of position, namely the use of multiple sound beams. A new type of acoustic position control device is reported that has advantages of simplicity and reliability and utilizes only a single sound beam. Finally a description is given of an experimental single beam levitator, and the results obtained in a number of key levitation experiments.

  5. On the role of glottis-interior sources in the production of voiced sound.

    PubMed

    Howe, M S; McGowan, R S

    2012-02-01

    The voice source is dominated by aeroacoustic sources downstream of the glottis. In this paper an investigation is made of the contribution to voiced speech of secondary sources within the glottis. The acoustic waveform is ultimately determined by the volume velocity of air at the glottis, which is controlled by vocal fold vibration, pressure forcing from the lungs, and unsteady backreactions from the sound and from the supraglottal air jet. The theory of aerodynamic sound is applied to study the influence on the fine details of the acoustic waveform of "potential flow" added-mass-type glottal sources, glottis friction, and vorticity either in the glottis-wall boundary layer or in the portion of the free jet shear layer within the glottis. These sources govern predominantly the high frequency content of the sound when the glottis is near closure. A detailed analysis performed for a canonical, cylindrical glottis of rectangular cross section indicates that glottis-interior boundary/shear layer vortex sources and the surface frictional source are of comparable importance; the influence of the potential flow source is about an order of magnitude smaller. © 2012 Acoustical Society of America

  6. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1 and to compare the results with simulated leak sounds. A series of electronically generated structural ultrasonic noise sources were created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall at the source were unknown, but were estimated from the closest sensor measurement. The E-FEM method represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  7. [Perception by teenagers and adults of amplitude-varying sound sequences used in models of sound source movement].

    PubMed

    Andreeva, I G; Vartanian, I A

    2012-01-01

    The ability to evaluate the direction of amplitude changes in sound stimuli was studied in adults and in 11-12- and 15-16-year-old teenagers. Sequences of fragments of a 1 kHz tone whose amplitude changed over time were used as models of approaching and withdrawing sound sources. The 11-12-year-old teenagers made significantly more errors when judging the direction of amplitude change than the other two groups, including in repeated experiments. The structure of the errors - the ratio of errors for stimuli increasing versus decreasing in amplitude - also differed between teenagers and adults. The possible effect of nonspecific activation of the cerebral cortex in teenagers on decision-making about complex sound stimuli, including the estimation of approach and withdrawal of a sound source, is discussed.

  8. Interior and exterior sound field control using general two-dimensional first-order sources.

    PubMed

    Poletti, M A; Abhayapala, T D

    2011-01-01

    Reproduction of a given sound field interior to a circular loudspeaker array without producing an undesirable exterior sound field is an unsolved problem over a broadband of frequencies. At low frequencies, by implementing the Kirchhoff-Helmholtz integral using a circular discrete array of line-source loudspeakers, a sound field can be recreated within the array and produce no exterior sound field, provided that the loudspeakers have azimuthal polar responses with variable first-order responses which are a combination of a two-dimensional (2D) monopole and a radially oriented 2D dipole. This paper examines the performance of circular discrete arrays of line-source loudspeakers which also include a tangential dipole, providing general variable-directivity responses in azimuth. It is shown that at low frequencies, the tangential dipoles are not required, but that near and above the Nyquist frequency, the tangential dipoles can both improve the interior accuracy and reduce the exterior sound field. The additional dipoles extend the useful range of the array by around an octave.
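
    As a point of reference (a generic textbook form, not the exact driving functions of the paper), the azimuthal response of a 2D first-order source combining a monopole and a radially oriented dipole can be written as

        D(\phi) \;=\; \alpha + (1-\alpha)\cos(\phi - \phi_0), \qquad 0 \le \alpha \le 1,

    with \alpha = 1 giving the omnidirectional 2D monopole, \alpha = 0 the pure dipole aimed at \phi_0, and intermediate values cardioid-like patterns; adding a further term proportional to \sin(\phi - \phi_0) represents the tangential dipole discussed above.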

  9. The silent base flow and the sound sources in a laminar jet.

    PubMed

    Sinayoko, Samuel; Agarwal, Anurag

    2012-03-01

    An algorithm to compute the silent base flow sources of sound in a jet is introduced. The algorithm is based on spatiotemporal filtering of the flow field and is applicable to multifrequency sources. It is applied to an axisymmetric laminar jet and the resulting sources are validated successfully. The sources are compared to those obtained from two classical acoustic analogies, based on quiescent and time-averaged base flows. The comparison demonstrates how the silent base flow sources shed light on the sound generation process. It is shown that the dominant source mechanism in the axisymmetric laminar jet is "shear-noise," which is a linear mechanism. The algorithm presented here could be applied to fully turbulent flows to understand the aerodynamic noise-generation mechanism. © 2012 Acoustical Society of America

  10. Sound Radiated by a Wave-Like Structure in a Compressible Jet

    NASA Technical Reports Server (NTRS)

    Golubev, V. V.; Prieto, A. F.; Mankbadi, R. R.; Dahl, M. D.; Hixon, R.

    2003-01-01

    This paper extends the analysis of acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. Compared to previous work, a modified approach to the sound source modeling is examined that employs a set of solutions to linearized Euler equations. The sound radiation is then calculated using an integral surface method.

  11. Photoacoustic Effect Generated from an Expanding Spherical Source

    NASA Astrophysics Data System (ADS)

    Bai, Wenyu; Diebold, Gerald J.

    2018-02-01

    Although the photoacoustic effect is typically generated by amplitude-modulated continuous or pulsed radiation, the form of the wave equation for pressure that governs the generation of sound indicates that optical sources moving in an absorbing fluid can produce sound as well. Here, the characteristics of the acoustic wave produced by a radially symmetric Gaussian source expanding outwardly from the origin are found. The unique feature of the photoacoustic effect from the spherical source is a trailing compressive wave that arises from reflection of an inwardly propagating component of the wave. Similar to the one-dimensional geometry, an unbounded amplification effect is found for the Gaussian source expanding at the sound speed.

  12. Modal sound transmission loss of a single leaf panel: Effects of inter-modal coupling.

    PubMed

    Wang, Chong

    2015-06-01

    Sound transmission through a single leaf panel has mostly been discussed and explained by using the approaching wave concept, from which the well-known mass law can be derived. In this paper, the modal behavior in sound transmission coefficients is explored, and it is shown that the mutual modal radiation impedances in modal sound transmission coefficients may not be ignored even for a panel immersed in a light fluid. By introducing the equivalent modal impedance which incorporates the inter-modal coupling effect, an analytical expression for the modal sound transmission coefficient is derived, and the overall sound transmission coefficient is simply a modal superposition of modal sound transmission coefficients. A good correlation is obtained between analytical calculation and boundary element method. In addition, it is found that inter-modal coupling has noticeable effects in modal sound transmission coefficients in the subsonic region but may be ignored as modes become supersonic. It is also shown that the well-known mass law performance is attributed to all the supersonic modes.

  13. Measurement and Numerical Calculation of Force on a Particle in a Strong Acoustic Field Required for Levitation

    NASA Astrophysics Data System (ADS)

    Kozuka, Teruyuki; Yasui, Kyuichi; Tuziuti, Toru; Towata, Atsuya; Lee, Judy; Iida, Yasuo

    2009-07-01

    Using a standing-wave field generated between a sound source and a reflector, it is possible to trap small objects at nodes of the sound pressure distribution in air. In this study, a sound field generated under a flat or concave reflector was studied by both experimental measurement and numerical calculation. The calculated result agrees well with the experimental data. The maximum force generated between a sound source of 25.0 mm diameter and a concave reflector is 0.8 mN in the experiment. A steel ball of 2.0 mm in diameter was levitated in the sound field in air.

  14. Using Incremental Rehearsal to Teach Letter Sounds to English Language Learners

    ERIC Educational Resources Information Center

    Rahn, Naomi L.; Wilson, Jennifer; Egan, Andrea; Brandes, Dana; Kunkel, Amy; Peterson, Meredith; McComas, Jennifer

    2015-01-01

    This study examined the effects of incremental rehearsal (IR) on letter sound expression for one kindergarten and one first grade English learner who were below district benchmark for letter sound fluency. A single-subject multiple-baseline design across sets of unknown letter sounds was used to evaluate the effect of IR on letter-sound expression…

  15. Investigation of spherical loudspeaker arrays for local active control of sound.

    PubMed

    Peleg, Tomer; Rafaely, Boaz

    2011-10-01

    Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America

  16. Hydrographic surveys of rivers and lakes using a multibeam echosounder mapping system

    USGS Publications Warehouse

    Huizinga, Richard J.; Heimann, David C.

    2018-06-12

    A multibeam echosounder is a type of sound navigation and ranging device that uses sound waves to “see” through even murky waters. Unlike a single beam echosounder (also known as a depth sounder or fathometer) that releases a single sound pulse in a single, narrow beam and “listens” for the return echo, a multibeam system emits a multidirectional radial beam to obtain information within a fan-shaped swath. The timing and direction of the returning sound waves provide detailed information on the depth of water and the shape of the river channel, lake bottom, or any underwater features of interest. This information has been used by the U.S. Geological Survey to efficiently generate high-resolution maps of river and lake bottoms.
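
    The underlying depth computation for each beam is a two-way travel-time calculation, sketched below. The sound speed is a nominal freshwater value assumed for illustration; actual surveys use measured sound-speed profiles and the geometry of each beam.

```python
def depth_from_echo(two_way_time_s, sound_speed_ms=1480.0, transducer_draft_m=0.0):
    """Depth below the water surface from the two-way travel time of one beam.

    sound_speed_ms is a nominal freshwater value, used here only for illustration.
    """
    return transducer_draft_m + sound_speed_ms * two_way_time_s / 2.0

print(depth_from_echo(0.02))   # a 20 ms echo corresponds to roughly 14.8 m of water
```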

  17. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity function at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to the CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on user's immersion in virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustics effects and spatial audio in the virtual environment.
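
    The wave-based techniques referred to above solve the second-order wave equation numerically. The minimal 2D finite-difference time-domain sketch below illustrates why such solvers are costly; it is not the equivalent-source or GPU solver of the dissertation, and the grid size, boundary treatment, and source are illustrative assumptions.

```python
import numpy as np

# Minimal 2D FDTD solver for the scalar wave equation p_tt = c^2 (p_xx + p_yy).
c, dx = 343.0, 0.05                      # sound speed (m/s) and grid spacing (m)
dt = dx / (c * np.sqrt(2.0))             # CFL-stable time step for the 2D scheme
n, steps = 200, 400
p_prev = np.zeros((n, n))
p_curr = np.zeros((n, n))
p_curr[n // 2, n // 2] = 1.0             # impulsive point source at the grid centre

coeff = (c * dt / dx) ** 2
for _ in range(steps):
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr)
    p_next = 2.0 * p_curr - p_prev + coeff * lap
    p_next[0, :] = p_next[-1, :] = p_next[:, 0] = p_next[:, -1] = 0.0  # pressure-release edges
    p_prev, p_curr = p_curr, p_next
```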

  18. Multimodal modeling and validation of simplified vocal tract acoustics for sibilant /s/

    NASA Astrophysics Data System (ADS)

    Yoshinaga, T.; Van Hirtum, A.; Wada, S.

    2017-12-01

    To investigate the acoustic characteristics of sibilant /s/, multimodal theory is applied to a simplified vocal tract geometry derived from a CT scan of a single speaker for whom the sound spectrum was gathered. The vocal tract was represented by a concatenation of waveguides with rectangular cross-sections and constant width, and a sound source was placed either at the inlet of the vocal tract or downstream from the constriction representing the sibilant groove. The modeled pressure amplitude was validated experimentally using an acoustic driver or airflow supply at the vocal tract inlet. Results showed that the spectrum predicted with the source at the inlet and including higher-order modes matched the spectrum measured with the acoustic driver at the inlet. Spectra modeled with the source downstream from the constriction captured the first characteristic peak observed for the speaker at 4 kHz. By positioning the source near the upper teeth wall, the higher frequency peak observed for the speaker at 8 kHz was predicted with the inclusion of higher-order modes. At the frequencies of the characteristic peaks, nodes and antinodes of the pressure amplitude were observed in the simplified vocal tract when the source was placed downstream from the constriction. These results indicate that the multimodal approach enables to capture the amplitude and frequency of the peaks in the spectrum as well as the nodes and antinodes of the pressure distribution due to /s/ inside the vocal tract.
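
    As a simpler baseline than the multimodal method, a one-dimensional (plane-wave) view of the same geometry chains transfer matrices of uniform tube sections. The sketch below illustrates only that plane-wave baseline, which omits the higher-order cross-modes central to the paper; the section lengths and areas are made-up values, not the CT-derived geometry.

```python
import numpy as np

RHO, C = 1.2, 350.0   # nominal air density (kg/m^3) and warm-air sound speed (m/s)

def tube_matrix(length_m, area_m2, freq_hz):
    """Plane-wave transfer matrix of one uniform, lossless tube section."""
    k = 2.0 * np.pi * freq_hz / C
    zc = RHO * C / area_m2                      # characteristic impedance of the section
    return np.array([[np.cos(k * length_m), 1j * zc * np.sin(k * length_m)],
                     [1j * np.sin(k * length_m) / zc, np.cos(k * length_m)]])

def chain(sections, freq_hz):
    """Multiply section matrices from the glottis end towards the lips."""
    total = np.eye(2, dtype=complex)
    for length_m, area_m2 in sections:
        total = total @ tube_matrix(length_m, area_m2, freq_hz)
    return total

# Made-up two-section tract: a back cavity followed by a narrow sibilant constriction
tract = [(0.12, 3.0e-4), (0.01, 1.0e-5)]
print(chain(tract, 4000.0))
```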

  19. Two dimensional sound field reproduction using higher order sources to exploit room reflections.

    PubMed

    Betlehem, Terence; Poletti, Mark A

    2014-04-01

    In this paper, sound field reproduction is performed in a reverberant room using higher order sources (HOSs) and a calibrating microphone array. Previously a sound field was reproduced with fixed directivity sources and the reverberation compensated for using digital filters. However by virtue of their directive properties, HOSs may be driven to not only avoid the creation of excess reverberation but also to use room reflection to contribute constructively to the desired sound field. The manner by which the loudspeakers steer the sound around the room is determined by measuring the acoustic transfer functions. The requirements on the number and order N of HOSs for accurate reproduction in a reverberant room are derived, showing a 2N + 1-fold decrease in the number of loudspeakers in comparison to using monopole sources. HOSs are shown applicable to rooms with a rich variety of wall reflections while in an anechoic room their advantages may be lost. Performance is investigated in a room using extensions of both the diffuse field model and a more rigorous image-source simulation method, which account for the properties of the HOSs. The robustness of the proposed method is validated by introducing measurement errors.

  20. Radiation characteristics of multiple and single sound hole vihuelas and a classical guitar.

    PubMed

    Bader, Rolf

    2012-01-01

    Two recently built vihuelas, quasi-replicas of the Spanish Renaissance guitar, one with a small body and one sound hole and one with a large body with five sound holes, together with a classical guitar are investigated. Frequency dependent radiation strengths are measured using a 128 microphone array, back-propagating the frequency dependent sound field upon the body surface. All three instruments have a strong sound hole radiation within the low frequency range. Here the five-sound-hole vihuela, owing to the enlarged radiation area of its sound holes, has a much wider frequency region of strong sound hole radiation, up to about 500 Hz, whereas the single-hole instruments only show strong sound hole radiation up to about 300 Hz. The strong broadband radiation of the five sound hole vihuela up to about 500 Hz is also caused by the sound hole phases, showing very consistent in-phase relations up to this frequency range. Also, the sound holes placed nearer to the center of the sound box radiate much more strongly than those near the ribs, pointing to a strong dependence of radiation strength on sound hole position. The Helmholtz resonance frequency of the five sound hole vihuela is influenced by this difference in radiation strength but not by the rosettas, which only have a slight effect on the Helmholtz frequency. © 2012 Acoustical Society of America.
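
    For orientation, the Helmholtz resonance referred to above is governed mainly by the enclosed air volume and the open sound-hole area; a standard estimate (a generic resonator formula, neglecting wall compliance and using an empirical end correction) is

        f_H \;\approx\; \frac{c}{2\pi}\sqrt{\frac{S}{V\,L_{\mathrm{eff}}}},

    where S is the total open sound-hole area, V the cavity volume, and L_eff the effective neck length (plate thickness plus end corrections). Distributing the opening over several holes of differing radiation strength therefore shifts f_H, consistent with the observation reported above.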

  1. Restoration of spatial hearing in adult cochlear implant users with single-sided deafness.

    PubMed

    Litovsky, Ruth Y; Moua, Keng; Godar, Shelly; Kan, Alan; Misurelli, Sara M; Lee, Daniel J

    2018-04-14

    In recent years, cochlear implants (CIs) have been provided in growing numbers to people with not only bilateral deafness but also to people with unilateral hearing loss, at times in order to alleviate tinnitus. This study presents audiological data from 15 adult participants (ages 48 ± 12 years) with single sided deafness. Results are presented from 9/15 adults, who received a CI (SSD-CI) in the deaf ear and were tested in Acoustic or Acoustic + CI hearing modes, and 6/15 adults who are planning to receive a CI, and were tested in the unilateral condition only. Testing included (1) audiometric measures of threshold, (2) speech understanding for CNC words and AzBIO sentences, (3) tinnitus handicap inventory, (4) sound localization with stationary sound sources, and (5) perceived auditory motion. Results showed that when listening to sentences in quiet, performance was excellent in the Acoustic and Acoustic + CI conditions. In noise, performance was similar between Acoustic and Acoustic + CI conditions in 4/6 participants tested, and slightly worse in the Acoustic + CI in 2/6 participants. In some cases, the CI provided reduced tinnitus handicap scores. When testing sound localization ability, the Acoustic + CI condition resulted in improved sound localization RMS error of 29.2° (SD: ±6.7°) compared to 56.6° (SD: ±16.5°) in the Acoustic-only condition. Preliminary results suggest that the perception of motion direction, whereby subjects are required to process and compare directional cues across multiple locations, is impaired when compared with that of normal hearing subjects. Copyright © 2018 Elsevier B.V. All rights reserved.
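
    The RMS localization error reported above is straightforward to reproduce from raw responses; a minimal sketch follows, with angles in degrees and illustrative trial data rather than data from the study.

```python
import numpy as np

def rms_localization_error(target_deg, response_deg):
    """Root-mean-square error between presented and reported source azimuths."""
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return float(np.sqrt(np.mean((response - target) ** 2)))

# Illustrative trial data only (not data from the study)
targets   = [-60, -30, 0, 30, 60]
responses = [-35, -20, 5, 45, 40]
print(rms_localization_error(targets, responses))   # about 16.6 degrees
```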

  2. Calculating far-field radiated sound pressure levels from NASTRAN output

    NASA Technical Reports Server (NTRS)

    Lipman, R. R.

    1986-01-01

    FAFRAP is a computer program which calculates far field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.
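
    For comparison, the kind of explicit monopole calculation mentioned above can be sketched from the free-field relation that a point monopole of volume-velocity amplitude Q has pressure amplitude rho_0*omega*Q/(4*pi*r) at radius r. The fluid properties and reference pressure below are air values chosen purely for illustration and are assumptions, not the quantities used by FAFRAP.

```python
import math

def monopole_spl(q_m3s, freq_hz, r_m, rho=1.21, p_ref=20e-6):
    """Far-field SPL of a point monopole with volume-velocity amplitude q_m3s (m^3/s)."""
    omega = 2.0 * math.pi * freq_hz
    p_amp = rho * omega * q_m3s / (4.0 * math.pi * r_m)          # pressure amplitude (Pa)
    return 20.0 * math.log10((p_amp / math.sqrt(2.0)) / p_ref)   # rms level re p_ref

print(round(monopole_spl(q_m3s=1e-4, freq_hz=250.0, r_m=10.0), 1))
```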

  3. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
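
    One widely used first-principles approximation of how ITD varies with azimuth and head size is the spherical-head (Woodworth-type) formula sketched below; the head radius and sound speed are nominal assumptions, and this is offered as background rather than as the exact model fitted in the paper.

```python
import numpy as np

def itd_spherical_head(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth-type ITD for a rigid spherical head (far-field, geometric approximation)."""
    theta = np.radians(azimuth_deg)                          # 0 degrees = straight ahead
    return (head_radius_m / c) * (theta + np.sin(theta))     # seconds

# ITD grows sub-linearly towards the side, so a uniform just-noticeable difference
# in ITD maps onto larger angular steps (poorer acuity) at lateral positions.
for az in (0, 30, 60, 90):
    print(az, round(1e6 * itd_spherical_head(az), 1), "microseconds")
```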

  4. Graphene-on-paper sound source devices.

    PubMed

    Tian, He; Ren, Tian-Ling; Xie, Dan; Wang, Yu-Feng; Zhou, Chang-Jian; Feng, Ting-Ting; Fu, Di; Yang, Yi; Peng, Ping-Gang; Wang, Li-Gang; Liu, Li-Tian

    2011-06-28

    We demonstrate an interesting phenomenon: graphene can emit sound, which expands the application of graphene into the acoustic field. Graphene-on-paper sound source devices are made by patterning graphene on paper substrates. Three graphene sheet samples with thicknesses of 100, 60, and 20 nm were fabricated. Sound emission from graphene is measured as a function of power, distance, angle, and frequency in the far-field. A theoretical model of the air/graphene/paper/PCB multilayer structure is established to analyze the sound directivity, frequency response, and efficiency. Measured sound pressure level (SPL) and efficiency are in good agreement with theoretical results. It is found that graphene has a notably flat frequency response across the wide ultrasound range of 20-50 kHz. In addition, the thinner graphene sheets produce higher SPL due to their lower heat capacity per unit area (HCPUA). The infrared thermal images reveal that a thermoacoustic effect is the working principle. We find that the sound performance mainly depends on the HCPUA of the conductor and the thermal properties of the substrate. The paper-based graphene sound source devices are highly reliable and flexible, involve no mechanical vibration, and offer a simple structure and high performance. They could find wide application in multimedia, consumer electronics, biological, medical, and many other areas.

  5. Representation of Sound Objects within Early-Stage Auditory Areas: A Repetition Effect Study Using 7T fMRI

    PubMed Central

    Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie

    2015-01-01

    Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i. e. a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430

  6. Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington

    USGS Publications Warehouse

    Uhrich, M.A.; McGrath, T.S.

    1997-01-01

    Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers, from sources other than the Puget Sound Environmental Atlas, include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.

  7. Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.

    PubMed

    Firtha, Gergely; Fiala, Péter

    2017-08-01

    The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions 2.5D Wave Field Synthesis driving functions are derived for arbitrary shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.
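
    The 2.5D driving functions derived in the article are not reproduced here; as a loose sketch of the underlying idea (each secondary source receives a delayed, amplitude-weighted copy of the virtual source signal), the following computes per-loudspeaker delays and 1/sqrt(r) gains for a static virtual point source behind a linear array. The geometry, spacing, and the omission of the WFS pre-filter and referencing correction are all simplifying assumptions for illustration.

```python
import numpy as np

def simple_driving_params(speaker_x, virtual_src, c=343.0):
    """Per-loudspeaker delay and gain for a static virtual point source.

    This is only the delay-and-attenuate core of Wave Field Synthesis;
    it omits the pre-filter and the referencing amplitude correction of
    the true 2.5D driving functions.
    """
    speakers = np.column_stack([speaker_x, np.zeros_like(speaker_x)])
    r = np.linalg.norm(speakers - np.asarray(virtual_src), axis=1)
    delays = r / c                 # propagation delay in seconds
    gains = 1.0 / np.sqrt(r)       # rough 2.5D amplitude decay
    return delays, gains

# 16 loudspeakers spaced 0.2 m apart; virtual source 1.5 m behind the array.
x = np.arange(16) * 0.2
delays, gains = simple_driving_params(x, virtual_src=(1.5, -1.5))
print(delays[:4], gains[:4])
```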

  8. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

    Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle. Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29° was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit. There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615
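
    The RMS error used above to quantify localization accuracy is simply the root-mean-square difference between response and target azimuths; a minimal sketch with hypothetical angles (not data from the study):

```python
import numpy as np

def rms_localization_error(target_deg, response_deg):
    """Root-mean-square error between perceived and actual source azimuths."""
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return np.sqrt(np.mean((response - target) ** 2))

# Hypothetical trial data in degrees (not taken from the study).
targets   = [-70, -50, -30, -10, 10, 30, 50, 70]
responses = [-60, -55, -20, -15, 25, 20, 60, 75]
print(f"RMS error = {rms_localization_error(targets, responses):.1f} deg")
```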

  9. Hardwall acoustical characteristics and measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel

    NASA Technical Reports Server (NTRS)

    Rentz, P. E.

    1976-01-01

    Experimental evaluations of the acoustical characteristics and source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated or hardwall configuration were performed. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of source direct field acoustic radiation were found to be limited by the test section reverberant characteristics to 3.0 feet for omni-directional and highly directional sources. Wind-on noise measurements in the test section, settling chamber and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.

  10. Estimation of multiple sound sources with data and model uncertainties using the EM and evidential EM algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme

    2016-01-01

    This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm, known as the Evidential EM (E2M) algorithm. Finally, both simulations and real experiments illustrate the advantage of using the EM algorithm in the case without uncertainty and the E2M algorithm in the case of uncertain measurements.
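
    The acoustic EM/E2M formulation of the paper is not reproduced here; purely to illustrate the latent-mixture idea behind EM (responsibilities estimated in an E-step, parameters re-estimated in an M-step), the sketch below runs textbook EM on a one-dimensional two-component Gaussian mixture. All data and initial values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic observations drawn from two latent "sources".
x = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(1.5, 0.8, 300)])

# Initial guesses for the means, variances, and mixing weights.
mu, var, w = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: posterior responsibility of each component for each sample.
    resp = w * gauss(x[:, None], mu, var)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibility-weighted samples.
    n_k = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / n_k
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    w = n_k / len(x)

print("means:", mu, "weights:", w)
```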

  11. Near-Field Sound Localization Based on the Small Profile Monaural Structure

    PubMed Central

    Kim, Youngwoong; Kim, Keonwook

    2015-01-01

    The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by the dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths in a vertical fashion and rectangular wings positioned between the pipes in radial directions. The sound from an individual direction travels through the nearest open pipe, which generates the particular fundamental frequency according to the acoustic resonance. The Cepstral parameter is modified to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA) are derived by the designed model. From an azimuthal distance of 3–15 cm from the outer body of the pipes, the extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. The closer positions to the system demonstrate higher accuracy, and the overall hit rate performance is 78% up to 15 cm away from the structure body. PMID:26580618
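
    The paper's modified cepstral parameter is not detailed in the abstract; a minimal sketch of the standard real-cepstrum approach to estimating the fundamental frequency of a received signal is shown below, using a synthetic harmonic test tone (all values are illustrative assumptions).

```python
import numpy as np

def cepstral_f0(signal, fs, fmin=200.0, fmax=4000.0):
    """Estimate the fundamental frequency via the real cepstrum.

    The cepstrum peaks at a quefrency equal to the fundamental period,
    so the search is restricted to quefrencies between 1/fmax and 1/fmin.
    """
    spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
    cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    q = np.arange(len(cepstrum)) / fs          # quefrency axis in seconds
    mask = (q > 1.0 / fmax) & (q < 1.0 / fmin)
    peak_q = q[mask][np.argmax(cepstrum[mask])]
    return 1.0 / peak_q

fs = 48000
t = np.arange(0, 0.1, 1 / fs)
# Synthetic harmonic signal with an 850 Hz fundamental (illustrative only).
test = sum(np.sin(2 * np.pi * 850 * k * t) / k for k in range(1, 5))
print(f"estimated f0 = {cepstral_f0(test, fs):.0f} Hz")
```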

  12. Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing

    PubMed Central

    Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.

    2013-01-01

    Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How or whether activity in frontal areas can influence activity and sensory processing in A1 and the detailed changes occurring in A1 on the level of single neurons and in neuronal populations remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723

  13. CNTNAP2 Is Significantly Associated With Speech Sound Disorder in the Chinese Han Population.

    PubMed

    Zhao, Yun-Jing; Wang, Yue-Ping; Yang, Wen-Zhu; Sun, Hong-Wei; Ma, Hong-Wei; Zhao, Ya-Ru

    2015-11-01

    Speech sound disorder is the most common communication disorder. Some investigations support the possibility that the CNTNAP2 gene might be involved in the pathogenesis of speech-related diseases. To investigate single-nucleotide polymorphisms in the CNTNAP2 gene, 300 unrelated speech sound disorder patients and 200 normal controls were included in the study. Five single-nucleotide polymorphisms were amplified and directly sequenced. Significant differences were found in the genotype (P = .0003) and allele (P = .0056) frequencies of rs2538976 between patients and controls. The excess frequency of the A allele in the patient group remained significant after Bonferroni correction (P = .0280). A significant haplotype association with rs2710102T/+rs17236239A/+rs2538976A/+rs2710117A (P = 4.10e-006) was identified. A neighboring single-nucleotide polymorphism, rs10608123, was found in complete linkage disequilibrium with rs2538976, and the genotypes exactly corresponded to each other. The authors propose that these CNTNAP2 variants increase the susceptibility to speech sound disorder. The single-nucleotide polymorphisms rs10608123 and rs2538976 may merge into one single-nucleotide polymorphism. © The Author(s) 2015.
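
    The Bonferroni-corrected value quoted above can be reproduced directly: assuming the correction family is the five genotyped SNPs, the corrected p-value is the raw allele p-value multiplied by the number of tests.

```python
# Bonferroni correction: multiply the raw p-value by the number of tests
# (capped at 1). Assuming the five genotyped SNPs form the test family.
n_tests = 5
p_allele = 0.0056
p_corrected = min(p_allele * n_tests, 1.0)
print(p_corrected)   # 0.028, i.e. the P = .0280 reported for the A allele
```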

  14. Auditory Localization: An Annotated Bibliography

    DTIC Science & Technology

    1983-11-01

    transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources... important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical

  15. Detection of Sound Image Movement During Horizontal Head Rotation

    PubMed Central

    Ohba, Kagesho; Iwaya, Yukio; Suzuki, Yôiti

    2016-01-01

    Movement detection for a virtual sound source was measured during the listener’s horizontal head rotation. Listeners were instructed to do head rotation at a given speed. A trial consisted of two intervals. During an interval, a virtual sound source was presented 60° to the right or left of the listener, who was instructed to rotate the head to face the sound image position. Then in one of a pair of intervals, the sound position was moved slightly in the middle of the rotation. Listeners were asked to judge the interval in a trial during which the sound stimuli moved. Results suggest that detection thresholds are higher when listeners do head rotation. Moreover, this effect was found to be independent of the rotation velocity. PMID:27698993

  16. Rotorcraft Noise Model

    NASA Technical Reports Server (NTRS)

    Lucas, Michael J.; Marcolini, Michael A.

    1997-01-01

    The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program being developed for NASA-Langley Research Center which calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set of sound hemispheres provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format, and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.

  17. Theoretical and experimental study on active sound transmission control based on single structural mode actuation using point force actuators.

    PubMed

    Sanada, Akira; Tanaka, Nobuo

    2012-08-01

    This study deals with the feedforward active control of sound transmission through a simply supported rectangular panel using vibration actuators. The control effect largely depends on the excitation method, including the number and locations of actuators. In order to obtain a large control effect at low frequencies over a wide frequency, an active transmission control method based on single structural mode actuation is proposed. Then, with the goal of examining the feasibility of the proposed method, the (1, 3) mode is selected as the target mode and a modal actuation method in combination with six point force actuators is considered. Assuming that a single input single output feedforward control is used, sound transmission in the case minimizing the transmitted sound power is calculated for some actuation methods. Simulation results showed that the (1, 3) modal actuation is globally effective at reducing the sound transmission by more than 10 dB in the low-frequency range for both normal and oblique incidences. Finally, experimental results also showed that a large reduction could be achieved in the low-frequency range, which proves the validity and feasibility of the proposed method.

  18. Differential presence of anthropogenic compounds dissolved in the marine waters of Puget Sound, WA and Barkley Sound, BC.

    PubMed

    Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles

    2011-11-01

    Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island Canada and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters: vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.

  19. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for unperceivable sound in loud noise environments. Two speakers simultaneously played a noise of a generator and a voice decreased by 20 dB (= 1/100 of power) from the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed and the sound of the voice was extracted and played back as an audible sound by array signal processing.
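
    The abstract does not detail the array processing used; a common baseline for estimating the direction of a weak source with a uniform linear microphone array is delay-and-sum beamforming, sketched below under far-field assumptions. The 15-microphone, 1.05 m array geometry matches the description above; the signal, sample rate, and test angle are illustrative assumptions.

```python
import numpy as np

def delay_and_sum_direction(signals, fs, mic_spacing, c=343.0):
    """Scan candidate arrival angles with a far-field delay-and-sum beamformer.

    signals : array of shape (n_mics, n_samples) from a uniform linear array.
    Returns the angle (degrees from broadside) giving the highest output power.
    """
    n_mics, n_samples = signals.shape
    freqs = np.fft.rfftfreq(n_samples, 1 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    mic_pos = np.arange(n_mics) * mic_spacing
    best_angle, best_power = 0.0, -np.inf
    for angle in np.arange(-90.0, 90.5, 0.5):
        delays = mic_pos * np.sin(np.radians(angle)) / c
        steering = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        power = np.sum(np.abs(np.sum(spectra * steering, axis=0)) ** 2)
        if power > best_power:
            best_angle, best_power = angle, power
    return best_angle

# 15 microphones over a 1.05 m aperture (7.5 cm spacing), as described above.
fs, n_mics, spacing = 16000, 15, 0.075
t = np.arange(0, 0.05, 1 / fs)
true_delays = np.arange(n_mics) * spacing * np.sin(np.radians(20.0)) / 343.0
sig = np.array([np.sin(2 * np.pi * 500 * (t - d)) for d in true_delays])
print(delay_and_sum_direction(sig, fs, spacing))   # prints an angle near 20.0
```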

  20. Mapping the sound field of an erupting submarine volcano using an acoustic glider.

    PubMed

    Matsumoto, Haru; Haxel, Joseph H; Dziak, Robert P; Bohnenstiehl, Delwayne R; Embley, Robert W

    2011-03-01

    An underwater glider with an acoustic data logger flew toward a recently discovered erupting submarine volcano in the northern Lau basin. With the volcano providing a wide-band sound source, recordings from the two-day survey produced a two-dimensional sound level map spanning 1 km (depth) × 40 km (distance). The observed sound field shows depth- and range-dependence, with the first-order spatial pattern being consistent with the predictions of a range-dependent propagation model. The results allow constraining the acoustic source level of the volcanic activity and suggest that the glider provides an effective platform for monitoring natural and anthropogenic ocean sounds. © 2011 Acoustical Society of America

  1. Study of environmental sound source identification based on hidden Markov model for robust speech recognition

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2003-10-01

    Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environments and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three states of HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
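
    As a minimal sketch of the classify-by-likelihood scheme described above (one three-state HMM per environmental sound class, with the class of highest log-likelihood selected), the code below uses the third-party hmmlearn package; the toy features, class names, and all parameters are placeholder assumptions, not the authors' setup.

```python
import numpy as np
from hmmlearn import hmm   # third-party package: pip install hmmlearn

def train_class_models(features_by_class, n_states=3):
    """Fit one Gaussian HMM per sound class (three states, as in the study)."""
    models = {}
    for label, feature_list in features_by_class.items():
        X = np.vstack(feature_list)                # frames x feature dimensions
        lengths = [len(f) for f in feature_list]   # frames per recording
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=20)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, features):
    """Return the class whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(features))

# Toy 2-D "features" standing in for e.g. MFCC frames (illustrative only).
rng = np.random.default_rng(1)
train = {"door": [rng.normal(0, 1, (50, 2)) for _ in range(5)],
         "bell": [rng.normal(3, 1, (50, 2)) for _ in range(5)]}
models = train_class_models(train)
print(classify(models, rng.normal(3, 1, (40, 2))))   # expected: "bell"
```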

  2. Theoretical and Experimental Aspects of Acoustic Modelling of Engine Exhaust Systems with Applications to a Vacuum Pump

    NASA Astrophysics Data System (ADS)

    Sridhara, Basavapatna Sitaramaiah

    In an internal combustion engine, the engine is the noise source and the exhaust pipe is the main transmitter of noise. Mufflers are often used to reduce engine noise level in the exhaust pipe. To optimize a muffler design, a series of experiments could be conducted using various mufflers installed in the exhaust pipe. For each configuration, the radiated sound pressure could be measured. However, this is not a very efficient method. A second approach would be to develop a scheme involving only a few measurements which can predict the radiated sound pressure at a specified distance from the open end of the exhaust pipe. In this work, the engine exhaust system was modelled as a lumped source-muffler-termination system. An expression for the predicted sound pressure level was derived in terms of the source and termination impedances, and the muffler geometry. The pressure source and monopole radiation models were used for the source and the open end of the exhaust pipe. The four pole parameters were used to relate the acoustic properties at two different cross sections of the muffler and the pipe. The developed formulation was verified through a series of experiments. Two loudspeakers and a reciprocating type vacuum pump were used as sound sources during the tests. The source impedance was measured using the direct, two-load and four-load methods. A simple expansion chamber and a side-branch resonator were used as mufflers. Sound pressure level measurements for the prediction scheme were made for several source-muffler and source-straight pipe combinations. The predicted and measured sound pressure levels were compared for all cases considered. In all cases, correlation of the experimental results and those predicted by the developed expressions was good. Predicted and measured values of the insertion loss of the mufflers were compared. The agreement between the two was good. Also, an error analysis of the four-load method was done.
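
    The four-pole (transfer-matrix) description mentioned above relates pressure and volume velocity at the two ends of each acoustic element; applied to a simple expansion chamber between equal inlet and outlet pipes, it yields the textbook transmission-loss estimate sketched below. This is a generic transfer-matrix example with illustrative dimensions, not the thesis' full source-muffler-termination model.

```python
import numpy as np

def pipe_four_pole(length, area, freq, c=343.0, rho=1.21):
    """Four-pole (transfer) matrix of a uniform pipe section."""
    k = 2 * np.pi * freq / c
    Y = rho * c / area                       # characteristic impedance
    return np.array([[np.cos(k * length), 1j * Y * np.sin(k * length)],
                     [1j * np.sin(k * length) / Y, np.cos(k * length)]])

def expansion_chamber_tl(freq, d_pipe=0.05, d_chamber=0.15, length=0.3,
                         c=343.0, rho=1.21):
    """Transmission loss of a simple expansion chamber between equal pipes."""
    s_pipe = np.pi * d_pipe ** 2 / 4
    s_chamber = np.pi * d_chamber ** 2 / 4
    A, B, C, D = pipe_four_pole(length, s_chamber, freq, c, rho).ravel()
    Y = rho * c / s_pipe
    return 20 * np.log10(0.5 * abs(A + B / Y + C * Y + D))

for f in (125, 250, 500, 1000):
    print(f"{f:5d} Hz : TL = {expansion_chamber_tl(f):4.1f} dB")
```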

  3. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    PubMed Central

    Kamminga, Jacob; Le, Duc; Havinga, Paul

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
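
    The CLASS algorithm operates on time-difference-of-arrival (TDOA) values; a common way to obtain a TDOA between two devices' recordings of the same ambient sound is the lag of the cross-correlation peak, sketched below. This shows only plain cross-correlation, not the paper's subset-splitting outlier removal, and the signals and sample rate are made up.

```python
import numpy as np

def tdoa_cross_correlation(sig_a, sig_b, fs):
    """Time difference of arrival (seconds) of sig_b relative to sig_a,
    taken as the lag that maximizes the full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

# Synthetic ambient sound received by two devices 5 ms apart (illustrative).
fs = 8000
rng = np.random.default_rng(2)
source = rng.normal(size=4000)
delay_samples = 40                       # 40 / 8000 s = 5 ms
sig_a = source
sig_b = np.concatenate([np.zeros(delay_samples), source[:-delay_samples]])
print(f"estimated TDOA = {tdoa_cross_correlation(sig_a, sig_b, fs) * 1e3:.1f} ms")
```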

  4. Amplitude and Wavelength Measurement of Sound Waves in Free Space using a Sound Wave Phase Meter

    NASA Astrophysics Data System (ADS)

    Ham, Sounggil; Lee, Kiwon

    2018-05-01

    We developed a sound wave phase meter (SWPM) and measured the amplitude and wavelength of sound waves in free space. The SWPM consists of two parallel metal plates, where the front plate was operated as a diaphragm. An aluminum perforated plate was additionally installed in front of the diaphragm, and the same signal as that applied to the sound source was applied to the perforated plate. The SWPM measures both the sound wave signal due to the diaphragm vibration and the induction signal due to the electric field of the aluminum perforated plate. Therefore, the two measurement signals interfere with each other due to the phase difference according to the distance between the sound source and the SWPM, and the amplitude of the composite signal that is output as a result is periodically changed. We obtained the wavelength of the sound wave from this periodic amplitude change measured in the free space and compared it with the theoretically calculated values.

  5. Investigation of the Statistics of Pure Tone Sound Power Injection from Low Frequency, Finite Sized Sources in a Reverberant Room

    NASA Technical Reports Server (NTRS)

    Smith, Wayne Farrior

    1973-01-01

    The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.

  6. Localizing nearby sound sources in a classroom: Binaural room impulse responses

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara G.; Kopco, Norbert; Martin, Tara J.

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.

  7. Localizing nearby sound sources in a classroom: binaural room impulse responses.

    PubMed

    Shinn-Cunningham, Barbara G; Kopco, Norbert; Martin, Tara J

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.

  8. Methods for reducing singly reflected rays on the Wolter-I focusing mirrors of the FOXSI rocket experiment

    NASA Astrophysics Data System (ADS)

    Buitrago-Casas, Juan Camilo; Elsner, Ronald; Glesener, Lindsay; Christe, Steven; Ramsey, Brian; Courtade, Sasha; Ishikawa, Shin-nosuke; Narukage, Noriyuki; Turin, Paul; Vievering, Juliana; Athiray, P. S.; Musset, Sophie; Krucker, Säm.

    2017-08-01

    In high energy solar astrophysics, imaging hard X-rays by direct focusing offers higher dynamic range and greater sensitivity compared to past techniques that used indirect imaging. The Focusing Optics X-ray Solar Imager (FOXSI) is a sounding rocket payload that uses seven sets of nested Wolter-I figured mirrors together with seven high-sensitivity semiconductor detectors to observe the Sun in hard X-rays through direct focusing. The FOXSI rocket has successfully flown twice and is funded to fly a third time in summer 2018. The Wolter-I geometry consists of two consecutive mirrors, one paraboloid and one hyperboloid, that reflect photons at grazing angles. Correctly focused X-rays reflect once per mirror segment. For extended sources, like the Sun, off-axis photons at certain incident angles can reflect on only one mirror and still reach the focal plane, generating a background pattern of singly reflected rays (i.e., ghost rays) that can limit the sensitivity of the observation to faint, focused sources. Understanding and mitigating the impact of the singly reflected rays on the FOXSI optical modules will maximize the instruments' sensitivity to background-limited sources. We present an analysis of the FOXSI singly reflected rays based on ray-tracing simulations and laboratory measurements, as well as the effectiveness of different physical strategies to reduce them.

  9. Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air.

    PubMed

    Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban

    2018-01-01

    Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being "targeted." They did not respond when hearing another group member's cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals.

  10. Exploring positive hospital ward soundscape interventions.

    PubMed

    Mackrill, J; Jennings, P; Cain, R

    2014-11-01

    Sound is often considered as a negative aspect of an environment that needs mitigating, particularly in hospitals. It is worthwhile however, to consider how subjective responses to hospital sounds can be made more positive. The authors identified natural sound, steady state sound and written sound source information as having the potential to do this. Listening evaluations were conducted with 24 participants who rated their emotional (Relaxation) and cognitive (Interest and Understanding) response to a variety of hospital ward soundscape clips across these three interventions. A repeated measures ANOVA revealed that the 'Relaxation' response was significantly affected (η² = 0.05, p = 0.001) by the interventions with natural sound producing a 10.1% more positive response. Most interestingly, written sound source information produced a 4.7% positive change in response. The authors conclude that exploring different ways to improve the sounds of a hospital offers subjective benefits that move beyond sound level reduction. This is an area for future work to focus upon in an effort to achieve more positively experienced hospital soundscapes and environments. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Calibration of Seismic Sources during a Test Cruise with the new RV SONNE

    NASA Astrophysics Data System (ADS)

    Engels, M.; Schnabel, M.; Damm, V.

    2015-12-01

    During autumn 2014, several test cruises of the brand new German research vessel SONNE were carried out before the first official scientific cruise started in December. In September 2014, BGR conducted a seismic test cruise in the British North Sea. RV SONNE is a multipurpose research vessel and was also designed for the mobile BGR 3D seismic equipment, which was tested successfully during the cruise. We spent two days calibrating the following BGR seismic sources: a G-gun array (50 l @ 150 bar), a G-gun array (50 l @ 207 bar), and a single GI-gun (3.4 l @ 150 bar). For this experiment two hydrophones (TC4042 from Reson Teledyne) sampling up to 48 kHz were fixed below a drifting buoy at 20 m and 60 m water depth - the sea bottom was at 80 m depth. The vessel with the seismic sources sailed several up to 7 km long profiles around the buoy in order to cover many different azimuths and distances. We aimed to measure sound pressure level (SPL) and sound exposure level (SEL) under the conditions of the shallow North Sea. Total reflections and refracted waves dominate the recorded wave field, enhance the noise level and partly screen the direct wave in contrast to 'true' deep water calibration based solely on the direct wave. Presented are SPL and RMS power results in time domain, the decay with distance along profiles, and the somewhat complicated 2D sound radiation pattern modulated by topography. The shading effect of the vessel's hull is significant. In frequency domain we consider 1/3 octave levels and estimate the amount of energy in frequency ranges not used for reflection seismic processing. Results are presented as a comparison of the three different sources listed above. We compare the measured SPL decay with distance during this experiment with deep water modeling of seismic sources (Gundalf software) and with published results from calibrations with other marine seismic sources under different conditions, e.g., Breitzke et al. (2008, 2010) with RV Polarstern, Tolstoy et al. (2004) with RV Ewing and Tolstoy et al. (2009) with RV Langseth, and Crone et al. (2014) with RV Langseth.

  12. Shock waves and the Ffowcs Williams-Hawkings equation

    NASA Technical Reports Server (NTRS)

    Isom, Morris P.; Yu, Yung H.

    1991-01-01

    The expansion of the double divergence of the generalized Lighthill stress tensor, which is the basis of the concept of the role played by shock and contact discontinuities as sources of dipole and monopole sound, is presently applied to the simplest transonic flows: (1) a fixed wing in steady motion, for which there is no sound field, and (2) a hovering helicopter blade that produces a sound field. Attention is given to the contribution of the shock to sound from the viewpoint of energy conservation; the shock emerges as the source of only the quantity of entropy.

  13. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.

  14. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    NASA Astrophysics Data System (ADS)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus, accounting for linear source interference as well as possible non-linear source interaction effects. When sound sources are determined, for the sound propagation, Acoustic Perturbation Equations (APE-4) are solved in the time-domain. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for benchmark test cases using the new technique which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  15. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources are first considered that are either monochromatic or have a narrow or wide-band frequency content. The source position is estimated accurately, with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.

  16. Different categories of living and non-living sound-sources activate distinct cortical networks

    PubMed Central

    Engel, Lauren R.; Frum, Chris; Puce, Aina; Walker, Nathan A.; Lewis, James W.

    2009-01-01

    With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places—categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and subcortical regions previously implicated in observation-execution matching, consistent with “embodied” and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception. PMID:19465134

  17. The directivity of the sound radiation from panels and openings.

    PubMed

    Davy, John L

    2009-06-01

    This paper presents a method for calculating the directivity of the radiation of sound from a panel or opening, whose vibration is forced by the incidence of sound from the other side. The directivity of the radiation depends on the angular distribution of the incident sound energy in the room or duct in whose wall or end the panel or opening occurs. The angular distribution of the incident sound energy is predicted using a model which depends on the sound absorption coefficient of the room or duct surfaces. If the sound source is situated in the room or duct, the sound absorption coefficient model is used in conjunction with a model for the directivity of the sound source. For angles of radiation approaching 90 degrees to the normal to the panel or opening, the effect of the diffraction by the panel or opening, or by the finite baffle in which the panel or opening is mounted, is included. A simple empirical model is developed to predict the diffraction of sound into the shadow zone when the angle of radiation is greater than 90 degrees to the normal to the panel or opening. The method is compared with published experimental results.

  18. Fundamentals of diagnostic ultrasonography.

    PubMed

    Noce, J P

    1990-01-01

    Diagnostic ultrasonography uses acoustical waves in the frequency range of 1 to 20 MHz. These waves obey Snell's law of reflection and refraction, which are rules common to wave behavior. In ultrasound, the analogy to momentum is acoustic impedance. The acoustic impedance, Z, is equal to the density, ρ, times the velocity, v. The ultrasound transducer converts electrical energy into ultrasound energy and vice versa. The transducer usually consists of a piezoelectric crystal composed of such ceramic materials as barium titanate, lead titanate, zirconate, or lead metaniobate. Five basic ultrasonic scanning modes play the major roles in clinical applications. A-mode, or amplitude-mode, scanning measures the tissue discontinuity along the scan axis. B-mode scanning produces a two-dimensional image of the tissue under study by combining A-mode signals from various directions through mechanical transducer scanning. M-mode, or time motion scanning, is an extension of the A-mode approach in which a single stationary transducer is used. The depth of the echo is displayed on the vertical axis; the brightness of the oscilloscope display is modulated by the echo amplitude. Real-time scanning, or rapid B-scanning, techniques provide continuous data acquisition at a rate sufficient to give the impression of the instantaneous motion of moving structures. Doppler scanning relies on the presence of motion. The Doppler effect occurs when there is relative motion between the source of sound and the receiver of the sound, causing a change in the detected frequency of the sound source.
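
    The impedance relation quoted above determines how strongly an ultrasound beam reflects at a boundary between media; a small worked sketch using the standard normal-incidence intensity reflection coefficient R = ((Z2 - Z1)/(Z2 + Z1))^2, with rough textbook-order material values assumed for illustration:

```python
# Acoustic impedance Z = density * sound speed; intensity reflection at a
# boundary R = ((Z2 - Z1) / (Z2 + Z1))**2. The values below are rough
# textbook-order figures, assumed for illustration only.
media = {                 # (density kg/m^3, sound speed m/s)
    "soft tissue": (1060.0, 1540.0),
    "bone":        (1900.0, 4080.0),
    "air":         (1.2,    343.0),
}
Z = {name: rho * c for name, (rho, c) in media.items()}

def reflection(name1, name2):
    z1, z2 = Z[name1], Z[name2]
    return ((z2 - z1) / (z2 + z1)) ** 2

print(f"tissue/bone : R = {reflection('soft tissue', 'bone'):.2f}")
print(f"tissue/air  : R = {reflection('soft tissue', 'air'):.4f}")
# Near-total reflection at a tissue/air interface is why coupling gel is used.
```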

  19. Theoretical and experimental studies of the waveforms associated with stratospheric infrasonic returns

    NASA Astrophysics Data System (ADS)

    Waxler, R.; Talmadge, C. L.; Blom, P.

    2009-12-01

    Theory predicts that for ground to ground infrasound propagation along paths which travel downwind, relative to the stratospheric jet, there is a shadow zone which ends about 200 km from the source where the first return from the stratosphere strikes the earth. With increasing range the single stratospheric arrival splits into two distinct arrivals, a fast arrival with the trace velocity of the effective sound speed at the stratopause, and a slower arrival with the trace velocity of the sound speed on the ground. To test the theory we have deployed eight infrasound arrays along an approximate line directly west of the site of the US Navy's Trident Missile rocket motor eliminations. The arrays were deployed during the summer of 2009 spaced roughly 10 km apart along a segment from 180 to 260 km west of the site. Comparisons between the theoretical predictions and the received data will be presented.

  20. Evidence for pitch chroma mapping in human auditory cortex.

    PubMed

    Briley, Paul M; Breakey, Charlotte; Krumbholz, Katrin

    2013-11-01

    Some areas in auditory cortex respond preferentially to sounds that elicit pitch, such as musical sounds or voiced speech. This study used human electroencephalography (EEG) with an adaptation paradigm to investigate how pitch is represented within these areas and, in particular, whether the representation reflects the physical or perceptual dimensions of pitch. Physically, pitch corresponds to a single monotonic dimension: the repetition rate of the stimulus waveform. Perceptually, however, pitch has to be described with 2 dimensions, a monotonic, "pitch height," and a cyclical, "pitch chroma," dimension, to account for the similarity of the cycle of notes (c, d, e, etc.) across different octaves. The EEG adaptation effect mirrored the cyclicality of the pitch chroma dimension, suggesting that auditory cortex contains a representation of pitch chroma. Source analysis indicated that the centroid of this pitch chroma representation lies somewhat anterior and lateral to primary auditory cortex.

  1. Evidence for Pitch Chroma Mapping in Human Auditory Cortex

    PubMed Central

    Briley, Paul M.; Breakey, Charlotte; Krumbholz, Katrin

    2013-01-01

    Some areas in auditory cortex respond preferentially to sounds that elicit pitch, such as musical sounds or voiced speech. This study used human electroencephalography (EEG) with an adaptation paradigm to investigate how pitch is represented within these areas and, in particular, whether the representation reflects the physical or perceptual dimensions of pitch. Physically, pitch corresponds to a single monotonic dimension: the repetition rate of the stimulus waveform. Perceptually, however, pitch has to be described with 2 dimensions, a monotonic, “pitch height,” and a cyclical, “pitch chroma,” dimension, to account for the similarity of the cycle of notes (c, d, e, etc.) across different octaves. The EEG adaptation effect mirrored the cyclicality of the pitch chroma dimension, suggesting that auditory cortex contains a representation of pitch chroma. Source analysis indicated that the centroid of this pitch chroma representation lies somewhat anterior and lateral to primary auditory cortex. PMID:22918980

  2. USAF bioenvironmental noise data handbook. Volume 158: F-106A aircraft, near and far-field noise

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-05-01

    The USAF F-106A is a single seat, all-weather fighter/interceptor aircraft powered by a J75-P-17 turbojet engine. This report provides measured and extrapolated data defining the bioacoustic environments produced by this aircraft operating on a concrete runup pad for five engine-power conditions. Near-field data are reported for five locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise levels, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 19 locations are normalized to standard meteorological conditions and extrapolated from 75 - 8000 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.

  3. Spatial hearing in Cope’s gray treefrog: I. Open and closed loop experiments on sound localization in the presence and absence of noise

    PubMed Central

    Caldwell, Michael S.; Bee, Mark A.

    2014-01-01

    The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182

  4. A study of methods of prediction and measurement of the transmission of sound through the walls of light aircraft

    NASA Technical Reports Server (NTRS)

    Forssen, B.; Wang, Y. S.; Raju, P. K.; Crocker, M. J.

    1981-01-01

    The acoustic intensity technique was applied to the sound transmission loss of panel structures (single, composite, and stiffened). A theoretical model of sound transmission through a cylindrical shell is presented.

  5. A study of methods of prediction and measurement of the transmission of sound through the walls of light aircraft

    NASA Astrophysics Data System (ADS)

    Forssen, B.; Wang, Y. S.; Raju, P. K.; Crocker, M. J.

    1981-08-01

    The acoustic intensity technique was applied to the sound transmission loss of panel structures (single, composite, and stiffened). A theoretical model of sound transmission through a cylindrical shell is presented.

  6. Sound-direction identification with bilateral cochlear implants.

    PubMed

    Neuman, Arlene C; Haravon, Anita; Sislian, Nicole; Waltzman, Susan B

    2007-02-01

    The purpose of this study was to compare the accuracy of sound-direction identification in the horizontal plane by bilateral cochlear implant users when localization was measured with pink noise and with speech stimuli. Eight adults who were bilateral users of Nucleus 24 Contour devices participated in the study. All had received implants in both ears in a single surgery. Sound-direction identification was measured in a large classroom by using a nine-loudspeaker array. Localization was tested in three listening conditions (bilateral cochlear implants, left cochlear implant, and right cochlear implant), using two different stimuli (a speech stimulus and pink noise bursts) in a repeated-measures design. Sound-direction identification accuracy was significantly better when using two implants than when using a single implant. The mean root-mean-square error was 29 degrees for the bilateral condition, 54 degrees for the left cochlear implant, and 46.5 degrees for the right cochlear implant condition. Unilateral accuracy was similar for right cochlear implant and left cochlear implant performance. Sound-direction identification performance was similar for speech and pink noise stimuli. The data obtained in this study add to the growing body of evidence that sound-direction identification with bilateral cochlear implants is better than with a single implant. The similarity in localization performance obtained with the speech and pink noise supports the use of either stimulus for measuring sound-direction identification.

  7. Sex differences present in auditory looming perception, absent in auditory recession

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.; Seifritz, Erich

    2005-04-01

    When predicting the arrival time of an approaching sound source, listeners typically exhibit an anticipatory bias that affords a margin of safety in dealing with looming objects. The looming bias has been demonstrated behaviorally in the laboratory and in the field (Neuhoff 1998, 2001), neurally in fMRI studies (Seifritz et al., 2002), and comparatively in non-human primates (Ghazanfar, Neuhoff, and Logothetis, 2002). In the current work, male and female listeners were presented with three-dimensional looming sound sources and asked to press a button when the source was at the point of closest approach. Females exhibited a significantly greater anticipatory bias than males. Next, listeners were presented with sounds that either approached or receded and then stopped at three different terminal distances. Consistent with the time-to-arrival judgments, female terminal distance judgments for looming sources were significantly closer than male judgments. However, there was no difference between male and female terminal distance judgments for receding sounds. Taken together with the converging behavioral, neural, and comparative evidence, the current results illustrate the environmental salience of looming sounds and suggest that the anticipatory bias for auditory looming may have been shaped by evolution to provide a selective advantage in dealing with looming objects.

  8. A possible approach to optimization of parameters of sound-absorbing structures for multimode waveguides

    NASA Astrophysics Data System (ADS)

    Mironov, M. A.

    2011-11-01

    A method of allowing for the spatial sound field structure in designing the sound-absorbing structures for turbojet aircraft engine ducts is proposed. The acoustic impedance of a duct should be chosen so as to prevent the reflection of the primary sound field, which is generated by the sound source in the absence of the duct, from the duct walls.

  9. Sandstone petrographic evidence and the Chugach-Prince William terrane boundary in southern Alaska

    USGS Publications Warehouse

    Dumoulin, Julie A.

    1988-01-01

    The contact between the Upper Cretaceous Valdez Group and the Paleocene and Eocene Orca Group has been inferred to be the boundary between the Chugach and the Prince William tectonostratigraphic terranes. Sandstone petrographic data from the Prince William Sound area show no compositional discontinuity across this contact. These data are best explained by considering the Valdez and Orca Groups to be part of a single terrane - a thick flysch sequence derived primarily from a progressively unroofing magmatic arc with increasing input from subduction-complex sources through time.

  10. Quantifying the influence of flow asymmetries on glottal sound sources in speech

    NASA Astrophysics Data System (ADS)

    Erath, Byron; Plesniak, Michael

    2008-11-01

    Human speech is made possible by the air flow interaction with the vocal folds. During phonation, asymmetries in the glottal flow field may arise from flow phenomena (e.g. the Coanda effect) as well as from pathological vocal fold motion (e.g. unilateral paralysis). In this study, the effects of flow asymmetries on glottal sound sources were investigated. Dynamically-programmable 7.5 times life-size vocal fold models with 2 degrees-of-freedom (linear and rotational) were constructed to provide a first-order approximation of vocal fold motion. Important parameters (Reynolds, Strouhal, and Euler numbers) were scaled to physiological values. Normal and abnormal vocal fold motions were synthesized, and the velocity field and instantaneous transglottal pressure drop were measured. Variability in the glottal jet trajectory necessitated sorting of the data according to the resulting flow configuration. The dipole sound source is related to the transglottal pressure drop via acoustic analogies. Variations in the transglottal pressure drop (and subsequently the dipole sound source) arising from flow asymmetries are discussed.
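    The dipole source mentioned above is related to the fluctuating aerodynamic force on the vocal folds through an acoustic analogy. As general background rather than the authors' specific formulation, the standard far-field dipole result for a compact body (Curle's analogy) reads:

    ```latex
    % Far-field dipole pressure radiated by the unsteady force F_i(t) that a compact
    % body exerts on the fluid, for an observer at distance |x| (Curle's analogy):
    p'(\mathbf{x},t) \;\approx\; \frac{x_i}{4\pi c_0 |\mathbf{x}|^2}\,
      \frac{\partial F_i}{\partial t}\!\left(t-\frac{|\mathbf{x}|}{c_0}\right)
    ```

    In this picture the measured transglottal pressure drop serves as a surrogate for the strength of the dipole source, which is why its variations across flow configurations are of interest.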

  11. Psychophysical evidence for auditory motion parallax.

    PubMed

    Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz

    2018-04-17

    Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.

  12. Auditory event perception: the source-perception loop for posture in human gait.

    PubMed

    Pastore, Richard E; Flint, Jesse D; Gaston, Jeremy R; Solomon, Matthew J

    2008-01-01

    There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright-stooped) of the walker who generated acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies employed by listeners in making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.

  13. Nonlinear theory of shocked sound propagation in a nearly choked duct flow

    NASA Technical Reports Server (NTRS)

    Myers, M. K.; Callegari, A. J.

    1982-01-01

    The development of shocks in the sound field propagating through a nearly choked duct flow is analyzed by extending a quasi-one dimensional theory. The theory is applied to the case in which sound is introduced into the flow by an acoustic source located in the vicinity of a near-sonic throat. Analytical solutions for the field are obtained which illustrate the essential features of the nonlinear interaction between sound and flow. Numerical results are presented covering ranges of variation of source strength, throat Mach number, and frequency. It is found that the development of shocks leads to appreciable attenuation of acoustic power transmitted upstream through the near-sonic flow. It is possible, for example, that the power loss in the fundamental harmonic can be as much as 90% of that introduced at the source.

  14. Noise abatement in a pine plantation

    Treesearch

    R. E. Leonard; L. P. Herrington

    1971-01-01

    Observations on sound propagation were made in two red pine plantations. Measurements were taken of attenuation of prerecorded frequencies at various distances from the sound source. Sound absorption was strongly dependent on frequencies. Peak absorption was at 500 Hz.

  15. Hearing in three dimensions: Sound localization

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1990-01-01

    The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of audio. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
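    Of the cues listed above, the interaural time difference is the easiest to approximate analytically. A common back-of-the-envelope model (not taken from this report) is Woodworth's rigid-sphere formula, ITD ≈ (a/c)(θ + sin θ), for head radius a, sound speed c, and source azimuth θ; a small sketch with typical values:

    ```python
    import numpy as np

    def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
        """Approximate interaural time difference (s) for a rigid spherical head."""
        theta = np.radians(azimuth_deg)
        return (head_radius_m / c) * (theta + np.sin(theta))

    for az in (0, 30, 60, 90):
        print(f"azimuth {az:2d} deg -> ITD ~ {itd_woodworth(az) * 1e6:4.0f} us")
    ```

    For an average adult head this gives a maximum ITD on the order of 650-700 µs at 90° azimuth, the range usually cited in the localization literature.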

  16. Acoustic characterization of a nonlinear vibroacoustic absorber at low frequencies and high sound levels

    NASA Astrophysics Data System (ADS)

    Chauvin, A.; Monteil, M.; Bellizzi, S.; Côte, R.; Herzog, Ph.; Pachebat, M.

    2018-03-01

    A nonlinear vibroacoustic absorber (Nonlinear Energy Sink: NES), involving a clamped thin membrane made of latex, is assessed in the acoustic domain. This NES is here considered as a one-port acoustic system, analyzed at low frequencies and for increasing excitation levels. This dynamic and frequency range requires a suitable experimental technique, which is presented first. It involves a specific impedance tube able to deal with samples of sufficient size, reaching high sound levels with a guaranteed linear response thanks to a specific acoustic source. The identification method presented here requires a single pressure measurement and is calibrated from a set of known acoustic loads. The NES reflection coefficient is then estimated at increasing source levels, showing its strong level dependency. This is presented as a means of understanding energy dissipation. The results of the experimental tests are first compared to a nonlinear viscoelastic model of the membrane absorber. In a second step, a family of one-degree-of-freedom models, treated as equivalent Helmholtz resonators, is identified from the measurements, allowing a parametric description of the NES behavior over a wide range of levels.
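    The reflection coefficient estimated in the study characterizes the NES as a one-port load. For plane waves terminating a tube, the textbook relation is R = (Z - ρc)/(Z + ρc), with the absorbed fraction 1 - |R|². The sketch below uses this relation with invented impedance values purely to illustrate a level-dependent absorber; it is not measured NES data:

    ```python
    import numpy as np

    RHO_C = 415.0  # characteristic impedance of air (Pa*s/m) at room conditions

    def reflection_coefficient(Z):
        """Plane-wave pressure reflection coefficient of a one-port load of impedance Z."""
        return (Z - RHO_C) / (Z + RHO_C)

    def absorption_coefficient(Z):
        return 1.0 - np.abs(reflection_coefficient(Z)) ** 2

    # Invented impedances illustrating a level-dependent absorber: poorly matched at low
    # excitation, closer to rho*c (better matched, more absorption) at high excitation.
    for label, Z in (("low level", 4000 + 2500j), ("high level", 600 + 150j)):
        print(f"{label}: |R| = {np.abs(reflection_coefficient(Z)):.2f}, "
              f"alpha = {absorption_coefficient(Z):.2f}")
    ```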

  17. Using sounds for making decisions: greater tube-nosed bats prefer antagonistic calls over non-communicative sounds when feeding

    PubMed Central

    Jiang, Tinglei; Long, Zhenyu; Ran, Xin; Zhao, Xue; Xu, Fei; Qiu, Fuyuan; Kanwal, Jagmeet S.

    2016-01-01

    Bats vocalize extensively within different social contexts. The type and extent of information conveyed via their vocalizations and their perceptual significance, however, remains controversial and difficult to assess. Greater tube-nosed bats, Murina leucogaster, emit calls consisting of long rectangular broadband noise burst (rBNBl) syllables during aggression between males. To experimentally test the behavioral impact of these sounds for feeding, we deployed an approach and place-preference paradigm. Two food trays were placed on opposite sides and within different acoustic microenvironments, created by sound playback, within a specially constructed tent. Specifically, we tested whether the presence of rBNBl sounds at a food source effectively deters the approach of male bats in comparison to echolocation sounds and white noise. In each case, contrary to our expectation, males preferred to feed at a location where rBNBl sounds were present. We propose that the species-specific rBNBl provides contextual information, not present within non-communicative sounds, to facilitate approach towards a food source. PMID:27815241

  18. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    PubMed

    Edds-Walton, Peggy L

    2016-01-01

    Of the three, paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  19. Replacing the Orchestra? – The Discernibility of Sample Library and Live Orchestra Sounds

    PubMed Central

    Wolf, Anna; Platz, Friedrich; Mons, Jan

    2016-01-01

    Recently, musical sounds from pre-recorded orchestra sample libraries (OSL) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source at 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons. PMID:27382932

  20. The Coast Artillery Journal. Volume 65, Number 4, October 1926

    DTIC Science & Technology

    1926-10-01

    sound. a. Sound location of airplanes by binaural observation in all antiaircraft regiments. b. Sound ranging on report of enemy guns, together with ... Direction finding by binaural observation. [Subparagraphs 30a and 30c(1).] This applies to continuous sounds such as propellor noises. b. Point ... impacts. 32. The so-called binaural sense is our means of sensing the direction of a sound source. When we hear a sound we judge the approximate

  1. Hemispherical breathing mode speaker using a dielectric elastomer actuator.

    PubMed

    Hosoya, Naoki; Baba, Shun; Maeda, Shingo

    2015-10-01

    Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple loudspeakers on a chassis, is typically used. However, such a configuration is not a point sound source if the size of the loudspeaker is large relative to the target sound field. This study investigates a small lightweight loudspeaker using a dielectric elastomer actuator vibrating in the breathing mode (the pulsating mode such as the expansion and contraction of a balloon). Acoustic testing with regard to repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicate that dielectric elastomer loudspeakers may be feasible.

  2. The role of reverberation-related binaural cues in the externalization of speech.

    PubMed

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2015-08-01

    The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.

  3. BUFR TABLE A

    Science.gov Websites

    BUFR Table A data categories (excerpt): 1 Surface data - sea; 2 Vertical soundings (other than satellite); 3 Vertical soundings (satellite); 4 Single level upper-air data (other than satellite); 5 Single level upper-air data (satellite); 6 Radar data; 7 ...; 11 BUFR tables, complete replacement or update; 12 Surface data (satellite); 13 Forecasts; 14 Warnings; 15-19 ...

  4. Airborne sound insulation evaluation and flanking path prediction of coupled room

    NASA Astrophysics Data System (ADS)

    Tassia, R. D.; Asmoro, W. A.; Arifianto, D.

    2016-11-01

    One of the parameters for assessing acoustic comfort is the insulation value of the partitions in a classroom. The insulation value can be expressed as the sound transmission loss, converted into a single-number weighted sound reduction index (Rw, DnTw) with additional low-frequency spectrum adaptation terms (C, Ctr). In this study, measurements were performed at two positions for each point using a BSWA microphone and a dodecahedron loudspeaker as the sound source. The field measurements give an acoustic insulation value (DnTw + C) of 19.6 dB. The partition wall therefore does not meet the standard, which requires DnTw + C > 51 dB, and needs to be redesigned to improve the acoustic insulation of the classroom. The redesign considered gypsum board, plasterboard, cement board, and PVC as replacement materials, and all materials were simulated in accordance with established standards. The best insulation is provided by cement board, with an insulation value of 69 dB for 12.5 mm thick panels on each side and 50 mm of absorber material. Several factors increase the acoustic insulation value, such as panel thickness, the addition of absorber material, density, and the Poisson's ratio of the material. Flanking paths can be predicted from the noise reduction values at each measurement point in the classroom. The data show no significant change in noise reduction between points, so flanking paths do not significantly affect sound transmission in the classroom.

  5. An open access database for the evaluation of heart sound algorithms.

    PubMed

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.
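    As a minimal sketch of how such recordings can be handled in practice (the file name is hypothetical and this is not the Challenge's own open-source code), a PCG waveform can be loaded, band-limited to the range where most heart sound energy lies, and inspected as a spectrogram:

    ```python
    import numpy as np
    from scipy.io import wavfile
    from scipy import signal

    # Hypothetical path to one phonocardiogram (PCG) recording stored as a WAV file.
    fs, pcg = wavfile.read("a0001.wav")
    pcg = pcg.astype(float) / np.max(np.abs(pcg))    # normalize amplitude

    # Band-limit to roughly 25-400 Hz, where most heart sound energy lies.
    sos = signal.butter(4, [25, 400], btype="bandpass", fs=fs, output="sos")
    pcg_filt = signal.sosfiltfilt(sos, pcg)

    # A spectrogram gives a quick time-frequency view of S1/S2 sounds and murmurs.
    f, t, Sxx = signal.spectrogram(pcg_filt, fs=fs, nperseg=256, noverlap=128)
    print(f"fs = {fs} Hz, duration = {len(pcg) / fs:.1f} s, spectrogram shape = {Sxx.shape}")
    ```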

  6. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex.

    PubMed

    Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C

    2006-03-20

    In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.

  7. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    PubMed

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to determine amplitudes the size of an atom and locate the acoustic stimuli with an accuracy of within 13° based on their neuronal anatomy. We present here a prototype sound source localization system, inspired from this impressive performance. The system presented utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization error than those observed in nature.

  8. Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air

    PubMed Central

    Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban

    2018-01-01

    Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being “targeted.” They did not respond when hearing another group member’s cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals. PMID:29445350

  9. The influence of crowd density on the sound environment of commercial pedestrian streets.

    PubMed

    Meng, Qi; Kang, Jian

    2015-04-01

    Commercial pedestrian streets are very common in China and Europe, with many situated in historic or cultural centres. The environments of these streets are important, including their sound environments. The objective of this study is to explore the relationships between the crowd density and the sound environments of commercial pedestrian streets. On-site measurements were performed at the case study site in Harbin, China, and a questionnaire was administered. The sound pressure measurements showed that the crowd density has an insignificant effect on sound pressure below 0.05 persons/m2, whereas when the crowd density is greater than 0.05 persons/m2, the sound pressure increases with crowd density. The sound sources were analysed, showing that several typical sound sources, such as traffic noise, can be masked by the sounds resulting from dense crowds. The acoustic analysis showed that crowd densities outside the range of 0.10 to 0.25 persons/m2 exhibited lower acoustic comfort evaluation scores. In terms of audiovisual characteristics, the subjective loudness increases with greater crowd density, while the acoustic comfort decreases. The results for an indoor underground shopping street are also presented for comparison. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.

    PubMed

    Paraouty, Nihaad; Stasiak, Arkadiusz; Lorenzi, Christian; Varnet, Léo; Winter, Ian M

    2018-04-25

    Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of those encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, for modulation depths within the receptive field, low-BF units (<4 kHz) demonstrate good phase locking to TFS. For modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types. TFS cues are mainly conveyed by low-frequency and primary-like units and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues. SIGNIFICANCE STATEMENT Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications as robust sound source recognition depends crucially on the reception of low-rate FM cues. Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues. We also demonstrate a diversity of neural responses with different coding specializations. These results support the dual-coding scheme proposed by psychophysicists to account for FM sensitivity in humans and provide new insights on how this might be implemented in the early stages of the auditory pathway. Copyright © 2018 the authors 0270-6474/18/384123-15$15.00/0.
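    The ENV/TFS distinction underlying the study can be illustrated with a simple simulation: synthesize a sine-FM tone, pass it through a band-pass filter standing in for a cochlear channel at the best frequency, and split the output into envelope and temporal fine structure with the Hilbert transform. This is a generic sketch with illustrative parameters, not the stimulus or analysis code used in the experiments:

    ```python
    import numpy as np
    from scipy import signal

    fs = 48000
    t = np.arange(0, 0.5, 1 / fs)
    fc, fm = 1000.0, 5.0          # carrier (best frequency) and FM rate

    # Crude stand-in for a cochlear filter centred at the best frequency.
    sos = signal.butter(4, [fc - 200, fc + 200], btype="bandpass", fs=fs, output="sos")

    for df in (100.0, 600.0):     # FM excursion within vs. beyond the filter passband
        tone = np.sin(2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t))
        channel = signal.sosfilt(sos, tone)

        analytic = signal.hilbert(channel)
        env = np.abs(analytic)             # temporal envelope (ENV)
        tfs = np.cos(np.angle(analytic))   # temporal fine structure (TFS)

        env_steady = env[int(0.05 * fs):]  # discard the filter's start-up transient
        depth = (env_steady.max() - env_steady.min()) / env_steady.max()
        print(f"FM excursion {df:4.0f} Hz -> ENV modulation depth ~ {depth:.2f}")
    ```

    A small excursion stays within the filter passband and leaves the envelope nearly flat (TFS carries the modulation), whereas a large excursion sweeps in and out of the passband and imposes strong envelope fluctuations at the modulation rate, mirroring the within- versus beyond-receptive-field cases described above.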

  11. Soundscapes and the sense of hearing of fishes.

    PubMed

    Fay, Richard

    2009-03-01

    Underwater soundscapes have probably played an important role in the adaptation of ears and auditory systems of fishes throughout evolutionary time, and for all species. These sounds probably contain important information about the environment and about most objects and events that confront the receiving fish so that appropriate behavior is possible. For example, the sounds from reefs appear to be used by at least some fishes for their orientation and migration. These sorts of environmental sounds should be considered much like "acoustic daylight," that continuously bathes all environments and contain information that all organisms can potentially use to form a sort of image of the environment. At present, however, we are generally ignorant of the nature of ambient sound fields impinging on fishes, and the adaptive value of processing these fields to resolve the multiple sources of sound. Our field has focused almost exclusively on the adaptive value of processing species-specific communication sounds, and has not considered the informational value of ambient "noise." Since all fishes can detect and process acoustic particle motion, including the directional characteristics of this motion, underwater sound fields are potentially more complex and information-rich than terrestrial acoustic environments. The capacities of one fish species (goldfish) to receive and make use of such sound source information have been demonstrated (sound source segregation and auditory scene analysis), and it is suggested that all vertebrate species have this capacity. A call is made to better understand underwater soundscapes, and the associated behaviors they determine in fishes. © 2009 ISZS, Blackwell Publishing and IOZ/CAS.

  12. Possibilities of psychoacoustics to determine sound quality

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

    For some years, acoustic engineers have increasingly become aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but to design sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation to describe subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures which are known so far have yet not been standardized. On the other hand, they have only been tested in laboratories by means of listening tests in the free-field and one sound source and simple signals. Therefore, the results achieved cannot be transferred to complex sound situations with several spatially distributed sound sources without difficulty. Due to the directional hearing and selectivity of human hearing, individual sound events can be selected among many. Already in the late seventies a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system was further developed, particularly by the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals regarding physical and psychoacoustic procedures. Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domain so that those signal components being responsible for noise annoyance can be found. Especially in complex sound situations with several spatially distributed sound sources, standard, one-channel measurements methods cannot adequately determine sound quality, the acoustic comfort, or annoyance of sound events.

  13. The Emergence of the Allophonic Perception of Unfamiliar Speech Sounds: The Effects of Contextual Distribution and Phonetic Naturalness

    ERIC Educational Resources Information Center

    Noguchi, Masaki; Hudson Kam, Carla L.

    2018-01-01

    In human languages, different speech sounds can be contextual variants of a single phoneme, called allophones. Learning which sounds are allophones is an integral part of the acquisition of phonemes. Whether given sounds are separate phonemes or allophones in a listener's language affects speech perception. Listeners tend to be less sensitive to…

  14. Temporal-Spectral Characterization and Classification of Marine Mammal Vocalizations and Diesel-Electric Ships Radiated Sound over Continental Shelf Scale Regions with Coherent Hydrophone Array Measurements

    NASA Astrophysics Data System (ADS)

    Huang, Wei

    The passive ocean acoustic waveguide remote sensing (POAWRS) technology is capable of monitoring a large variety of underwater sound sources over instantaneous wide areas spanning continental-shelf scale regions. POAWRS uses a large-aperture, densely-sampled coherent hydrophone array to significantly enhance the signal-to-noise ratio via beamforming, enabling detection of sound sources roughly two orders of magnitude more distant in range than is possible with a single hydrophone. The sound sources detected by POAWRS include ocean biology, geophysical processes, and man-made activities. POAWRS provides detection, bearing-time estimation, localization, and classification of underwater sound sources. The volume of underwater sounds detected by POAWRS is immense, typically exceeding a million unique signal detections per day in the 10-4000 Hz frequency range, making it a tremendously challenging task to distinguish and categorize the various sound sources present in a given region. Here we develop various approaches for characterizing and clustering the signal detections for various subsets of data acquired using the POAWRS technology. The approaches include pitch tracking of the dominant signal detections, time-frequency feature extraction, and clustering and categorization methods. These approaches are essential for automatic processing and for enhancing the efficiency and accuracy of POAWRS data analysis. The results of the signal detection, clustering, and classification analysis are required for further POAWRS processing, including localization and tracking of a large number of oceanic sound sources. Here the POAWRS detection, localization, and clustering approaches are applied to analyze and elucidate the vocalization behavior of humpback, sperm, and fin whales on the New England continental shelf and slope, including the Gulf of Maine, from data acquired using coherent hydrophone arrays. The POAWRS technology can also be applied to monitoring ocean vehicles. Here the approach is calibrated by application to known ships present in the Gulf of Maine and in the Norwegian Sea from their underwater sounds received using a coherent hydrophone array. The vocalization behavior of humpback whales was monitored over vast areas of the Gulf of Maine using the POAWRS technique over multiple diel cycles in Fall 2006. The humpback vocalizations, received at a rate of roughly 1800+/-1100 calls per day, comprised both song and non-song calls. The song vocalizations, composed of a highly structured and repeatable set of phrases, are characterized by inter-pulse intervals of 3.5 +/- 1.8 s. Songs were detected throughout the diel cycle, occurring roughly 40% during the day and 60% during the night. The humpback non-song vocalizations, dominated by shorter duration (≤3 s) downsweep and bow-shaped moans, as well as a small fraction of longer duration (~5 s) cries, have significantly larger mean and more variable inter-pulse intervals of 14.2 +/- 11 s. The non-song vocalizations were detected at night with negligible detections during the day, implying they probably function as nighttime communication signals. The humpback song and non-song vocalizations are separately localized using the moving array triangulation and array invariant techniques. The humpback song and non-song moan calls are both consistently localized to a dense area on northeastern Georges Bank and a less dense region extending from Franklin Basin to the Great South Channel.
Humpback cries occur exclusively on northeastern Georges Bank and during nights with coincident dense Atlantic herring shoaling populations, implying the cries are feeding-related. Sperm whales on the New England continental shelf and slope were passively localized and classified from their vocalizations received using a single low-frequency (<2500 Hz) densely-sampled horizontal coherent hydrophone array deployed in spring 2013 in the Gulf of Maine. Whale bearings were estimated using time-domain beamforming that provided high coherent array gain in sperm whale click signal-to-noise ratio. Whale ranges from the receiver array center were estimated using the moving array triangulation technique from a sequence of whale bearing measurements. Multiple concurrently vocalizing sperm whales, in the far field of the horizontal receiver array, were distinguished and classified based on their horizontal spatial locations and the inter-pulse intervals of their vocalized click signals. We provide detailed analysis of over 15,000 fin whale 20 Hz vocalizations received on Oct 1-3, 2006 in the Gulf of Maine. These vocalizations are separated into 16 clusters following the clustering approaches. Seven of these types are prominent, each accounting for between 8% and 16%, and together they comprise roughly 85% of all the analyzed vocalizations. The 7 prominent clusters are each roughly 2.5 times more abundant during nighttime hours than during the daytime. The diel-spatial correlation of the 7 prominent clusters to the simultaneously observed densities of their fish prey, the Atlantic herring in the Gulf of Maine, is provided, which implies that the factor of roughly 2.5 increase in call rate during nighttime hours can be attributed to increased fish-feeding activities. (Abstract shortened by ProQuest.)
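    The coherent array gain that POAWRS obtains from beamforming can be illustrated with a generic delay-and-sum sketch for a uniform line array; this is not the POAWRS processing chain, and the array geometry, sound speed, and signal parameters below are hypothetical:

    ```python
    import numpy as np

    c = 1500.0         # nominal sound speed in seawater (m/s)
    n_sensors = 64     # hypothetical number of hydrophones in a uniform line array
    spacing = 0.75     # element spacing (m), half a wavelength at 1 kHz
    fs = 8000
    f0 = 1000.0
    t = np.arange(0, 1.0, 1 / fs)

    # Simulate a 1 kHz plane wave arriving from 30 degrees off broadside, buried in noise.
    true_bearing = np.radians(30.0)
    delays = np.arange(n_sensors) * spacing * np.sin(true_bearing) / c
    rng = np.random.default_rng(0)
    data = np.array([np.cos(2 * np.pi * f0 * (t - d)) for d in delays])
    data += rng.normal(scale=3.0, size=data.shape)   # per-channel SNR well below 0 dB

    # Delay-and-sum beamforming: time-align the channels for each candidate bearing,
    # average them, and pick the bearing with the highest beam power.
    bearings = np.radians(np.linspace(-90, 90, 361))
    power = []
    for b in bearings:
        steer = np.arange(n_sensors) * spacing * np.sin(b) / c
        aligned = [np.interp(t + d, t, ch) for d, ch in zip(steer, data)]
        beam = np.mean(aligned, axis=0)
        power.append(np.mean(beam ** 2))

    print(f"estimated bearing: {np.degrees(bearings[int(np.argmax(power))]):.1f} deg")
    ```

    Averaging N well-aligned channels reduces incoherent noise power by a factor of N, i.e., roughly a 10 log10(N) dB array gain, which is what makes distant sources detectable against the ambient background.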

  15. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants.

    PubMed

    Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H

    2016-08-01

    To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low frequency, acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing, localization, and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.

  16. Spherical harmonic analysis of the sound radiation from omnidirectional loudspeaker arrays

    NASA Astrophysics Data System (ADS)

    Pasqual, A. M.

    2014-09-01

    Omnidirectional sound sources are widely used in room acoustics. These devices are made up of loudspeakers mounted on a spherical or polyhedral cabinet, where the dodecahedral shape prevails. Although such electroacoustic sources have been made readily available to acousticians by many manufacturers, an in-depth investigation of their vibroacoustic behavior has not been provided yet. To fill this gap, this paper presents a theoretical study of the sound radiation from omnidirectional loudspeaker arrays, which is carried out by using a mathematical model based on spherical harmonic analysis. Eight different loudspeaker arrangements on the sphere are considered: the well-known five Platonic solid layouts and three extremal system layouts. The latter possess useful properties for spherical loudspeaker arrays used as directivity-controlled sound sources, so these layouts are included here in order to investigate whether or not they could be of interest as omnidirectional sources as well. It is shown through a comparative analysis that the dodecahedral array leads to the lowest error in producing an omnidirectional sound field and to the highest acoustic power, which corroborates the prevalence of such a layout. In addition, if a source with fewer than 12 loudspeakers is required, it is shown that tetrahedra or hexahedra can be used alternatively, whereas the extremal system layouts are not interesting choices for omnidirectional loudspeaker arrays.
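    As a small generic illustration of the spherical harmonic machinery used in such analyses (not the paper's model), the coefficients of a directivity pattern sampled at a set of driver directions can be estimated by least squares; the sampling directions and the target pattern below are arbitrary placeholders:

    ```python
    import numpy as np
    from scipy.special import sph_harm

    # Twelve hypothetical sampling directions on the sphere (a real analysis would use
    # the actual driver positions of the dodecahedral or extremal layout).
    rng = np.random.default_rng(1)
    azimuth = rng.uniform(0, 2 * np.pi, 12)         # scipy convention: theta = azimuth
    colatitude = np.arccos(rng.uniform(-1, 1, 12))  # phi = polar angle

    # Target directivity: unit pressure in every direction (ideal omnidirectional source).
    target = np.ones(12, dtype=complex)

    # Build the spherical harmonic basis up to order N and fit coefficients by least squares.
    N = 2
    basis = []
    for n in range(N + 1):
        for m in range(-n, n + 1):
            basis.append(sph_harm(m, n, azimuth, colatitude))
    basis = np.array(basis).T                       # shape (12, (N+1)**2)
    coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)

    # For an omnidirectional pattern the (0, 0) term should dominate.
    print("|coefficients| in (n, m) order:", np.round(np.abs(coeffs), 3))
    ```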

  17. The use of an active controlled enclosure to attenuate sound radiation from a heavy radiator

    NASA Astrophysics Data System (ADS)

    Sun, Yao; Yang, Tiejun; Zhu, Minggang; Pan, Jie

    2017-03-01

    Active structural acoustical control usually experiences difficulty in the control of heavy sources, or of sources where direct application of control forces is not practical. To overcome this difficulty, an actively controlled enclosure, which forms a cavity with both flexible and open boundaries, is employed. This configuration permits indirect implementation of active control, in which the control inputs can be applied to subsidiary structures other than the sources. To determine the control effectiveness of the configuration, the vibro-acoustic behavior of the system, which consists of a top plate with an opening, a sound cavity, and a source panel, is investigated in this paper. A complete mathematical model of the system is formulated involving modified Fourier series formulations, and the governing equations are solved using the Rayleigh-Ritz method. The coupling mechanisms of a partly opened cavity and a plate are analysed in terms of modal responses and directivity patterns. Furthermore, to attenuate the sound power radiated from both the top panel and the opening, two strategies are studied: minimizing the total radiated power and cancelling the volume velocity. Moreover, three control configurations are compared: using a point force on the control panel (structural control), using a sound source in the cavity (acoustical control), and applying hybrid structural-acoustical control. In addition, the effects of the boundary conditions of the control panel on the sound radiation and control performance are discussed.

  18. Material sound source localization through headphones

    NASA Astrophysics Data System (ADS)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
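    The binaural stimuli described above are obtained by convolving each dry recording with a left and a right head-related impulse response (HRIR) for the desired direction. A minimal sketch of that rendering step follows; the file names are hypothetical and the actual non-individual HRTF set used in the study is not reproduced:

    ```python
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    # Hypothetical files: one dry mono recording and a measured HRIR pair for ~30 deg azimuth.
    fs, mono = wavfile.read("bongo_hit.wav")
    _, hrir_left = wavfile.read("hrir_az030_left.wav")
    _, hrir_right = wavfile.read("hrir_az030_right.wav")

    mono = mono.astype(float)

    # Convolving the dry sound with each ear's impulse response places it at that direction.
    left = fftconvolve(mono, hrir_left.astype(float))
    right = fftconvolve(mono, hrir_right.astype(float))

    # Normalize jointly to preserve the interaural level difference, then write a stereo file.
    binaural = np.stack([left, right], axis=1)
    binaural /= np.max(np.abs(binaural))
    wavfile.write("bongo_hit_binaural.wav", fs, (binaural * 32767).astype(np.int16))
    ```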

  19. Hydrodynamic phonon drift and second sound in a (20,20) single-wall carbon nanotube

    NASA Astrophysics Data System (ADS)

    Lee, Sangyeop; Lindsay, Lucas

    2017-05-01

    Two hydrodynamic features of phonon transport, phonon drift and second sound, in a (20,20) single-wall carbon nanotube (SWCNT) are discussed using lattice dynamics calculations employing an optimized Tersoff potential for atomic interactions. We formally derive a formula for the contribution of the drift motion of phonons to the total heat flux at steady state. It is found that the drift motion of phonons carries more than 70% and 90% of the heat at 300 and 100 K, respectively, indicating that phonon flow can be reasonably approximated as hydrodynamic if the SWCNT is long enough to avoid ballistic phonon transport. The dispersion relation of second sound is derived from the Peierls-Boltzmann transport equation with Callaway's scattering model and quantifies the speed of second sound and its relaxation. The speed of second sound is around 4000 m/s in a (20,20) SWCNT, and second sound can propagate more than 10 µm in an isotopically pure (20,20) SWCNT for frequencies around 1 GHz at 100 K.

  20. Aeroacoustic analysis of the human phonation process based on a hybrid acoustic PIV approach

    NASA Astrophysics Data System (ADS)

    Lodermeyer, Alexander; Tautz, Matthias; Becker, Stefan; Döllinger, Michael; Birk, Veronika; Kniesburges, Stefan

    2018-01-01

    The detailed analysis of sound generation in human phonation is severely limited as the accessibility to the laryngeal flow region is highly restricted. Consequently, the physical basis of the underlying fluid-structure-acoustic interaction that describes the primary mechanism of sound production is not yet fully understood. Therefore, we propose the implementation of a hybrid acoustic PIV procedure to evaluate aeroacoustic sound generation during voice production within a synthetic larynx model. Focusing on the flow field downstream of synthetic, aerodynamically driven vocal folds, we calculated acoustic source terms based on the velocity fields obtained by time-resolved high-speed PIV applied to the mid-coronal plane. The radiation of these sources into the acoustic far field was numerically simulated and the resulting acoustic pressure was finally compared with experimental microphone measurements. We identified the tonal sound to be generated downstream in a small region close to the vocal folds. The simulation of the sound propagation underestimated the tonal components, whereas the broadband sound was well reproduced. Our results demonstrate the feasibility to locate aeroacoustic sound sources inside a synthetic larynx using a hybrid acoustic PIV approach. Although the technique employs a 2D-limited flow field, it accurately reproduces the basic characteristics of the aeroacoustic field in our larynx model. In future studies, not only the aeroacoustic mechanisms of normal phonation will be assessable, but also the sound generation of voice disorders can be investigated more profoundly.

  1. Selective attention to sound location or pitch studied with event-related brain potentials and magnetic fields.

    PubMed

    Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo

    2008-06-01

    Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.

  2. Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts

    ERIC Educational Resources Information Center

    Delalande, Francois; Cornara, Silvia

    2010-01-01

    One of the forms of first musical conduct is the exploration of sound sources. When young children produce sounds with any object, these sounds may surprise them and so they make the sounds again--not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…

  3. Monitoring the Ocean Using High Frequency Ambient Sound

    DTIC Science & Technology

    2008-10-01

    even identify specific groups within the resident killer whale type (Puget Sound Southern Resident pods J, K and L) because these groups have ... particular, the different populations of killer whales in the NE Pacific Ocean. This has been accomplished by detecting transient sounds with short ... high sea state (the sound of spray), general shipping - close and distant, clanking and whale calls and clicking. These sound sources form the basis

  4. Monitoring CO2 sources and sinks from space : the Orbiting Carbon Observatory (OCO) Mission

    NASA Technical Reports Server (NTRS)

    Crisp, David

    2006-01-01

    NASA's Orbiting Carbon Observatory (OCO) will make the first space-based measurements of atmospheric carbon dioxide (CO2) with the precision, resolution, and coverage needed to characterize the geographic distribution of CO2 sources and sinks and quantify their variability over the seasonal cycle. OCO is currently scheduled for launch in 2008. The observatory will carry a single instrument that incorporates three high-resolution grating spectrometers designed to measure the near-infrared absorption by CO2 and molecular oxygen (O2) in reflected sunlight. OCO will fly 12 minutes ahead of the EOS Aqua platform in the Earth Observing System (EOS) Afternoon Constellation (A-Train). The instrument will collect 12 to 24 soundings per second as the Observatory moves along its orbit track on the day side of the Earth. A small sampling footprint (<3 km2 at nadir) was adopted to reduce biases in each sounding associated with clouds and aerosols and spatial variations in surface topography. A comprehensive ground-based validation program will be used to assess random errors and biases in the XCO2 product on regional to continental scales. Measurements collected by OCO will be assimilated with other environmental measurements to retrieve surface sources and sinks of CO2. This information could play an important role in monitoring the integrity of large scale CO2 sequestration projects.

  5. Meteorological effects on long-range outdoor sound propagation

    NASA Technical Reports Server (NTRS)

    Klug, Helmut

    1990-01-01

    Measurements of sound propagation over distances up to 1000 m were carried out with an impulse sound source offering reproducible, short time signals. Temperature and wind speed at several heights were monitored simultaneously; the meteorological data are used to determine the sound speed gradients according to the Monin-Obukhov similarity theory. The sound speed profile is compared to a corresponding prediction, gained through the measured travel time difference between direct and ground reflected pulse (which depends on the sound speed gradient). Positive sound speed gradients cause bending of the sound rays towards the ground yielding enhanced sound pressure levels. The measured meteorological effects on sound propagation are discussed and illustrated by ray tracing methods.
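    The refraction effect described above can be pictured with an effective sound-speed profile, i.e., the thermodynamic sound speed plus the wind component along the propagation path. The sketch below uses a neutral logarithmic wind profile and a slight temperature inversion as placeholders; it omits the Monin-Obukhov stability corrections applied in the study and all values are illustrative:

    ```python
    import numpy as np

    KAPPA = 0.4  # von Karman constant

    def wind_log_profile(z, u_star=0.3, z0=0.05):
        """Neutral logarithmic wind profile (m/s); the Monin-Obukhov stability
        correction terms used in the study are omitted in this sketch."""
        return (u_star / KAPPA) * np.log(z / z0)

    def sound_speed(T_kelvin):
        """Adiabatic sound speed in air (m/s)."""
        return 20.05 * np.sqrt(T_kelvin)

    # Hypothetical heights and a slight temperature inversion (temperature rising with height).
    z = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    T = 283.0 + 0.05 * z

    # Effective sound speed for downwind propagation: thermodynamic speed plus wind speed.
    c_eff = sound_speed(T) + wind_log_profile(z)
    for zi, ci in zip(z, c_eff):
        print(f"z = {zi:5.1f} m   c_eff = {ci:6.1f} m/s")
    ```

    A profile that increases with height, as in this downwind example, bends rays back toward the ground and raises received levels, consistent with the measurements discussed above.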

  6. The Problems with "Noise Numbers" for Wind Farm Noise Assessment

    ERIC Educational Resources Information Center

    Thorne, Bob

    2011-01-01

    Human perception responds primarily to sound character rather than sound level. Wind farms are unique sound sources and exhibit special audible and inaudible characteristics that can be described as modulating sound or as a tonal complex. Wind farm compliance measures based on a specified noise number alone will fail to address problems with noise…

  7. Tackling Production Techniques: Professional Studio Sound at Amateur Prices: the Power of the Portable Four-Track Audio Recorder.

    ERIC Educational Resources Information Center

    Robinson, David E.

    1997-01-01

    One solution to poor quality sound in student video projects is a four-track audio cassette recorder. This article discusses the advantages of four-track over single-track recorders and compares two student productions, one using a single-track and the other a four-track recorder. (PEN)

  8. Psychometric Characteristics of Single-Word Tests of Children's Speech Sound Production

    ERIC Educational Resources Information Center

    Flipsen, Peter, Jr.; Ogiela, Diane A.

    2015-01-01

    Purpose: Our understanding of test construction has improved since the now-classic review by McCauley and Swisher (1984) . The current review article examines the psychometric characteristics of current single-word tests of speech sound production in an attempt to determine whether our tests have improved since then. It also provides a resource…

  9. A study of sound generation in subsonic rotors, volume 1

    NASA Technical Reports Server (NTRS)

    Chalupnik, J. D.; Clark, L. T.

    1975-01-01

    A model for the prediction of wake related sound generation by a single airfoil is presented. It is assumed that the net force fluctuation on an airfoil may be expressed in terms of the net momentum fluctuation in the near wake of the airfoil. The forcing function for sound generation depends on the spectra of the two point velocity correlations in the turbulent region near the airfoil trailing edge. The spectra of the two point velocity correlations were measured for the longitudinal and transverse components of turbulence in the wake of a 91.4 cm chord airfoil. A scaling procedure was developed using the turbulent boundary layer thickness. The model was then used to predict the radiated sound from a 5.1 cm chord airfoil. Agreement between the predicted and measured sound radiation spectra was good. The single airfoil results were extended to a rotor geometry, and various aerodynamic parameters were studied.

  10. Optimization of Sound Absorbers Number and Placement in an Enclosed Room by Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Lau, S. F.; Zainulabidin, M. H.; Yahya, M. N.; Zaman, I.; Azmir, N. A.; Madlan, M. A.; Ismon, M.; Kasron, M. Z.; Ismail, A. E.

    2017-10-01

    Giving a room proper acoustic treatment is both art and science. Acoustic design brings comfort to the built environment and reduces noise levels by using sound absorbers. A room needs acoustic treatment with absorbers in order to decrease the reverberant sound. However, absorbers are usually expensive to purchase and install, and there is no systematic way to determine the optimum number and placement of sound absorbers. A room that is overly treated wastes absorbers, while a room treated with insufficient absorbers remains improperly treated. This study aims to determine the amount of sound absorbers needed and the optimum placement locations in order to reduce the overall sound pressure level in a specified room using ANSYS APDL software. The total area of sound absorbers needed is found to be 11 m² using the Sabine equation, and different unit sets of absorbers, each with the same total area, are applied to the walls to investigate the best configuration. All three sets (a single absorber, 11 absorbers, and 44 absorbers) successfully treated the room by reducing the overall sound pressure level. The greatest reduction, 24.2 dB, was achieved with 44 absorbers evenly distributed around the walls; the least effective configuration was the single absorber, which reduced the overall sound pressure level by 18.4 dB.
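
    As a hedged sketch of the Sabine relation invoked above, the snippet below estimates the total absorption area needed for a target reverberation time. The room volume, target RT60, and absorption coefficient are hypothetical values, not those of the study.

      def sabine_rt60(volume_m3, absorption_m2):
          """Sabine reverberation time: RT60 = 0.161 * V / A (SI units)."""
          return 0.161 * volume_m3 / absorption_m2

      def required_absorption(volume_m3, target_rt60_s):
          """Total absorption area A [m^2 sabins] needed for a target RT60."""
          return 0.161 * volume_m3 / target_rt60_s

      # Hypothetical room: 5 m x 4 m x 3 m, target RT60 of 0.9 s.
      V = 5 * 4 * 3
      A_total = required_absorption(V, 0.9)
      print(f"Required total absorption: {A_total:.1f} m^2")

      # If a panel has absorption coefficient alpha = 0.9, the panel area needed is:
      alpha = 0.9
      print(f"Panel area at alpha = {alpha}: {A_total / alpha:.1f} m^2")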

  11. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs

    PubMed Central

    Ponnath, Abhilash; Farris, Hamilton E.

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3–10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene. PMID:25120437

  12. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    PubMed

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  13. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more channels of loudspeakers than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness in the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of the reproduction error, timbral quality, and spatial quality.
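
    The regularized multi-channel inversion mentioned above can be sketched as a Tikhonov-regularized least-squares solve. The matrix sizes, random data, and the parameter name beta below are illustrative assumptions rather than the paper's implementation.

      import numpy as np

      def regularized_inverse(G, p, beta=1e-2):
          """Solve q = argmin ||G q - p||^2 + beta ||q||^2 (Tikhonov regularization).

          G    : (M x N) transfer matrix from N equivalent sources to M microphones
          p    : (M,) measured pressure vector
          beta : regularization parameter controlling the ill-posedness
          """
          GhG = G.conj().T @ G
          return np.linalg.solve(GhG + beta * np.eye(G.shape[1]), G.conj().T @ p)

      # Toy example with random data standing in for measured pressures.
      rng = np.random.default_rng(0)
      G = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
      p = rng.standard_normal(8) + 1j * rng.standard_normal(8)
      q = regularized_inverse(G, p)
      print(q.shape)  # (16,) equivalent-source strengths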

  14. Methods for Reducing Singly Reflected Rays on the Wolter-I Focusing Figures of the FOXSI Rocket Experiment

    NASA Technical Reports Server (NTRS)

    Buitrago-Casas, Juan Camilo; Glesener, Lindsay; Christe, Steven; Elsner, Ronald; Ramsey, Brian; Courtade, Sasha; Ishikawa, Shin-nosuke; Narukage, Noriyuki; Vievering, Juliana; Subramania, Athiray

    2017-01-01

    In high energy solar astrophysics, imaging hard X-rays by direct focusing offers higher dynamic range and greater sensitivity compared to past techniques that used indirect imaging. The Focusing Optics X-ray Solar Imager (FOXSI) is a sounding rocket payload which uses seven sets of nested Wolter-I figured mirrors that, together with seven high-sensitivity semiconductor detectors, observe the Sun in hard X-rays by direct focusing. The FOXSI rocket has successfully flown twice and is funded to fly a third time in Summer 2018. The Wolter-I geometry consists of two consecutive mirrors, one paraboloid and one hyperboloid, that reflect photons at grazing angles. Correctly focused X-rays reflect twice, once per mirror segment. For extended sources, like the Sun, off-axis photons at certain incident angles can reflect on only one mirror and still reach the focal plane, generating a pattern of single-bounce photons that can limit the sensitivity of the observation of faint focused X-rays. Understanding and cutting down the singly reflected rays on the FOXSI optics will maximize the instrument's sensitivity to the faintest solar sources for future flights. We present an analysis of the FOXSI singly reflected rays based on ray-tracing simulations, as well as the effectiveness of different physical strategies to reduce them.

  15. Development and Testing of a High Level Axial Array Duct Sound Source for the NASA Flow Impedance Test Facility

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)

    2000-01-01

    In this report both a frequency domain method for creating high level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high level sound an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155dB were created experimentally over a wide frequency range (1000-4000Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre cut-on loading effect).

  16. Perceptual assessment of quality of urban soundscapes with combined noise sources and water sounds.

    PubMed

    Jeon, Jin Yong; Lee, Pyoung Jik; You, Jin; Kang, Jian

    2010-03-01

    In this study, urban soundscapes containing combined noise sources were evaluated through field surveys and laboratory experiments. The effect of water sounds on masking urban noises was then examined in order to enhance the soundscape perception. Field surveys in 16 urban spaces were conducted through soundwalking to evaluate the annoyance of combined noise sources. Synthesis curves were derived for the relationships between noise levels and the percentage of highly annoyed (%HA) and the percentage of annoyed (%A) for the combined noise sources. Qualitative analysis was also made using semantic scales for evaluating the quality of the soundscape, and it was shown that the perception of acoustic comfort and loudness was strongly related to the annoyance. A laboratory auditory experiment was then conducted in order to quantify the total annoyance caused by road traffic noise and four types of construction noise. It was shown that the annoyance ratings were related to the types of construction noise in combination with road traffic noise and the level of the road traffic noise. Finally, water sounds were determined to be the best sounds to use for enhancing the urban soundscape. The level of the water sounds should be similar to or not less than 3 dB below the level of the urban noises.

  17. Development of a Finite-Difference Time Domain (FDTD) Model for Propagation of Transient Sounds in Very Shallow Water.

    PubMed

    Sprague, Mark W; Luczkovich, Joseph J

    2016-01-01

    This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
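
    A minimal 1-D sketch of the staggered pressure/velocity update that underlies an acoustic FDTD scheme is given below. The published model is 2-D/3-D with spatially varying water and sediment properties, so the grid, material values, and source here are placeholders only.

      import numpy as np

      # 1-D staggered-grid FDTD for the linear acoustic equations (illustrative only).
      c, rho = 1500.0, 1000.0        # water sound speed [m/s] and density [kg/m^3]
      dx = 0.5                        # grid spacing [m]
      dt = 0.5 * dx / c               # time step satisfying the CFL condition
      nx, nt = 400, 800

      p = np.zeros(nx)                # pressure at integer grid points
      u = np.zeros(nx + 1)            # particle velocity at half grid points

      for n in range(nt):
          # update velocity from the pressure gradient
          u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
          # update pressure from the velocity divergence
          p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])
          # soft point source: a short Gaussian pulse injected at one grid point
          p[nx // 4] += np.exp(-((n * dt - 0.02) / 0.004) ** 2)

      print(f"peak pressure after {nt} steps: {p.max():.3e}")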

  18. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focusing at source locations estimated based on spherical wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
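
    A hedged sketch of the minimum variance distortionless response weighting with a spherical-wave (near-field) steering vector is given below. The array layout, focus point, diagonal loading, and the identity matrix standing in for the measured cross-spectral matrix are all assumptions made for illustration.

      import numpy as np

      def nearfield_steering(mic_pos, focus_pos, freq, c=343.0):
          """Spherical-wave array manifold vector focused at a near-field point."""
          r = np.linalg.norm(mic_pos - focus_pos, axis=1)
          k = 2 * np.pi * freq / c
          return np.exp(-1j * k * r) / r

      def mvdr_weights(R, d, loading=1e-3):
          """MVDR: w = R^-1 d / (d^H R^-1 d), with diagonal loading for robustness."""
          Rl = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
          Rinv_d = np.linalg.solve(Rl, d)
          return Rinv_d / (d.conj() @ Rinv_d)

      # Hypothetical 8-microphone line array, 0.05 m spacing, focused 0.3 m in front.
      mics = np.column_stack([np.arange(8) * 0.05, np.zeros(8), np.zeros(8)])
      focus = np.array([0.2, 0.0, 0.3])
      d = nearfield_steering(mics, focus, freq=2000.0)
      R = np.eye(8)               # stands in for the measured cross-spectral matrix
      w = mvdr_weights(R, d)
      print(abs(w.conj() @ d))    # distortionless response toward the focus point: ~1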

  19. L-type calcium channels refine the neural population code of sound level

    PubMed Central

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  20. Choice and Effects of Instrument Sound in Aural Training

    ERIC Educational Resources Information Center

    Loh, Christian Sebastian

    2007-01-01

    A musical note produced through the vibration of a single string is psychoacoustically simpler/purer than that produced via multiple-strings vibration. Does the psychoacoustics of instrument sound have any effect on learning outcomes in music instruction? This study investigated the effect of two psychoacoustically distinct instrument sounds on…

  1. The forced sound transmission of finite single leaf walls using a variational technique.

    PubMed

    Brunskog, Jonas

    2012-09-01

    The single wall is the simplest element of concern in building acoustics, but there still remain some open questions regarding the sound insulation of this simple case. The two main reasons for this are the effects on the excitation and sound radiation of the wall when it has a finite size, and the fact that the wave field in the wall is consisting of two types of waves, namely forced waves due to the exciting acoustic field, and free bending waves due to reflections in the boundary. The aim of the present paper is to derive simple analytical formulas for the forced part of the airborne sound insulation of a single homogeneous wall of finite size, using a variational technique based on the integral-differential equation of the fluid loaded wall. The so derived formulas are valid in the entire audible frequency range. The results are compared with full numerical calculations, measurements and alternative theory, with reasonable agreement.

  2. Synchronized vortex shedding and sound radiation from two side-by-side rectangular cylinders of different cross-sectional aspect ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Octavianty, Ressa, E-mail: ressa-octavianty@ed.tmu.ac.jp; Asai, Masahito, E-mail: masai@tmu.ac.jp

    Synchronized vortex shedding from two side-by-side cylinders and the associated sound radiation were examined experimentally at Reynolds numbers of the order of 10⁴ in low-Mach-number flows. In addition to a pair of square cylinders, a pair of rectangular cylinders, one with a square cross section (d × d) and the other with a rectangular cross section (d × c) having a cross-sectional aspect ratio (c/d) of 1.2–1.5, was considered. The center-to-center distance between the two cylinders L/d was 3.6, 4.5, and 6.0; these settings were within the non-biased flow regime for side-by-side square cylinders. In case of a square cylinder pair, anti-phase synchronized vortex shedding occurring for L/d = 3.6 and 4.5 generated a quadrupole-like sound source which radiated in-phase, planar-symmetric sound in the far field. Synchronized vortex shedding from the two rectangular cylinders with different c/d also occurred with almost the same frequency as the characteristic frequency of the square-cylinder wake in the case of the small center-to-center distance, L/d = 3.6, for all the cylinder pairs examined. The synchronized sound field was anti-phase and asymmetric in amplitude, unlike the case of a square cylinder pair. For larger spacing L/d = 4.5, synchronized vortex shedding and anti-phase sound still occurred, but only for close cross-sectional aspect ratios (c/d = 1.0 and 1.2), and highly modulated sound was radiated with two different frequencies due to non-synchronized vortex shedding from the two cylinders for larger differences in c/d. It was also found that when synchronized vortex shedding occurred, near-wake velocity fluctuations exhibited high spanwise-coherency, with a very sharp spectral peak compared with the single-cylinder case.

  3. Investigation of hydraulic transmission noise sources

    NASA Astrophysics Data System (ADS)

    Klop, Richard J.

    Advanced hydrostatic transmissions and hydraulic hybrids show potential in new market segments such as commercial vehicles and passenger cars. Such new applications regard low noise generation as a high priority, thus, demanding new quiet hydrostatic transmission designs. In this thesis, the aim is to investigate noise sources of hydrostatic transmissions to discover strategies for designing compact and quiet solutions. A model has been developed to capture the interaction of a pump and motor working in a hydrostatic transmission and to predict overall noise sources. This model allows a designer to compare noise sources for various configurations and to design compact and inherently quiet solutions. The model describes dynamics of the system by coupling lumped parameter pump and motor models with a one-dimensional unsteady compressible transmission line model. The model has been verified with dynamic pressure measurements in the line over a wide operating range for several system structures. Simulation studies were performed illustrating sensitivities of several design variables and the potential of the model to design transmissions with minimal noise sources. A semi-anechoic chamber has been designed and constructed suitable for sound intensity measurements that can be used to derive sound power. Measurements proved the potential to reduce audible noise by predicting and reducing both noise sources. Sound power measurements were conducted on a series hybrid transmission test bench to validate the model and compare predicted noise sources with sound power.

  4. CAVITATION SOUNDS DURING CERVICOTHORACIC SPINAL MANIPULATION.

    PubMed

    Dunning, James; Mourad, Firas; Zingoni, Andrea; Iorio, Raffaele; Perreault, Thomas; Zacharko, Noah; de Las Peñas, César Fernández; Butts, Raymond; Cleland, Joshua A

    2017-08-01

    No study has previously investigated the side, duration or number of audible cavitation sounds during high-velocity low-amplitude (HVLA) thrust manipulation to the cervicothoracic spine. The primary purpose was to determine which side of the spine cavitates during cervicothoracic junction (CTJ) HVLA thrust manipulation. Secondary aims were to calculate the average number of cavitations, the duration of cervicothoracic thrust manipulation, and the duration of a single cavitation. Quasi-experimental study. Thirty-two patients with upper trapezius myalgia received two cervicothoracic HVLA thrust manipulations targeting the right and left T1-2 articulation, respectively. Two high sampling rate accelerometers were secured bilaterally 25 mm lateral to midline of the T1-2 interspace. For each manipulation, two audio signals were extracted using Short-Time Fourier Transformation (STFT) and singularly processed via spectrogram calculation in order to evaluate the frequency content and number of instantaneous energy bursts of both signals over time for each side of the CTJ. Unilateral cavitation sounds were detected in 53 (91.4%) of 58 cervicothoracic HVLA thrust manipulations and bilateral cavitation sounds were detected in just five (8.6%) of the 58 thrust manipulations; that is, cavitation was significantly (p<0.001) more likely to occur unilaterally than bilaterally. In addition, cavitation was significantly (p<0.0001) more likely to occur on the side contralateral to the clinician's short-lever applicator. The mean number of audible cavitations per manipulation was 4.35 (95% CI 2.88, 5.76). The mean duration of a single manipulation was 60.77 ms (95% CI 28.25, 97.42) and the mean duration of a single audible cavitation was 4.13 ms (95% CI 0.82, 7.46). In addition to single-peak and multi-peak energy bursts, spectrogram analysis also demonstrated high frequency sounds, low frequency sounds, and sounds of multiple frequencies for all 58 manipulations. Cavitation was significantly more likely to occur unilaterally, and on the side contralateral to the short-lever applicator contact, during cervicothoracic HVLA thrust manipulation. Clinicians should expect multiple cavitation sounds when performing HVLA thrust manipulation to the CTJ. Due to the presence of multi-peak energy bursts and sounds of multiple frequencies, the cavitation hypothesis (i.e. intra-articular gas bubble collapse) alone appears unable to explain all of the audible sounds during HVLA thrust manipulation, and the possibility remains that several phenomena may be occurring simultaneously. Level of evidence: 2b.
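
    As an illustrative sketch of detecting instantaneous energy bursts in an accelerometer signal with a short-time Fourier transform, the snippet below builds a synthetic signal and counts threshold crossings of the framewise spectrogram energy. The sampling rate, burst parameters, and threshold are invented and do not reproduce the study's processing.

      import numpy as np
      from scipy.signal import spectrogram

      fs = 50_000                                 # assumed accelerometer sampling rate [Hz]
      t = np.arange(0, 0.2, 1 / fs)
      signal = 0.01 * np.random.randn(t.size)     # background noise
      for onset in (0.05, 0.08, 0.11):            # three synthetic 4-ms "cavitation" bursts
          idx = (t > onset) & (t < onset + 0.004)
          signal[idx] += np.sin(2 * np.pi * 2000 * t[idx])

      f, frames, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
      energy = Sxx.sum(axis=0)                    # broadband energy per time frame
      thresh = 5 * np.median(energy)              # crude burst threshold (assumed)
      bursts = (energy[1:] > thresh) & (energy[:-1] <= thresh)   # rising edges
      print("energy bursts detected:", int(bursts.sum()))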

  5. On the Possible Detection of Lightning Storms by Elephants

    PubMed Central

    Kelley, Michael C.; Garstang, Michael

    2013-01-01

    Simple Summary: We use data similar to that taken by the International Monitoring System for the detection of nuclear explosions to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest to management and conservation. Abstract: Theoretical calculations suggest that sounds produced by thunderstorms and detected by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions at distances ≥100 km are at sound pressure levels equal to or greater than 6 × 10⁻³ Pa. Such sound pressure levels are well within the range of elephant hearing. Frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but could also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406
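
    As a worked conversion of the quoted pressure into a level, the sketch below uses the standard airborne reference pressure of 20 µPa; that reference choice is the usual convention and is not stated in the abstract.

      import math

      def spl_db(p_pa, p_ref=20e-6):
          """Sound pressure level: SPL = 20 * log10(p / p_ref), p_ref = 20 uPa in air."""
          return 20 * math.log10(p_pa / p_ref)

      # The abstract's 6e-3 Pa thunderstorm infrasound pressure corresponds to roughly:
      print(f"{spl_db(6e-3):.1f} dB SPL")   # about 49.5 dB re 20 uPa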

  6. Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor.

    PubMed

    Dillier, Norbert; Lai, Wai Kong

    2015-06-11

    The Nucleus(®) 5 System Sound Processor (CP810, Cochlear™, Macquarie University, NSW, Australia) contains two omnidirectional microphones. They can be configured as a fixed directional microphone combination (called Zoom) or as an adaptive beamformer (called Beam), which adjusts the directivity continuously to maximally reduce the interfering noise. Initial evaluation studies with the CP810 had compared performance and usability of the new processor in comparison with the Freedom™ Sound Processor (Cochlear™) for speech in quiet and noise for a subset of the processing options. This study compares the two processing options suggested to be used in noisy environments, Zoom and Beam, for various sound field conditions using a standardized speech in noise matrix test (Oldenburg sentences test). Nine German-speaking subjects who previously had been using the Freedom speech processor and subsequently were upgraded to the CP810 device participated in this series of additional evaluation tests. The speech reception threshold (SRT for 50% speech intelligibility in noise) was determined using sentences presented via loudspeaker at 65 dB SPL in front of the listener and noise presented either via the same loudspeaker (S0N0) or at 90 degrees at either the ear with the sound processor (S0NCI+) or the opposite unaided ear (S0NCI-). The fourth noise condition consisted of three uncorrelated noise sources placed at 90, 180 and 270 degrees. The noise level was adjusted through an adaptive procedure to yield a signal to noise ratio where 50% of the words in the sentences were correctly understood. In spatially separated speech and noise conditions both Zoom and Beam could improve the SRT significantly. For single noise sources, either ipsilateral or contralateral to the cochlear implant sound processor, average improvements with Beam of 12.9 and 7.9 dB in SRT were found. The average SRT of -8 dB for Beam in the diffuse noise condition (uncorrelated noise from both sides and back) is truly remarkable and comparable to the performance of normal hearing listeners in the same test environment. The static directivity (Zoom) option in the diffuse noise condition still provides a significant benefit of 5.9 dB in comparison with the standard omnidirectional microphone setting. These results indicate that CI recipients may improve their speech recognition in noisy environments significantly using these directional microphone-processing options.

  7. Data Acquisition and Analyses of Magnetotelluric Sounding in Lujiang-Zongyang Ore Concentrated Area

    NASA Astrophysics Data System (ADS)

    Tang, J.; Xiao, X.; Zhou, C.; Lu, Q.

    2010-12-01

    Performing MT data acquisition and processing in the Lujiang-Zongyang ore concentrated area is challenging because severe and complicated noise is mixed with the useful data. A dense population, well-developed water systems, transport networks, communication and power grids, and several mines under active exploitation are the main noise sources. However, conducting MT sounding in this area is not only helpful for studying the geological structure and tectonics of the zone, but also provides valuable experience in data analysis and processing under heavy interference in real field work. We completed this work along 5 survey lines comprising 500 sounding stations in total. To verify the consistency of the 6 V5-2000 data acquisition systems employed in our study, a consistency experiment was conducted in a test area with weak interference. Curves of apparent resistivity and phase obtained from these 6 instruments are plotted in Fig. 1, which shows acceptable consistency except at a few noisy frequencies. To determine the optimal recording duration for data acquisition in this noise-heavy survey area, a comparison experiment was carried out at a single sounding station to compare data quality for different durations; we found that 20 hours or more were required for each acquisition. The evaluation was based on the degree of coherence and the signal-to-noise ratio. By analyzing the MT data in both the time and frequency domains, the noise was categorized into several patterns according to the characteristics of the various noise sources, and corresponding filters were adopted. After flying-point removal, cubic spline smoothing, and spatial filtering of all the sounding curves, apparent resistivity profiles were obtained. Further studies, including 2D and 3D inversion, are in progress. Fig. 1: Consistency experiment of the instruments; (a) and (b) are apparent resistivity curves in the yx and xy directions, (c) and (d) are phase curves in the yx and xy directions, and J1-J6 mark the 6 instruments.

  8. Prediction and reduction of aircraft noise in outdoor environments

    NASA Astrophysics Data System (ADS)

    Tong, Bao N.

    This dissertation investigates the noise due to an en-route aircraft cruising at high altitudes. It offers an improved understanding of the combined effects of atmospheric propagation, ground reflection, and source motion on the impact of en-route aircraft noise. A numerical model has been developed to compute pressure time-histories due to a uniformly moving source above a flat ground surface in the presence of a horizontally stratified atmosphere. For a moving source at high elevations, contributions from a direct and specularly reflected wave are sufficient in predicting the sound field close to the ground. In the absence of wind effects, the predicted sound field from a single overhead flight trajectory can be used to interpolate pressure time histories at all other receiver locations via a simplified ray model for the incoherent sound field. This approach provides an efficient method for generating pressure time histories in a three-dimensional space for noise impact studies. A variety of different noise propagation methods are adapted to a uniformly moving source to evaluate the accuracy and efficiency of their predictions. The techniques include analytical methods, the Fast Field Program (FFP), and asymptotic analysis methods (e.g., ray tracing and more advanced formulations). Source motion effects are introduced via either a retarded time analysis or a Lorentz transform approach, depending on the complexity of the problem. The noise spectrum from a moving source with a single emission frequency has broadband characteristics. This is a consequence of the Doppler shift, which continuously modifies the perceived frequency of the source as it moves relative to a stationary observer on the ground. Thus, the instantaneous wavefronts must be considered in both the frequency-dependent ground impedance model and the atmospheric absorption model. It can be shown that the Doppler factor is invariant along each ray path. This gives rise to a path-dependent atmospheric absorption mechanism due to the source's motion. To help mitigate the noise that propagates to the ground, multi-layered acoustic treatments can be applied to provide good performance over a wide range of frequencies. An accurate representation of material properties for each of the constituent layers is needed in the design of such treatments. The parameter of interest is the specific acoustic impedance, which can be obtained via inversion of acoustic transfer function measurements. However, several different impedance values can correspond to the same sound field predictions. The boundary loss factor F (associated with spherical wave reflection) is the source of this ambiguity. A method for identifying the family of solutions and selecting the physically meaningful branch is proposed to resolve this non-uniqueness issue. Accurate deduction of the acoustic impedance depends on precise measurements of the acoustic transfer function. However, measurement uncertainties exist in both the magnitude and the phase of the acoustic transfer function. The ASA/ANSI S1.18 standard impedance deduction method uses phase information, which can be unreliable in many outdoor environments. An improved technique which relies only on magnitude information is developed in this dissertation. A selection of optimal geometries becomes necessary to reduce the sensitivity of the deduced impedance to small variations in the measured data. A graphical approach is provided which offers greater insight into the optimization problem.
A downhill simplex algorithm has been implemented to automate the impedance deduction procedure. Physical constraints are applied to limit the search region and to eliminate rogue solutions. Several case studies consisting of both indoor and outdoor acoustical measurements are presented to validate the proposed technique. The current analysis is limited to locally reacting materials where the acoustic impedance does not depend on the incidence angle of the reflected wave.
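
    A hedged sketch of how a downhill simplex (Nelder-Mead) search can automate an impedance deduction from magnitude-only transfer-function data is given below. The toy magnitude model, the measured values, and the positivity penalty are placeholders, not the dissertation's spherical-wave model.

      import numpy as np
      from scipy.optimize import minimize

      # Placeholder |transfer function| data at a few frequencies (not measured values).
      freqs = np.array([250.0, 500.0, 1000.0, 2000.0])
      H_meas = np.array([1.8, 1.5, 1.2, 1.0])

      def H_model(Z, f):
          """Toy magnitude model standing in for the spherical-wave reflection model."""
          R = (Z - 1.0) / (Z + 1.0)              # plane-wave reflection coefficient
          return np.abs(1.0 + R * np.exp(-f / 2000.0))

      def cost(x):
          if x[0] <= 0.0:                        # physical constraint: positive resistance
              return 1.0e6
          Z = complex(x[0], x[1])                # real and imaginary parts of impedance
          return float(np.sum((H_model(Z, freqs) - H_meas) ** 2))

      result = minimize(cost, x0=[2.0, -1.0], method="Nelder-Mead")
      print("deduced normalized impedance:", complex(result.x[0], result.x[1]))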

  9. Minke whale song, spacing, and acoustic communication on the Great Barrier Reef, Australia

    NASA Astrophysics Data System (ADS)

    Gedamke, Jason

    An inquisitive population of minke whale (Balaenoptera acutorostrata ) that concentrates on the Great Barrier Reef during its suspected breeding season offered a unique opportunity to conduct a multi-faceted study of a little-known Balaenopteran species' acoustic behavior. Chapter one investigates whether the minke whale is the source of an unusual, complex, and stereotyped sound recorded, the "star-wars" vocalization. A hydrophone array was towed from a vessel to record sounds from circling whales for subsequent localization of sound sources. These acoustic locations were matched with shipboard and in-water observations of the minke whale, demonstrating the minke whale was the source of this unusual sound. Spectral and temporal features of this sound and the source levels at which it is produced are described. The repetitive "star-wars" vocalization appears similar to the songs of other whale species and has characteristics consistent with reproductive advertisement displays. Chapter two investigates whether song (i.e. the "star-wars" vocalization) has a spacing function through passive monitoring of singer spatial patterns with a moored five-sonobuoy array. Active song playback experiments to singers were also conducted to further test song function. This study demonstrated that singers naturally maintain spatial separations between them through a nearest-neighbor analysis and animated tracks of singer movements. In response to active song playbacks, singers generally moved away and repeated song more quickly suggesting that song repetition interval may help regulate spatial interaction and singer separation. These results further indicate the Great Barrier Reef may be an important reproductive habitat for this species. Chapter three investigates whether song is part of a potentially graded repertoire of acoustic signals. Utilizing both vessel-based recordings and remote recordings from the sonobuoy array, temporal and spectral features, source levels, and associated contextual data of recorded sounds were analyzed. Two categories of sound are described here: (1) patterned song, which was regularly repeated in one of three patterns: slow, fast, and rapid-clustered repetition, and (2) non-patterned "social" sounds recorded from gregarious assemblages of whales. These discrete acoustic signals may comprise a graded system of communication (Slow/fast song → Rapid-clustered song → Social sounds) that is related to the spacing between whales.

  10. Investigation of the sound generation mechanisms for in-duct orifice plates.

    PubMed

    Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning

    2017-08-01

    Sound generation due to an orifice plate in a hard-walled flow duct which is commonly used in air distribution systems (ADS) and flow meters is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate from which the surface pressure distribution and correlation length is deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plates are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.
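
    As a small sketch of how such a velocity scaling exponent can be extracted, the snippet below fits a power law in log-log coordinates. The flow speeds and surface pressure values are made up to illustrate the fit, not taken from the measurements.

      import numpy as np

      # Fit a power-law exponent n in p_rms ~ U^n from (velocity, rms surface pressure) pairs.
      U = np.array([10.0, 15.0, 20.0, 30.0])       # duct flow speeds [m/s] (made up)
      p_rms = np.array([0.8, 3.9, 12.1, 58.0])     # surface pressure rms [Pa] (made up)
      n, log_k = np.polyfit(np.log(U), np.log(p_rms), 1)
      print(f"fitted exponent n = {n:.2f}")        # compare with the reported U^3.9 law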

  11. The Confirmation of the Inverse Square Law Using Diffraction Gratings

    ERIC Educational Resources Information Center

    Papacosta, Pangratios; Linscheid, Nathan

    2014-01-01

    Understanding the inverse square law, how for example the intensity of light or sound varies with distance, presents conceptual and mathematical challenges. Students know intuitively that intensity decreases with distance. A light source appears dimmer and sound gets fainter as the distance from the source increases. The difficulty is in…
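
    A short worked example of the inverse square law for a point sound source follows; the source power is hypothetical, and the familiar consequence is that doubling the distance drops the intensity level by about 6 dB.

      import math

      def intensity(power_w, r_m):
          """Free-field intensity of a point source: I = P / (4 * pi * r^2)."""
          return power_w / (4 * math.pi * r_m ** 2)

      def intensity_level_db(I, I_ref=1e-12):
          """Intensity level in dB re 1 pW/m^2."""
          return 10 * math.log10(I / I_ref)

      # Doubling the distance quarters the intensity, i.e. a 6 dB drop per doubling.
      P = 0.01  # hypothetical 10 mW acoustic source
      for r in (1.0, 2.0, 4.0):
          I = intensity(P, r)
          print(f"r = {r} m  I = {I:.2e} W/m^2  L = {intensity_level_db(I):.1f} dB")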

  12. The sound strength parameter G and its importance in evaluating and planning the acoustics of halls for music.

    PubMed

    Beranek, Leo

    2011-05-01

    The parameter, "Strength of Sound G" is closely related to loudness. Its magnitude is dependent, inversely, on the total sound absorption in a room. By comparison, the reverberation time (RT) is both inversely related to the total sound absorption in a hall and directly related to its cubic volume. Hence, G and RT in combination are vital in planning the acoustics of a concert hall. A newly proposed "Bass Index" is related to the loudness of the bass sound and equals the value of G at 125 Hz in decibels minus its value at mid-frequencies. Listener envelopment (LEV) is shown for most halls to be directly related to the mid-frequency value of G. The broadening of sound, i.e., apparent source width (ASW) is given by degree of source broadening (DSB) which is determined from the combined effect of early lateral reflections as measured by binaural quality index (BQI) and strength G. The optimum values and limits of these parameters are discussed.

  13. The role of long-term familiarity and attentional maintenance in short-term memory for timbre.

    PubMed

    Siedenburg, Kai; McAdams, Stephen

    2017-04-01

    We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched with one of three previously presented sounds (item recognition). In Exp. 1, musicians better recognised familiar acoustic compared to unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance with concurrent articulatory suppression, visual interference, and with a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of musical training of participants or type of sounds. Our results suggest that familiarity with sound source categories and attention play important roles in short-term memory for timbre, which rules out accounts solely based on sensory persistence.

  14. Octopus Cells in the Posteroventral Cochlear Nucleus Provide the Main Excitatory Input to the Superior Paraolivary Nucleus

    PubMed Central

    Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.

    2017-01-01

    Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283

  15. Sub-Microsecond Temperature Measurement in Liquid Water Using Laser Induced Thermal Acoustics

    NASA Technical Reports Server (NTRS)

    Alderfer, David W.; Herring, G. C.; Danehy, Paul M.; Mizukaki, Toshiharu; Takayama, Kazuyoshi

    2005-01-01

    Using laser-induced thermal acoustics, we demonstrate non-intrusive and remote sound speed and temperature measurements over the range 10-45 °C in liquid water. The averaged accuracy of the sound speed and temperature measurements (10 s) is 0.64 m/s and 0.45 °C, respectively. Single-shot precisions, based on one standard deviation of 100 or more samples, range from 1 m/s to 16.5 m/s for sound speed and from 0.3 °C to 9.5 °C for temperature. The time resolution of each single-shot measurement was 300 ns.
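
    The measurement rests on mapping a measured sound speed back to a water temperature; the sketch below inverts an approximate handbook c(T) table by interpolation. The tabulated values are rounded literature figures, not the calibration used in the study.

      import numpy as np

      # Approximate handbook values for the speed of sound in pure water (not from the paper).
      T_ref = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])               # temperature [C]
      c_ref = np.array([1403.0, 1447.0, 1482.0, 1509.0, 1529.0, 1543.0])  # sound speed [m/s]

      def temperature_from_sound_speed(c_measured):
          """Invert the c(T) relation by interpolation (valid on the monotonic 0-50 C branch)."""
          return np.interp(c_measured, c_ref, T_ref)

      print(f"{temperature_from_sound_speed(1497.0):.1f} C")   # roughly 25 C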

  16. A Green Soundscape Index (GSI): The potential of assessing the perceived balance between natural sound and traffic noise.

    PubMed

    Kogan, Pablo; Arenas, Jorge P; Bermejo, Fernando; Hinalaf, María; Turra, Bruno

    2018-06-13

    Urban soundscapes are dynamic and complex multivariable environmental systems. Soundscapes can be organized into three main entities containing the multiple variables: Experienced Environment (EE), Acoustic Environment (AE), and Extra-Acoustic Environment (XE). This work applies a multidimensional and synchronic data-collecting methodology at eight urban environments in the city of Córdoba, Argentina. The EE was assessed by means of surveys, the AE by acoustic measurements and audio recordings, and the XE by photos, video, and complementary sources. In total, 39 measurement locations were considered, where data corresponding to 61 AE and 203 EE were collected. Multivariate analysis and GIS techniques were used for data processing. The types of sound sources perceived, and their extents make up part of the collected variables that belong to the EE, i.e. traffic, people, natural sounds, and others. Sources explaining most of the variance were traffic noise and natural sounds. Thus, a Green Soundscape Index (GSI) is defined here as the ratio of the perceived extents of natural sounds to traffic noise. Collected data were divided into three ranges according to GSI value: 1) perceptual predominance of traffic noise, 2) balanced perception, and 3) perceptual predominance of natural sounds. For each group, three additional variables from the EE and three from the AE were applied, which reported significant differences, especially between ranges 1 and 2 with 3. These results confirm the key role of perceiving natural sounds in a town environment and also support the proposal of a GSI as a valuable indicator to classify urban soundscapes. In addition, the collected GSI-related data significantly helps to assess the overall soundscape. It is noted that this proposed simple perceptual index not only allows one to assess and classify urban soundscapes but also contributes greatly toward a technique for separating environmental sound sources. Copyright © 2018 Elsevier B.V. All rights reserved.
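
    A minimal sketch of the index as defined above, the ratio of the perceived extent of natural sounds to that of traffic noise, with a three-range classification. The numeric range boundaries below are assumptions for illustration, since the abstract gives no cut-off values.

      def green_soundscape_index(natural_extent, traffic_extent):
          """GSI = perceived extent of natural sounds / perceived extent of traffic noise."""
          return natural_extent / traffic_extent

      def classify_gsi(gsi, low=0.8, high=1.25):
          """Three ranges from the abstract; the numeric boundaries here are assumptions."""
          if gsi < low:
              return "1: traffic noise predominates"
          if gsi <= high:
              return "2: balanced perception"
          return "3: natural sounds predominate"

      # Example: natural sounds rated 3.5 and traffic 2.0 on the same perceptual scale.
      gsi = green_soundscape_index(3.5, 2.0)
      print(gsi, classify_gsi(gsi))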

  17. Sound produced by an oscillating arc in a high-pressure gas

    NASA Astrophysics Data System (ADS)

    Popov, Fedor K.; Shneider, Mikhail N.

    2017-08-01

    We suggest a simple theory to describe the sound generated by small periodic perturbations of a cylindrical arc in a dense gas. Theoretical analysis was done within the framework of the non-self-consistent channel arc model and supplemented with time-dependent gas dynamic equations. It is shown that an arc with power amplitude oscillations on the order of several percent is a source of sound whose intensity is comparable with external ultrasound sources used in experiments to increase the yield of nanoparticles in the high pressure arc systems for nanoparticle synthesis.

  18. Focusing and directional beaming effects of airborne sound through a planar lens with zigzag slits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kun; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang

    2015-01-14

    Based on the Huygens-Fresnel principle, we design a planar lens to efficiently realize the interconversion between the point-like sound source and Gaussian beam in ambient air. The lens is constructed by a planar plate perforated elaborately with a nonuniform array of zigzag slits, where the slit exits act as subwavelength-sized secondary sources carrying desired sound responses. The experiments operated at audible regime agree well with the theoretical predictions. This compact device could be useful in daily life applications, such as for medical and detection purposes.

  19. Some Debye temperatures from single-crystal elastic constant data

    USGS Publications Warehouse

    Robie, R.A.; Edwards, J.L.

    1966-01-01

    The mean velocity of sound has been calculated for 14 crystalline solids by using the best recent values of their single-crystal elastic stiffness constants. These mean sound velocities have been used to obtain the elastic Debye temperatures ΘDe for these materials. Models of the three wave velocity surfaces for calcite are illustrated. © 1966 The American Institute of Physics.
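
    As a hedged sketch of the route from sound velocities to an elastic Debye temperature, the snippet below uses the standard Debye mean-velocity average and the relation ΘD = (ħ/kB)·vm·(6π²n)^(1/3). The input velocities and atomic number density are illustrative, and the paper's directional averaging over single-crystal constants is more involved than this isotropic two-velocity form.

      import math

      hbar = 1.054571817e-34   # reduced Planck constant [J*s]
      k_B = 1.380649e-23       # Boltzmann constant [J/K]

      def mean_sound_velocity(v_long, v_trans):
          """Debye mean velocity for an isotropic solid: 3/v_m^3 = 1/v_l^3 + 2/v_t^3."""
          return (3.0 / (1.0 / v_long**3 + 2.0 / v_trans**3)) ** (1.0 / 3.0)

      def debye_temperature(v_mean, n_density):
          """Theta_D = (hbar / k_B) * v_mean * (6 * pi^2 * n)^(1/3), n = atoms per m^3."""
          return hbar / k_B * v_mean * (6.0 * math.pi**2 * n_density) ** (1.0 / 3.0)

      # Illustrative numbers only (not from the paper): v_l = 6000 m/s, v_t = 3200 m/s, n = 5e28 m^-3.
      v_m = mean_sound_velocity(6000.0, 3200.0)
      print(f"v_m = {v_m:.0f} m/s,  Theta_D ~ {debye_temperature(v_m, 5e28):.0f} K")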

  20. Prospective cohort study on noise levels in a pediatric cardiac intensive care unit.

    PubMed

    Garcia Guerra, Gonzalo; Joffe, Ari R; Sheppard, Cathy; Pugh, Jodie; Moez, Elham Khodayari; Dinu, Irina A; Jou, Hsing; Hartling, Lisa; Vohra, Sunita

    2018-04-01

    To describe noise levels in a pediatric cardiac intensive care unit, and to determine the relationship between sound levels and patient sedation requirements. Prospective observational study at a pediatric cardiac intensive care unit (PCICU). Sound levels were measured continuously in slow A weighted decibels dB(A) with a sound level meter SoundEarPro® during a 4-week period. Sedation requirement was assessed using the number of intermittent (PRNs) doses given per hour. Analysis was conducted with autoregressive moving average models and the Granger test for causality. 39 children were included in the study. The average (SD) sound level in the open area was 59.4 (2.5) dB(A) with a statistically significant but clinically unimportant difference between day/night hours (60.1 vs. 58.6; p-value < 0.001). There was no significant difference between sound levels in the open area/single room (59.4 vs. 60.8, p-value = 0.108). Peak noise levels were > 90 dB. There was a significant association between average (p-value = 0.030) and peak sound levels (p-value = 0.006), and number of sedation PRNs. Sound levels were above the recommended values with no differences between day/night or open area/single room. High sound levels were significantly associated with sedation requirements. Copyright © 2017 Elsevier Inc. All rights reserved.
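
    As an illustrative sketch of the kind of Granger-causality check described, the snippet below applies the statsmodels test to synthetic hourly sound levels and PRN counts. The data generation and lag choice are assumptions, not the study's analysis.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import grangercausalitytests

      # Synthetic hourly series: PRN counts loosely follow the previous hour's sound level.
      rng = np.random.default_rng(1)
      hours = 24 * 28
      leq = 59.4 + 2.5 * rng.standard_normal(hours)          # hourly average level, dB(A)
      prn = np.clip(np.round(0.2 * np.roll(leq - 55.0, 1) + rng.poisson(1.0, hours)), 0, None)

      data = pd.DataFrame({"prn": prn, "leq": leq})
      results = grangercausalitytests(data[["prn", "leq"]], maxlag=2)
      # 'results' is a dict keyed by lag; each entry holds F- and chi-squared test statistics
      # for the null hypothesis that 'leq' (second column) does not Granger-cause 'prn'.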

  1. A Recording-Based Method for Auralization of Rotorcraft Flyover Noise

    NASA Technical Reports Server (NTRS)

    Pera, Nicholas M.; Rizzi, Stephen A.; Krishnamurthy, Siddhartha; Fuller, Christopher R.; Christian, Andrew

    2018-01-01

    Rotorcraft noise is an active field of study as the sound produced by these vehicles is often found to be annoying. A means to auralize rotorcraft flyover noise is sought to help understand the factors leading to annoyance. Previous work by the authors focused on auralization of rotorcraft fly-in noise, in which a simplification was made that enabled the source noise synthesis to be based on a single emission angle. Here, the goal is to auralize a complete flyover event, so the source noise synthesis must be capable of traversing a range of emission angles. The synthesis uses a source noise definition process that yields periodic and aperiodic (modulation) components at a set of discrete emission angles. In this work, only the periodic components are used for the source noise synthesis for the flyover; the inclusion of modulation components is the subject of ongoing research. Propagation of the synthesized source noise to a ground observer is performed using the NASA Auralization Framework. The method is demonstrated using ground recordings from a flight test of the AS350 helicopter for the source noise definition.

  2. High-frequency monopole sound source for anechoic chamber qualification

    NASA Astrophysics Data System (ADS)

    Saussus, Patrick; Cunefare, Kenneth A.

    2003-04-01

    Anechoic chamber qualification procedures require the use of an omnidirectional monopole sound source. Required characteristics for these monopole sources are explicitly listed in ISO 3745. Building a high-frequency monopole source that meets these characteristics has proved difficult due to the size limitations imposed by small wavelengths at high frequency. A prototype design developed for use in hemianechoic chambers employs telescoping tubes, which act as an inverse horn. This same design can be used in anechoic chambers, with minor adaptations. A series of gradually decreasing brass telescoping tubes is attached to the throat of a well-insulated high-frequency compression driver. Therefore, all of the sound emitted from the driver travels through the horn and exits through an opening of approximately 2.5 mm. Directivity test data show that this design meets all of the requirements set forth by ISO 3745.

  3. Assessment of sound levels in a neonatal intensive care unit in tabriz, iran.

    PubMed

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-03-01

    High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.
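
    A small sketch of how Leq and an exceedance level such as L10 can be computed from a series of short-term A-weighted levels; the synthetic samples below stand in for sound level meter data.

      import numpy as np

      def leq(levels_db):
          """Equivalent continuous level: energy average of short-term levels."""
          return 10 * np.log10(np.mean(10 ** (np.asarray(levels_db) / 10)))

      def l_exceeded(levels_db, percent=10):
          """L_N: level exceeded N% of the time (e.g. L10)."""
          return np.percentile(levels_db, 100 - percent)

      # Synthetic 1-second A-weighted levels over one hour.
      rng = np.random.default_rng(0)
      samples = 60 + 4 * rng.standard_normal(3600)
      print(f"Leq = {leq(samples):.1f} dB(A),  L10 = {l_exceeded(samples, 10):.1f} dB(A)")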

  4. Assessment of Sound Levels in a Neonatal Intensive Care Unit in Tabriz, Iran

    PubMed Central

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-01-01

    Introduction: High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). Methods: In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Results: Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Conclusion: Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound. PMID:25276706

  5. Hydrodynamic phonon drift and second sound in a (20,20) single-wall carbon nanotube

    DOE PAGES

    Lee, Sangyeop; Lindsay, Lucas

    2017-05-18

    Here, two hydrodynamic features of phonon transport, phonon drift and second sound, in a (20,20) single-wall carbon nanotube (SWCNT) are discussed using lattice dynamics calculations employing an optimized Tersoff potential for atomic interactions. We formally derive a formula for the contribution of the drift motion of phonons to the total heat flux at steady state. It is found that the drift motion of phonons carries more than 70% and 90% of the heat at 300 K and 100 K, respectively, indicating that phonon flow can be reasonably approximated as hydrodynamic if the SWCNT is long enough to avoid ballistic phonon transport. The dispersion relation of second sound is derived from the Peierls-Boltzmann transport equation with Callaway's scattering model and quantifies the speed of second sound and its relaxation. The speed of second sound is around 4000 m/s in a (20,20) SWCNT, and the second sound can propagate more than 10 m in an isotopically pure (20,20) SWCNT for frequencies around 1 GHz at 100 K.

  6. Hydrodynamic phonon drift and second sound in a (20,20) single-wall carbon nanotube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangyeop; Lindsay, Lucas

    Here, two hydrodynamic features of phonon transport, phonon drift and second sound, in a (20,20) single-wall carbon nanotube (SWCNT) are discussed using lattice dynamics calculations employing an optimized Tersoff potential for atomic interactions. We formally derive a formula for the contribution of the drift motion of phonons to the total heat flux at steady state. It is found that the drift motion of phonons carries more than 70% and 90% of the heat at 300 K and 100 K, respectively, indicating that phonon flow can be reasonably approximated as hydrodynamic if the SWCNT is long enough to avoid ballistic phonon transport. The dispersion relation of second sound is derived from the Peierls-Boltzmann transport equation with Callaway's scattering model and quantifies the speed of second sound and its relaxation. The speed of second sound is around 4000 m/s in a (20,20) SWCNT, and the second sound can propagate more than 10 m in an isotopically pure (20,20) SWCNT for frequencies around 1 GHz at 100 K.

  7. In situ analysis of measurements of auroral dynamics and structure

    NASA Astrophysics Data System (ADS)

    Mella, Meghan R.

    Two auroral sounding rocket case studies, one on the dayside and one on the nightside, explore aspects of poleward boundary aurora. The nightside sounding rocket, Cascades-2, was launched on 20 March 2009 at 11:04:00 UT from the Poker Flat Research Range in Alaska and flew across a series of poleward boundary intensifications (PBIs). Each of the crossings has fundamentally different in situ electron energy and pitch angle structure, and different ground optics images of visible aurora. The different particle distributions show signatures of both a quasistatic acceleration mechanism and an Alfvenic acceleration mechanism, as well as combinations of both. The Cascades-2 experiment is the first sounding rocket observation of a PBI sequence, enabling a detailed investigation of the electron signatures and optical aurora associated with various stages of a PBI sequence as it evolves from an Alfvenic to a more quasistatic structure. The dayside sounding rocket, Scifer-2, was launched on 18 January 2008 at 7:30 UT from the Andoya Rocket Range in Andenes, Norway. It flew northward through the cleft region during a Poleward Moving Auroral Form (PMAF) event. Both the dayside and nightside flights observed dispersed, precipitating ions, each of a different nature. The dispersion signatures depend on, among other things, the MLT sector, altitude, source region, and precipitation mechanism. It is found that small changes in the shape of the dispersion have a large influence on whether the precipitation was localized or extended over a range of altitudes. It is also found that a single Maxwellian source will not replicate the data, but rather a sum of Maxwellians of different temperatures, similar to a Kappa distribution, most closely reproduces the data. The various particle signatures are used to argue that both events have similar magnetospheric drivers, that is, Bursty Bulk Flows in the magnetotail.

  8. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels and that IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.

  9. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels and that IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658
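
    As a rough illustration of how empirical IPD and ILD distributions of the kind analyzed above can be obtained from a binaural recording, the sketch below computes per-frequency interaural phase and level differences with a short-time Fourier transform. The frame length and the pooling of all time-frequency bins into histograms are assumptions, not the authors' processing pipeline.

    ```python
    import numpy as np
    from scipy.signal import stft

    def binaural_cues(left, right, fs, nperseg=1024):
        """Sketch: per-bin interaural phase (IPD, rad) and level (ILD, dB)
        differences from a two-channel (binaural) recording."""
        f, _, L = stft(left, fs, nperseg=nperseg)
        _, _, R = stft(right, fs, nperseg=nperseg)
        ipd = np.angle(L * np.conj(R))                    # phase difference per bin/frame
        eps = 1e-20
        ild = 10 * np.log10((np.abs(L) ** 2 + eps) / (np.abs(R) ** 2 + eps))
        return f, ipd, ild

    # Histograms of ipd and ild across frames (per frequency channel) give the
    # kind of empirical cue distributions discussed in the record.
    ```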

  10. Experimental study of outdoor propagation of spherically spreading periodic acoustic waves of finite amplitude

    NASA Technical Reports Server (NTRS)

    Theobald, M. A.

    1977-01-01

    The outdoor propagation of spherically spreading sound waves of finite amplitude was investigated. The main purpose of the experiments was to determine the extent to which the outdoor environment, mainly random inhomogeneity of the medium, affects finite amplitude propagation. Periodic sources with fundamental frequencies in the range 6 to 8 kHz and source levels SPL1m from 140 to 149 dB were used. The sources were an array of 7 to 10 horn drivers and a siren. The propagation path was vertical and parallel to an 85 m tower, whose elevator carried the traveling microphone. The general conclusions drawn from the experimental results were as follows. The inhomogeneities caused significant fluctuations in the instantaneous acoustic signal, but with sufficient time averaging of the measured harmonic levels, the results were comparable to results expected for propagation in a quiet medium. Propagation data for the fundamental of the siren approached within 1 dB of the weak shock saturation levels. Extra attenuation on the order of 8 dB was observed. The measurements generally confirmed the predictions of several theoretical models. The maximum propagation distance was 36 m. The narrowbeam arrays were much weaker sources. Nonlinear propagation distortion was produced, but the maximum value of extra attenuation measured was 1.5 dB. The maximum propagation distance was 76 m. The behavior of the asymmetric waveforms received in one experiment qualitatively suggested that beam-type diffraction effects were present. The role of diffraction of high intensity sound waves in radiation from a single horn was briefly investigated.

  11. Determination of equivalent sound speed profiles for ray tracing in near-ground sound propagation.

    PubMed

    Prospathopoulos, John M; Voutsinas, Spyros G

    2007-09-01

    The determination of appropriate sound speed profiles in the modeling of near-ground propagation using a ray tracing method is investigated using a ray tracing model which is capable of performing axisymmetric calculations of the sound field around an isolated source. Eigenrays are traced using an iterative procedure which integrates the trajectory equations for each ray launched from the source at a specific direction. The calculation of sound energy losses is made by introducing appropriate coefficients to the equations representing the effect of ground and atmospheric absorption and the interaction with the atmospheric turbulence. The model is validated against analytical and numerical predictions of other methodologies for simple cases, as well as against measurements for nonrefractive atmospheric environments. A systematic investigation for near-ground propagation in downward and upward refractive atmosphere is made using experimental data. Guidelines for the suitable simulation of the wind velocity profile are derived by correlating predictions with measurements.
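
    The following minimal sketch illustrates the kind of ray integration such a model performs for a stratified (height-dependent) sound speed profile; the Euler step, the example profile, and the omission of ground reflection and losses are simplifying assumptions rather than the model described above.

    ```python
    import numpy as np

    def trace_ray(c_of_z, dcdz, z0, theta0, ds=0.1, n_steps=5000):
        """Minimal 2D ray trace for a height-dependent sound speed c(z).
        theta0 is the launch angle from horizontal (rad). Ground reflection
        and energy losses are omitted; simple Euler integration."""
        x, z, theta = 0.0, z0, theta0
        path = [(x, z)]
        for _ in range(n_steps):
            c = c_of_z(z)
            x += ds * np.cos(theta)
            z += ds * np.sin(theta)
            # ray curvature from the vertical sound-speed gradient (Snell's law)
            theta += -ds * np.cos(theta) * dcdz(z) / c
            path.append((x, z))
        return np.array(path)

    # Example: downward-refracting linear profile c(z) = 340 + 0.1*z (assumed)
    ray = trace_ray(lambda z: 340.0 + 0.1 * z, lambda z: 0.1,
                    z0=2.0, theta0=np.radians(5))
    ```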

  12. Acoustic centering of sources measured by surrounding spherical microphone arrays.

    PubMed

    Hagai, Ilan Ben; Pollow, Martin; Vorländer, Michael; Rafaely, Boaz

    2011-10-01

    The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods. © 2011 Acoustical Society of America

  13. Strategies for achieving sustained competitive advantage.

    PubMed

    Schlosser, J R

    1987-06-01

    Sound strategic planning, even in the midst of unprecedented uncertainty and turmoil, is a critical element of every successful health care organization's action plan. The author examines how one organization has responded to the changing demands of the marketplace and a dramatically changed reimbursement system through appropriate strategic planning, selective downsizing on certain fronts and new product development expansion on others. The result is an organization molded to the new environment. It is no longer based on an illness-model hospital but rather focuses on a vertically integrated multi-health cluster intent on capturing market share by providing a single source continuum of health care.

  14. L-type calcium channels refine the neural population code of sound level.

    PubMed

    Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana

    2016-12-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1-1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.

  15. Cognitive and Linguistic Sources of Variance in 2-Year-Olds' Speech-Sound Discrimination: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Lalonde, Kaylah; Holt, Rachael Frush

    2014-01-01

    Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2-year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically…

  16. The use of an intraoral electrolarynx for an edentulous patient: a clinical report.

    PubMed

    Wee, Alvin G; Wee, Lisa A; Cheng, Ansgar C; Cwynar, Roger B

    2004-06-01

    This clinical report describes the clinical requirements, treatment sequence, and use of a relatively new intraoral electrolarynx for a completely edentulous patient. This device consists of a sound source attached to the maxilla and a hand-held controller unit that controls the pitch and volume of the intraoral sound source via transmitted radio waves.

  17. Spherical beamforming for spherical array with impedance surface

    NASA Astrophysics Data System (ADS)

    Tontiwattanakul, Khemapat

    2018-01-01

    Spherical microphone array beamforming has been a popular research topic in recent years. Owing to their isotropic beam pattern in three-dimensional space over a certain frequency range, such arrays are widely used in many applications such as sound field recording, acoustic beamforming, and noise source localisation. The body of a spherical array is usually considered perfectly rigid. The sound field captured by the sensors on a spherical array can be decomposed into a series of spherical harmonics. In noise source localisation, the amplitude density of sound sources is estimated and illustrated by means of colour maps. In this work, a rigid spherical array covered by fibrous materials is studied via numerical simulation and the performance of the spherical beamforming is discussed.
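
    For readers unfamiliar with array-based source mapping, the sketch below shows a generic frequency-domain delay-and-sum beamformer that produces the kind of amplitude map referred to above. It is not the spherical-harmonic or impedance-surface formulation studied in this record, and all variable names are assumptions.

    ```python
    import numpy as np

    def delay_and_sum_map(p_f, mic_pos, freq, directions, c=343.0):
        """Generic frequency-domain delay-and-sum beamformer (illustrative).
        p_f: complex microphone pressures at one frequency, shape (M,).
        mic_pos: microphone coordinates, shape (M, 3).
        directions: candidate source directions as unit vectors, shape (D, 3).
        Returns a relative power map over the candidate directions."""
        k = 2 * np.pi * freq / c
        # plane-wave steering vectors for each candidate direction
        steering = np.exp(1j * k * directions @ mic_pos.T)   # (D, M)
        weights = steering / steering.shape[1]
        output = weights.conj() @ p_f                         # (D,)
        return np.abs(output) ** 2
    ```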

  18. Study on the Non-contact Acoustic Inspection Method for Concrete Structures by using Strong Ultrasonic Sound source

    NASA Astrophysics Data System (ADS)

    Sugimoto, Tsuneyoshi; Uechi, Itsuki; Sugimoto, Kazuko; Utagawa, Noriyuki; Katakura, Kageyoshi

    The hammering test is widely used to inspect for defects in concrete structures. However, this method is difficult to apply at high places, such as a tunnel ceiling or a bridge girder, and its detection accuracy depends on the tester's experience. Therefore, we study a non-contact acoustic inspection method for concrete structures using airborne sound waves and a laser Doppler vibrometer. In this method, the concrete surface is excited by an airborne sound wave emitted from a long range acoustic device (LRAD), and the vibration velocity on the concrete surface is measured by a laser Doppler vibrometer. A defective part is detected through the same flexural resonance exploited in the hammering method. It has already been shown that a defect can be detected from a distance of 5 m or more using a concrete test object, and that the method can also be applied to real concrete structures. However, when the conventional LRAD was used as the sound source, there were problems such as restrictions on the measurement angle and the surrounding noise. In order to solve these problems, a basic examination using a strong ultrasonic sound source was carried out. In the experiment, a concrete test object containing an imitation defect was measured from a distance of 5 m. The experimental results show that, with the ultrasonic sound source, the restrictions on the measurement angle become less severe and the surrounding noise is also reduced dramatically.

  19. 75 FR 39915 - Marine Mammals; File No. 15483

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... whales adjust their bearing to avoid received sound pressure levels greater than 120 dB, which would... marine mammals may be taken by Level B harassment as researchers attempt to provoke an avoidance response through sound transmission into their environment. The sound source consists of a transmitter and...

  20. 24 CFR 51.103 - Criteria and standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...-night average sound level produced as the result of the accumulation of noise from all sources contributing to the external noise environment at the site. Day-night average sound level, abbreviated as DNL and symbolized as Ldn, is the 24-hour average sound level, in decibels, obtained after addition of 10...

  1. Light aircraft sound transmission studies - Noise reduction model

    NASA Technical Reports Server (NTRS)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1987-01-01

    Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also presented. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
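
    A minimal sketch of a diffuse-field room equation of the kind referred to above is given below: the cabin interior level is predicted from the sound power level transmitted into the cabin and the cabin absorption. The single-term reverberant form and the example numbers are assumptions for illustration, not the study's calibrated model.

    ```python
    import numpy as np

    def interior_spl(Lw_transmitted, surface_area, mean_alpha):
        """Reverberant-field room equation sketch: interior SPL (dB) from the
        transmitted sound power level (dB re 1 pW), the interior surface area
        (m^2), and the mean absorption coefficient. Assumes a diffuse field."""
        R = surface_area * mean_alpha / (1.0 - mean_alpha)   # room constant, m^2
        return Lw_transmitted + 10 * np.log10(4.0 / R)

    # Hypothetical example: 85 dB transmitted power level, 12 m^2 surface, alpha = 0.3
    print(interior_spl(85.0, 12.0, 0.3))
    ```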

  2. Characterisation of structure-borne sound source using reception plate method.

    PubMed

    Putra, A; Saari, N F; Bakri, H; Ramlan, R; Dan, R M

    2013-01-01

    A laboratory-based experimental procedure of the reception plate method for structure-borne sound source characterisation is reported in this paper. The method uses the assumption that the input power from the source installed on the plate is equal to the power dissipated by the plate. In this experiment, rectangular plates having high and low mobility relative to that of the source were used as the reception plates, and a small electric fan motor acted as the structure-borne source. The data representing the source characteristics, namely the free velocity and the source mobility, were obtained and compared with those from direct measurement. Assumptions and constraints in employing this method are discussed.
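
    The power balance underlying the reception plate method can be written as P = ω·η·M·<v²>, i.e., the structure-borne power injected by the source equals the power dissipated in the plate, with <v²> the spatially averaged mean-square plate velocity. The sketch below evaluates this per frequency band; the variable names and the use of a few discrete accelerometer positions for the spatial average are assumptions.

    ```python
    import numpy as np

    def reception_plate_power(v_rms_points, plate_mass, loss_factor, freq):
        """Reception-plate estimate of structure-borne input power in one band:
        P = omega * eta * M * <v^2>, where <v^2> is the spatial average of the
        squared rms velocities measured at several plate positions."""
        v2_mean = np.mean(np.asarray(v_rms_points) ** 2)     # spatial average, (m/s)^2
        omega = 2 * np.pi * freq
        return omega * loss_factor * plate_mass * v2_mean    # watts
    ```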

  3. Complete data listings for CSEM soundings on Kilauea Volcano, Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kauahikaua, J.; Jackson, D.B.; Zablocki, C.J.

    1983-01-01

    This document contains complete data from a controlled-source electromagnetic (CSEM) sounding/mapping project at Kilauea volcano, Hawaii. The data were obtained at 46 locations about a fixed-location, horizontal, polygonal loop source in the summit area of the volcano. The data consist of magnetic field amplitudes and phases at excitation frequencies between 0.04 and 8 Hz. The vector components were measured in a cylindrical coordinate system centered on the loop source. 5 references.

  4. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2010-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.

  5. Modeling the utility of binaural cues for underwater sound localization.

    PubMed

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
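
    As a simple stand-in for the neural-network estimator described above, the sketch below estimates the inter-receiver time difference from a paired-hydrophone recording by locating the peak of the cross-correlation. The lag search range and the sign convention are assumptions.

    ```python
    import numpy as np

    def estimate_itd(left, right, fs, max_lag_s=1e-3):
        """Cross-correlation estimate of the arrival-time difference between
        two receivers. With this numpy convention, a negative peak lag means
        the sound reached the left (first) channel earlier."""
        corr = np.correlate(left, right, mode="full")
        lags = np.arange(-(len(right) - 1), len(left))
        keep = np.abs(lags) <= int(max_lag_s * fs)
        peak = lags[keep][np.argmax(corr[keep])]
        return peak / fs   # seconds
    ```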

  6. The meaning of city noises: Investigating sound quality in Paris (France)

    NASA Astrophysics Data System (ADS)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie

    2004-05-01

    The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and using recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This entails considering both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.

  7. Sound source localization on an axial fan at different operating points

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes

    2016-08-01

    A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.

  8. Impact of NICU design on environmental noise.

    PubMed

    Szymczak, Stacy E; Shellhaas, Renée A

    2014-04-01

    For neonates requiring intensive care, the optimal sound environment is uncertain. Minimal disruptions from medical staff create quieter environments for sleep, but limit language exposure necessary for proper language development. There are two models of neonatal intensive care units (NICUs): open-bay, in which 6-to-10 infants are cared for in a single large room; and single-room, in which neonates are housed in private, individual hospital rooms. We compared the acoustic environments in the two NICU models. We extracted the audio tracks from video-electroencephalography (EEG) monitoring studies from neonates in an open-bay NICU and compared the acoustic environment to that recorded from neonates in a new single-room NICU. From each NICU, 18 term infants were studied (total N=36; mean gestational age 39.3±1.9 weeks). Neither z-scores of the sound level variance (0.088±0.03 vs. 0.083±0.03, p=0.7), nor percent time with peak sound variance (above 2 standard deviations; 3.6% vs. 3.8%, p=0.6) were different. However, time below 0.05 standard deviations was higher in the single-room NICU (76% vs. 70%, p=0.02). We provide objective evidence that single-room NICUs have equal sound peaks and overall noise level variability compared with open-bay units, but the former may offer significantly more time at lower noise levels.

  9. Male sperm whale acoustic behavior observed from multipaths at a single hydrophone

    NASA Astrophysics Data System (ADS)

    Laplanche, Christophe; Adam, Olivier; Lopatka, Maciej; Motsch, Jean-François

    2005-10-01

    Sperm whales generate transient sounds (clicks) when foraging. These clicks have been described as echolocation sounds, a result of having measured the source level and the directionality of these signals and having extrapolated results from biosonar tests made on some small odontocetes. The authors propose a passive acoustic technique requiring only one hydrophone to investigate the acoustic behavior of free-ranging sperm whales. They estimate whale pitch angles from the multipath distribution of click energy. They emphasize the close bond between the sperm whale's physical and acoustic activity, leading to the hypothesis that sperm whales might, like some small odontocetes, control click level and rhythm. An echolocation model estimating the range of the sperm whale's targets from the interclick interval is computed and tested during different stages of the whale's dive. Such a hypothesis on the echolocation process would indicate that sperm whales echolocate their prey layer when initiating their dives and follow a methodic technique when foraging.
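
    The echolocation model mentioned above rests on a simple geometric idea: if a new click is emitted only after the echo of the previous one has returned, the inter-click interval bounds the two-way travel time, so the target range is roughly c·ICI/2. A minimal sketch, with an assumed sound speed of 1500 m/s:

    ```python
    def target_range_from_ici(inter_click_interval_s, sound_speed=1500.0):
        """Illustrative range estimate from the inter-click interval (ICI),
        assuming the ICI equals the two-way echo travel time."""
        return sound_speed * inter_click_interval_s / 2.0

    print(target_range_from_ici(0.5))   # about 375 m for a 0.5 s ICI
    ```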

  10. Near-field noise of a single-rotation propfan at an angle of attack

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Envia, E.; Clark, B. J.; Groeneweg, J. F.

    1990-01-01

    The near field noise characteristics of a propfan operating at an angle of attack are examined utilizing the unsteady pressure field obtained from a 3-D Euler simulation of the propfan flowfield. The near field noise is calculated employing three different procedures: a direct computation method in which the noise field is extracted directly from the Euler solution, and two acoustic-analogy-based frequency domain methods which utilize the computed unsteady pressure distribution on the propfan blades as the source term. The inflow angles considered are -0.4, 1.6, and 4.6 degrees. The results of the direct computation method and one of the frequency domain methods show qualitative agreement with measurements. They show that an increase in the inflow angle is accompanied by an increase in the sound pressure level at the outboard wing boom locations and a decrease in the sound pressure level at the (inboard) fuselage locations. The trends in the computed azimuthal directivities of the noise field also conform to the measured and expected results.

  11. Study of the Acoustic Effects of Hydrokinetic Tidal Turbines in Admiralty Inlet, Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Polagye; Jim Thomson; Chris Bassett

    2012-03-30

    Hydrokinetic turbines will be a source of noise in the marine environment - both during operation and during installation/removal. High intensity sound can cause injury or behavioral changes in marine mammals and may also affect fish and invertebrates. These noise effects are, however, highly dependent on the individual marine animals; the intensity, frequency, and duration of the sound; and context in which the sound is received. In other words, production of sound is a necessary, but not sufficient, condition for an environmental impact. At a workshop on the environmental effects of tidal energy development, experts identified sound produced by turbines as an area of potentially significant impact, but also high uncertainty. The overall objectives of this project are to improve our understanding of the potential acoustic effects of tidal turbines by: (1) Characterizing sources of existing underwater noise; (2) Assessing the effectiveness of monitoring technologies to characterize underwater noise and marine mammal responsiveness to noise; (3) Evaluating the sound profile of an operating tidal turbine; and (4) Studying the effect of turbine sound on surrogate species in a laboratory environment. This study focuses on a specific case study for tidal energy development in Admiralty Inlet, Puget Sound, Washington (USA), but the methodologies and results are applicable to other turbine technologies and geographic locations. The project succeeded in achieving the above objectives and, in doing so, substantially contributed to the body of knowledge around the acoustic effects of tidal energy development in several ways: (1) Through collection of data from Admiralty Inlet, established the sources of sound generated by strong currents (mobilizations of sediment and gravel) and determined that low-frequency sound recorded during periods of strong currents is non-propagating pseudo-sound. This helped to advance the debate within the marine and hydrokinetics acoustic community as to whether strong currents produce propagating sound. (2) Analyzed data collected from a tidal turbine operating at the European Marine Energy Center to develop a profile of turbine sound and developed a framework to evaluate the acoustic effects of deploying similar devices in other locations. This framework has been applied to Public Utility District No. 1 of Snohomish County's demonstration project in Admiralty Inlet to inform postinstallation acoustic and marine mammal monitoring plans. (3) Demonstrated passive acoustic techniques to characterize the ambient noise environment at tidal energy sites (fixed, long-term observations recommended) and characterize the sound from anthropogenic sources (drifting, short-term observations recommended). (4) Demonstrated the utility and limitations of instrumentation, including bottom mounted instrumentation packages, infrared cameras, and vessel monitoring systems. In doing so, also demonstrated how this type of comprehensive information is needed to interpret observations from each instrument (e.g., hydrophone data can be combined with vessel tracking data to evaluate the contribution of vessel sound to ambient noise). (5) Conducted a study that suggests harbor porpoise in Admiralty Inlet may be habituated to high levels of ambient noise due to omnipresent vessel traffic. The inability to detect behavioral changes associated with a high intensity source of opportunity (passenger ferry) has informed the approach for post-installation marine mammal monitoring.
(6) Conducted laboratory exposure experiments of juvenile Chinook salmon and showed that exposure to a worse than worst case acoustic dose of turbine sound does not result in changes to hearing thresholds or biologically significant tissue damage. Collectively, this means that Chinook salmon may be at a relatively low risk of injury from sound produced by tidal turbines located in or near their migration path. In achieving these accomplishments, the project has significantly advanced the District's goals of developing a demonstration-scale tidal energy project in Admiralty Inlet. Pilot demonstrations of this type are an essential step in the development of commercial-scale tidal energy in the United States. This is a renewable resource capable of producing electricity in a highly predictable manner.

  12. Increasing low frequency sound attenuation using compounded single layer of sonic crystal

    NASA Astrophysics Data System (ADS)

    Gulia, Preeti; Gupta, Arpan

    2018-05-01

    Sonic crystals (SC) are man-made periodic structures in which sound-hard scatterers are arranged in a crystalline manner. An SC reduces noise in a particular range of frequencies called a band gap. Sonic crystals have a promising application in noise shielding; however, the application is limited by the size of the structure. Particularly for low frequencies, the structure becomes quite bulky, restricting its practical use. This paper presents a compounded model of SC, which has the same overall area and filling fraction but increased low-frequency sound attenuation. Two cases have been considered, a three-layer SC and a compounded single-layer SC. Both models have been analyzed using finite element simulation and the plane wave expansion method. Band gaps for the periodic structures have been obtained using both methods and are in good agreement. Further, the sound transmission loss has been evaluated using the finite element method. The results demonstrate the use of the compounded model of sonic crystal for low-frequency sound attenuation.

  13. Concerns of the Institute of Transport Study and Research for reducing the sound level inside completely repaired buses. [noise and vibration control

    NASA Technical Reports Server (NTRS)

    Groza, A.; Calciu, J.; Nicola, I.; Ionasek, A.

    1974-01-01

    Sound level measurements on noise sources on buses are used to observe the effects of attenuating acoustic pressure levels inside the bus by sound-proofing during complete repair. A spectral analysis of the sound level as a function of motor speed, bus speed along the road, and the category of the road is reported.

  14. A sound budget for the southeastern Bering Sea: measuring wind, rainfall, shipping, and other sources of underwater sound.

    PubMed

    Nystuen, Jeffrey A; Moore, Sue E; Stabeno, Phyllis J

    2010-07-01

    Ambient sound in the ocean contains quantifiable information about the marine environment. A passive aquatic listener (PAL) was deployed at a long-term mooring site in the southeastern Bering Sea from 27 April through 28 September 2004. This was a chain mooring with lots of clanking. However, the sampling strategy of the PAL filtered through this noise and allowed the background sound field to be quantified for natural signals. Distinctive signals include the sound from wind, drizzle and rain. These sources dominate the sound budget and their intensity can be used to quantify wind speed and rainfall rate. The wind speed measurement has an accuracy of ±0.4 m/s when compared to a buoy-mounted anemometer. The rainfall rate measurement is consistent with a land-based measurement in the Aleutian chain at Cold Bay, AK (170 km south of the mooring location). Other identifiable sounds include ships and short transient tones. The PAL was designed to reject transients in the range important for quantification of wind speed and rainfall, but serendipitously recorded peaks in the sound spectrum between 200 Hz and 3 kHz. Some of these tones are consistent with whale calls, but most are apparently associated with mooring self-noise.

  15. Functional morphology of the sound-generating labia in the syrinx of two songbird species.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-01-01

    In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function.

  16. Functional morphology of the sound-generating labia in the syrinx of two songbird species

    PubMed Central

    Riede, Tobias; Goller, Franz

    2010-01-01

    In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function. PMID:19900184

  17. [Reading ability of junior high school students in relation to self-evaluation and depression].

    PubMed

    Yamashita, Toshiya; Hayashi, Takashi

    2012-01-01

    Guidelines for the diagnosis of reading disorders in elementary school students were published recently in Japan. On the basis of these guidelines, we administered reading test batteries to 43 Japanese junior high-school students from grade two. The reading test consisted of single sounds, single words, and single sentences. We evaluated the reading speed and the number of reading errors made by the test takers; their performance was compared with the normal value for elementary school students in grade six, as stated in the guidelines. The reading ability of the junior high-school students was not higher than that of the elementary school students. Seven students (16.3%) were found to have reading difficulties (RD group) and they met the criterion for diagnosis of reading disorder as per the guidelines. Three students had difficulties in reading single sounds and single words, but they faced no problems when reading single sentences. It was supposed that the strategies used by the students for reading sentences may have differed from those used for reading single sounds or single words. No significant differences were found between the RD and non-RD group students on scores of scholastic self-evaluation, self-esteem, and depressive symptoms. Therefore, reading difficulty did not directly influence the level of self-evaluation or depression.

  18. Theory of acoustic design of opera house and a design proposal

    NASA Astrophysics Data System (ADS)

    Ando, Yoichi

    2004-05-01

    First of all, the theory of subjective preference for sound fields based on a model of the auditory-brain system is briefly described. It consists of temporal factors and spatial factors associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub). These preferred conditions are related to the minimum value of the effective duration of the running autocorrelation function of the source signals, (τe)min. The spatial criteria are the binaural listening level (LL) and the IACC, which may be extracted from the interaural cross-correlation function. In the opera house there are two different kinds of sound sources, i.e., the vocal source, with relatively short values of (τe)min, on the stage and the orchestra music, with long values of (τe)min, in the pit. For these sources, a proposal is made here.
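
    A rough sketch of how the effective duration of the running autocorrelation function, τe, can be estimated from a source signal is given below: τe is taken as the delay at which the normalized ACF envelope decays to 10% (-10 dB) of its zero-lag value. The frame length, the window, and the direct threshold crossing (rather than extrapolating the initial decay) are simplifying assumptions.

    ```python
    import numpy as np

    def effective_duration(signal, fs, frame_s=2.0):
        """Rough estimate of tau_e for one analysis frame: delay at which the
        normalized autocorrelation magnitude first drops below 0.1 (-10 dB)."""
        n = int(frame_s * fs)
        frame = signal[:n] * np.hanning(n)
        acf = np.correlate(frame, frame, mode="full")[n - 1:]   # non-negative lags
        acf = np.abs(acf) / acf[0]                              # crude normalized envelope
        below = np.where(acf < 0.1)[0]
        return below[0] / fs if below.size else frame_s          # seconds
    ```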

  19. Unattended Exposure to Components of Speech Sounds Yields Same Benefits as Explicit Auditory Training

    ERIC Educational Resources Information Center

    Seitz, Aaron R.; Protopapas, Athanassios; Tsushima, Yoshiaki; Vlahou, Eleni L.; Gori, Simone; Grossberg, Stephen; Watanabe, Takeo

    2010-01-01

    Learning a second language as an adult is particularly effortful when new phonetic representations must be formed. Therefore the processes that allow learning of speech sounds are of great theoretical and practical interest. Here we examined whether perception of single formant transitions, that is, sound components critical in speech perception,…

  20. Diversity in sound pressure levels and estimated active space of resident killer whale vocalizations.

    PubMed

    Miller, Patrick J O

    2006-05-01

    Signal source intensity and detection range, which integrates source intensity with propagation loss, background noise and receiver hearing abilities, are important characteristics of communication signals. Apparent source levels were calculated for 819 pulsed calls and 24 whistles produced by free-ranging resident killer whales by triangulating the angles-of-arrival of sounds on two beamforming arrays towed in series. Levels in the 1-20 kHz band ranged from 131 to 168 dB re 1 microPa at 1 m, with differences in the means of different sound classes (whistles: 140.2+/-4.1 dB; variable calls: 146.6+/-6.6 dB; stereotyped calls: 152.6+/-5.9 dB), and among stereotyped call types. Repertoire diversity carried through to estimates of active space, with "long-range" stereotyped calls all containing overlapping, independently-modulated high-frequency components (mean estimated active space of 10-16 km in sea state zero) and "short-range" sounds (5-9 km) included all stereotyped calls without a high-frequency component, whistles, and variable calls. Short-range sounds are reported to be more common during social and resting behaviors, while long-range stereotyped calls predominate in dispersed travel and foraging behaviors. These results suggest that variability in sound pressure levels may reflect diverse social and ecological functions of the acoustic repertoire of killer whales.
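
    The notion of active space used above can be illustrated with a simple propagation budget: the active range is the distance at which the received level (source level minus spreading and absorption losses) falls to the background noise plus a detection threshold. In the sketch below, the spherical spreading law, the absorption coefficient, and the zero-dB threshold are assumptions, not the study's model.

    ```python
    import numpy as np

    def active_space(source_level_db, noise_level_db, detection_threshold_db=0.0,
                     alpha_db_per_km=0.04, r_max_km=50.0):
        """Illustrative active-space estimate (km): largest range at which
        SL - 20*log10(r) - alpha*r >= noise level + detection threshold."""
        r = np.linspace(0.001, r_max_km, 50000)                       # km
        received = (source_level_db
                    - 20 * np.log10(r * 1000.0)                       # spherical spreading
                    - alpha_db_per_km * r)                            # absorption
        audible = received >= noise_level_db + detection_threshold_db
        return r[audible][-1] if audible.any() else 0.0

    # e.g. a stereotyped call (153 dB) against 70 dB background noise
    print(active_space(153.0, 70.0))
    ```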

  1. Sounds and source levels from bowhead whales off Pt. Barrow, Alaska.

    PubMed

    Cummings, W C; Holliday, D V

    1987-09-01

    Sounds were recorded from bowhead whales migrating past Pt. Barrow, AK, to the Canadian Beaufort Sea. They mainly consisted of various low-frequency (25- to 900-Hz) moans and well-defined sound sequences organized into "song" (20-5000 Hz) recorded with our 2.46-km hydrophone array suspended from the ice. Songs were composed of up to 20 repeated phrases (mean, 10) which lasted up to 146 s (mean, 66.3). Several bowhead whales often were within acoustic range of the array at once, but usually only one sang at a time. Vocalizations exhibited diurnal peaks of occurrence (0600-0800, 1600-1800 h). Sounds which were located in the horizontal plane had peak source spectrum levels as follows--44 moans: 129-178 dB re: 1 microPa, 1 m (median, 159); 3 garglelike utterances: 152, 155, and 169 dB; 33 songs: 158-189 dB (median, 177), all presumably from different whales. Based on ambient noise levels, measured total propagation loss, and whale sound source levels, our detection of whale sounds was theoretically noise-limited beyond 2.5 km (moans) and beyond 10.7 km (songs), a model supported by actual localizations. This study showed that over much of the shallow Arctic and sub-Arctic waters, underwater communications of the bowhead whale would be limited to much shorter ranges than for other large whales in lower latitude, deep-water regions.

  2. Neural population encoding and decoding of sound source location across sound level in the rabbit inferior colliculus

    PubMed Central

    Delgutte, Bertrand

    2015-01-01

    At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
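
    A maximum-likelihood population decoder of the kind referred to above can be sketched as follows: assuming independent Poisson spike counts with mean rates given by each neuron's azimuth tuning curve, the decoded azimuth is the one maximizing the summed log-likelihood. The Poisson and independence assumptions, and all variable names, are illustrative.

    ```python
    import numpy as np

    def ml_decode_azimuth(spike_counts, tuning_curves, azimuths):
        """Maximum-likelihood decoder sketch. spike_counts: one count per
        neuron, shape (n_neurons,). tuning_curves: mean firing rates per
        neuron over candidate azimuths, shape (n_neurons, n_azimuths)."""
        rates = np.clip(tuning_curves, 1e-9, None)           # avoid log(0)
        k = np.asarray(spike_counts)[:, None]                # (n_neurons, 1)
        # Poisson log-likelihood, dropping the k! term (constant in azimuth)
        log_like = np.sum(k * np.log(rates) - rates, axis=0)
        return azimuths[int(np.argmax(log_like))]
    ```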

  3. Active noise control using a steerable parametric array loudspeaker.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2010-06-01

    Active noise control can suppress sound at designated control points, but the sound pressure at locations other than the targets is likely to increase. The reason is clear: a control source normally radiates sound omnidirectionally. To cope with this problem, this paper introduces a parametric array loudspeaker (PAL), which produces a spatially focused sound beam owing to the ultrasound used as a carrier wave, thereby allowing one to suppress the sound pressure at a designated point without causing spillover in the whole sound field. First, the fundamental characteristics of the PAL are overviewed. The scattered pressure in the near field contributed by the source strength of the PAL is then described, which is needed for the design of an active noise control system. Furthermore, the optimal control law for minimizing the sound pressure at control points is derived, and the control effect is investigated analytically and experimentally. With a view to tracking a moving target point, a steerable PAL based upon a phased array scheme is presented, with the result that the generation of a moving zone of quiet becomes possible without mechanically rotating the PAL. An experiment is finally conducted, demonstrating the validity of the proposed method.
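
    In its simplest quadratic form, an optimal control law of the kind mentioned above is a least-squares problem: choose the complex secondary-source strengths q that minimize the summed squared pressure at the control points, giving q = -(G^H G)^{-1} G^H p. The sketch below is a generic illustration under that assumption; G and p are a hypothetical transfer matrix and primary pressure vector, not quantities from this paper.

    ```python
    import numpy as np

    def optimal_secondary_strengths(G, p_primary):
        """Least-squares optimal control sketch: minimize ||p_primary + G q||^2
        over the complex secondary-source strengths q. G has shape
        (n_control_points, n_sources); p_primary has shape (n_control_points,)."""
        q, *_ = np.linalg.lstsq(G, -p_primary, rcond=None)
        return q   # residual pressure at the control points: p_primary + G @ q
    ```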

  4. A mechanism study of sound wave-trapping barriers.

    PubMed

    Yang, Cheng; Pan, Jie; Cheng, Li

    2013-09-01

    The performance of a sound barrier is usually degraded if a large reflecting surface is placed on the source side. A wave-trapping barrier (WTB), with its inner surface covered by wedge-shaped structures, has been proposed to confine waves within the area between the barrier and the reflecting surface, and thus improve the performance. In this paper, the deterioration in performance of a conventional sound barrier due to the reflecting surface is first explained in terms of the resonance effect of the trapped modes. At each resonance frequency, a strong and mode-controlled sound field is generated by the noise source both within and in the vicinity outside the region bounded by the sound barrier and the reflecting surface. It is found that the peak sound pressures in the barrier's shadow zone, which correspond to the minimum values in the barrier's insertion loss, are largely determined by the resonance frequencies and by the shapes and losses of the trapped modes. These peak pressures usually result in high sound intensity component impinging normal to the barrier surface near the top. The WTB can alter the sound wave diffraction at the top of the barrier if the wavelengths of the sound wave are comparable or smaller than the dimensions of the wedge. In this case, the modified barrier profile is capable of re-organizing the pressure distribution within the bounded domain and altering the acoustic properties near the top of the sound barrier.

  5. Surface acoustical intensity measurements on a diesel engine

    NASA Technical Reports Server (NTRS)

    Mcgary, M. C.; Crocker, M. J.

    1980-01-01

    The use of surface intensity measurements as an alternative to the conventional selective wrapping technique for noise source identification and ranking on diesel engines was investigated. A six-cylinder, in-line, turbocharged, 350-horsepower diesel engine was used. Sound power was measured under anechoic conditions for eight separate parts of the engine at steady-state operating conditions using the conventional technique. Sound power measurements were then repeated on five separate parts of the engine using the surface intensity method at the same steady-state operating conditions. The results were compared by plotting sound power level against frequency and by comparing the noise source rankings obtained with the two methods.
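
    In the surface intensity method, the sound power of each engine part follows from integrating the measured normal intensity over that part's surface. A minimal bookkeeping sketch (Python) is given below; the intensity and area values are made up for illustration and are not measurements from this study.

        import numpy as np

        def sound_power_level(intensities, areas, ref_power=1e-12):
            """Sound power level from normal surface intensities (W/m^2) and patch areas (m^2)."""
            power = float(np.sum(np.asarray(intensities) * np.asarray(areas)))  # total power, W
            return 10.0 * np.log10(power / ref_power)                           # dB re 1 pW

        # Hypothetical surface patches on one engine part (illustrative values only)
        intensities = [2.0e-4, 5.0e-5, 1.2e-4]   # time-averaged normal intensity, W/m^2
        areas       = [0.30, 0.45, 0.25]          # patch areas, m^2
        print(round(sound_power_level(intensities, areas), 1), "dB re 1 pW")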

  6. Building Acoustics

    NASA Astrophysics Data System (ADS)

    Cowan, James

    This chapter summarizes and explains key concepts of building acoustics. These issues include the behavior of sound waves in rooms, the most commonly used rating systems for sound and sound control in buildings, the most common noise sources found in buildings, practical noise control methods for these sources, and the specific topic of office acoustics. Common noise issues for multi-dwelling units can be derived from most of the sections of this chapter. Books can be and have been written on each of these topics, so the purpose of this chapter is to summarize this information and provide appropriate resources for further exploration of each topic.

  7. Comparison of the benefits of cochlear implantation versus contra-lateral routing of signal hearing aids in adult patients with single-sided deafness: study protocol for a prospective within-subject longitudinal trial.

    PubMed

    Kitterick, Pádraig T; O'Donoghue, Gerard M; Edmondson-Jones, Mark; Marshall, Andrew; Jeffs, Ellen; Craddock, Louise; Riley, Alison; Green, Kevin; O'Driscoll, Martin; Jiang, Dan; Nunn, Terry; Saeed, Shakeel; Aleksy, Wanda; Seeber, Bernhard U

    2014-01-01

    Individuals with a unilateral severe-to-profound hearing loss, or single-sided deafness, report difficulty with listening in many everyday situations despite having access to well-preserved acoustic hearing in one ear. The standard of care for single-sided deafness available on the UK National Health Service is a contra-lateral routing of signals hearing aid which transfers sounds from the impaired ear to the non-impaired ear. This hearing aid has been found to improve speech understanding in noise when the signal-to-noise ratio is more favourable at the impaired ear than the non-impaired ear. However, the indiscriminate routing of signals to a single ear can have detrimental effects when interfering sounds are located on the side of the impaired ear. Recent published evidence has suggested that cochlear implantation in individuals with a single-sided deafness can restore access to the binaural cues which underpin the ability to localise sounds and segregate speech from other interfering sounds. The current trial was designed to assess the efficacy of cochlear implantation compared to a contra-lateral routing of signals hearing aid in restoring binaural hearing in adults with acquired single-sided deafness. Patients are assessed at baseline and after receiving a contra-lateral routing of signals hearing aid. A cochlear implant is then provided to those patients who do not receive sufficient benefit from the hearing aid. This within-subject longitudinal design reflects the expected care pathway should cochlear implantation be provided for single-sided deafness on the UK National Health Service. The primary endpoints are measures of binaural hearing at baseline, after provision of a contra-lateral routing of signals hearing aid, and after cochlear implantation. Binaural hearing is assessed in terms of the accuracy with which sounds are localised and speech is perceived in background noise. The trial is also designed to measure the impact of the interventions on hearing- and health-related quality of life. This multi-centre trial was designed to provide evidence for the efficacy of cochlear implantation compared to the contra-lateral routing of signals. A purpose-built sound presentation system and established measurement techniques will provide reliable and precise measures of binaural hearing. Current Controlled Trials http://www.controlled-trials.com/ISRCTN33301739 (05/JUL/2013).

  8. A Computational and Experimental Study of Resonators in Three Dimensions

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W.; Ju, H.; Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.

    2009-01-01

    In a previous work by the present authors, a computational and experimental investigation of the acoustic properties of two-dimensional slit resonators was carried out. The present paper reports the results of a study extending the previous work to three dimensions. This investigation has two basic objectives. The first is to validate the computed results from direct numerical simulations of the flow and acoustic fields of slit resonators in three dimensions by comparing with experimental measurements in a normal incidence impedance tube. The second objective is to study the flow physics of resonant liners responsible for sound wave dissipation. Extensive comparisons are provided between computed and measured acoustic liner properties with both discrete frequency and broadband sound sources. Good agreements are found over a wide range of frequencies and sound pressure levels. Direct numerical simulation confirms the previous finding in two dimensions that vortex shedding is the dominant dissipation mechanism at high sound pressure intensity. However, it is observed that the behavior of the shed vortices in three dimensions is quite different from those of two dimensions. In three dimensions, the shed vortices tend to evolve into ring (circular in plan form) vortices, even though the slit resonator opening from which the vortices are shed has an aspect ratio of 2.5. Under the excitation of discrete frequency sound, the shed vortices align themselves into two regularly spaced vortex trains moving away from the resonator opening in opposite directions. This is different from the chaotic shedding of vortices found in two-dimensional simulations. The effect of slit aspect ratio at a fixed porosity is briefly studied. For the range of liners considered in this investigation, it is found that the absorption coefficient of a liner increases when the open area of the single slit is subdivided into multiple, smaller slits.

  9. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE PAGES

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    2017-02-04

    The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, findings highlight the importance for future research to utilize accurate localization systems, different species, validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  10. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, findings highlight the importance for future research to utilize accurate localization systems, different species, validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  11. On Identifying the Sound Sources in a Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Goldstein, M. E.

    2008-01-01

    A space-time filtering approach is used to divide an unbounded turbulent flow into its radiating and non-radiating components. The result is then used to clarify a number of issues including the possibility of identifying the sources of the sound in such flows. It is also used to investigate the efficacy of some of the more recent computational approaches.

  12. The sound field of a rotating dipole in a plug flow.

    PubMed

    Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H

    2018-04-01

    An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.

  13. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar geometry arrays. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. Together, these two algorithms form the robust sound source localization approach. More accurate steering vectors can then be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
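
    The W2D-MUSIC algorithm itself is not detailed in this record. As a point of reference, the sketch below (Python) implements the standard narrowband, far-field MUSIC pseudospectrum from which such subspace variants depart; the array geometry, frequency, and single-source scenario are assumptions of the illustration, not the paper's improved near-field broadband model.

        import numpy as np

        def music_spectrum(R, steering, n_sources):
            """Standard narrowband MUSIC pseudospectrum.

            R         : (M, M) spatial covariance matrix of the M-microphone array
            steering  : (M, n_angles) candidate steering vectors
            n_sources : assumed number of sources
            """
            eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
            En = eigvecs[:, :-n_sources]                   # noise subspace
            proj = En.conj().T @ steering                  # projection onto noise subspace
            return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

        # Toy example: 8-microphone uniform linear array, one 4 kHz source at 20 degrees
        M, d, f, c = 8, 0.05, 4000.0, 343.0
        angles = np.deg2rad(np.arange(-90, 91))
        k = 2 * np.pi * f / c
        mics = np.arange(M) * d
        steering = np.exp(-1j * k * np.outer(mics, np.sin(angles)))    # (M, n_angles)
        a_true = steering[:, np.argmin(np.abs(np.rad2deg(angles) - 20))]
        R = np.outer(a_true, a_true.conj()) + 0.01 * np.eye(M)          # signal + noise covariance
        spec = music_spectrum(R, steering, n_sources=1)
        print("estimated DOA:", np.rad2deg(angles[np.argmax(spec)]), "deg")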

  14. CAVITATION SOUNDS DURING CERVICOTHORACIC SPINAL MANIPULATION

    PubMed Central

    Mourad, Firas; Zingoni, Andrea; Iorio, Raffaele; Perreault, Thomas; Zacharko, Noah; de las Peñas, César Fernández; Butts, Raymond; Cleland, Joshua A.

    2017-01-01

    Background No study has previously investigated the side, duration or number of audible cavitation sounds during high-velocity low-amplitude (HVLA) thrust manipulation to the cervicothoracic spine. Purpose The primary purpose was to determine which side of the spine cavitates during cervicothoracic junction (CTJ) HVLA thrust manipulation. Secondary aims were to calculate the average number of cavitations, the duration of cervicothoracic thrust manipulation, and the duration of a single cavitation. Study Design Quasi-experimental study Methods Thirty-two patients with upper trapezius myalgia received two cervicothoracic HVLA thrust manipulations targeting the right and left T1-2 articulation, respectively. Two high sampling rate accelerometers were secured bilaterally 25 mm lateral to midline of the T1-2 interspace. For each manipulation, two audio signals were extracted using Short-Time Fourier Transformation (STFT) and singularly processed via spectrogram calculation in order to evaluate the frequency content and number of instantaneous energy bursts of both signals over time for each side of the CTJ. Result Unilateral cavitation sounds were detected in 53 (91.4%) of 58 cervicothoracic HVLA thrust manipulations and bilateral cavitation sounds were detected in just five (8.6%) of the 58 thrust manipulations; that is, cavitation was significantly (p<0.001) more likely to occur unilaterally than bilaterally. In addition, cavitation was significantly (p<0.0001) more likely to occur on the side contralateral to the clinician's short-lever applicator. The mean number of audible cavitations per manipulation was 4.35 (95% CI 2.88, 5.76). The mean duration of a single manipulation was 60.77 ms (95% CI 28.25, 97.42) and the mean duration of a single audible cavitation was 4.13 ms (95% CI 0.82, 7.46). In addition to single-peak and multi-peak energy bursts, spectrogram analysis also demonstrated high frequency sounds, low frequency sounds, and sounds of multiple frequencies for all 58 manipulations. Discussion Cavitation was significantly more likely to occur unilaterally, and on the side contralateral to the short-lever applicator contact, during cervicothoracic HVLA thrust manipulation. Clinicians should expect multiple cavitation sounds when performing HVLA thrust manipulation to the CTJ. Due to the presence of multi-peak energy bursts and sounds of multiple frequencies, the cavitation hypothesis (i.e. intra-articular gas bubble collapse) alone appears unable to explain all of the audible sounds during HVLA thrust manipulation, and the possibility remains that several phenomena may be occurring simultaneously. Level of Evidence 2b PMID:28900571
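
    Spectrogram-based detection of instantaneous energy bursts of the kind described can be sketched as follows (Python). The sampling rate, window settings, detection threshold, and synthetic "pop" signal are assumptions of the illustration, not parameters from the study.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 44_100                                    # assumed sampling rate, Hz
        t = np.arange(0, 0.2, 1 / fs)
        rng = np.random.default_rng(1)
        signal = 0.01 * rng.standard_normal(t.size)    # background noise
        for onset in (0.050, 0.095, 0.140):            # three synthetic "pops"
            idx = int(onset * fs)
            signal[idx:idx + 200] += np.hanning(200) * np.sin(2 * np.pi * 3000 * t[:200])

        f, tt, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)
        energy = Sxx.sum(axis=0)                       # broadband energy per time frame
        threshold = energy.mean() + 5 * energy.std()   # simple burst criterion
        print("energy bursts near t =", np.round(tt[energy > threshold], 3), "s")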

  15. Active Exhaust Silencing System For the Management of Auxiliary Power Unit Sound Signatures

    DTIC Science & Technology

    2014-08-01

    conceptual mass-less pistons are introduced into the system before and after the injection site, such that they will move exactly with the plane wave... either the primary source or the injected source. It is assumed that the pistons are ‘close...source, it causes both pistons to move identically. The pressures induced by the flow on the pistons do not affect the flow generated by the

  16. The rotary subwoofer: a controllable infrasound source.

    PubMed

    Park, Joseph; Garcés, Milton; Thigpen, Bruce

    2009-04-01

    The rotary subwoofer is a novel acoustic transducer capable of projecting infrasonic signals at high sound pressure levels. The projector produces higher acoustic particle velocities than conventional transducers which translate into higher radiated sound pressure levels. This paper characterizes measured performance of a rotary subwoofer and presents a model to predict sound pressure levels.

  17. Physics of thermo-acoustic sound generation

    NASA Astrophysics Data System (ADS)

    Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.

    2013-09-01

    We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.

  18. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  19. Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.

    PubMed

    Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin

    2018-04-25

    Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate a ball's motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization method using a 2D microelectromechanical system (MEMS) microphone array and delay-and-sum beamforming is presented to estimate the firing position. The time and position of the ball in 3D space are determined from a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
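
    A minimal delay-and-sum beamformer for azimuth estimation with a small planar microphone array can be sketched as follows (Python). The square array geometry, source frequency, and plane-wave (far-field) model are assumptions of this illustration rather than details of the system described above.

        import numpy as np

        def das_power_map(frames_fft, freqs, mic_xy, c=343.0, n_az=72):
            """Delay-and-sum beamformer power over candidate azimuths (planar array, far field).

            frames_fft : (M, n_freqs) complex spectra, one snapshot per microphone
            freqs      : (n_freqs,) FFT bin frequencies in Hz
            mic_xy     : (M, 2) microphone positions in metres
            """
            azimuths = np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False)
            directions = np.stack([np.cos(azimuths), np.sin(azimuths)], axis=1)   # (n_az, 2)
            delays = (mic_xy @ directions.T) / c                                  # (M, n_az)
            # Align each microphone to each look direction, then sum coherently
            steer = np.exp(-2j * np.pi * freqs[None, None, :] * delays.T[:, :, None])
            beam = np.einsum('amf,mf->af', steer, frames_fft)
            return np.rad2deg(azimuths), np.sum(np.abs(beam) ** 2, axis=1)

        # Toy example: 4-microphone square array, single 1 kHz plane wave from 60 degrees
        fs, f0, c = 16_000, 1_000.0, 343.0
        mic_xy = 0.05 * np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
        src_dir = np.array([np.cos(np.deg2rad(60.0)), np.sin(np.deg2rad(60.0))])
        t = np.arange(512) / fs
        signals = np.stack([np.sin(2 * np.pi * f0 * (t + (m @ src_dir) / c)) for m in mic_xy])
        az, power = das_power_map(np.fft.rfft(signals, axis=1), np.fft.rfftfreq(512, 1 / fs), mic_xy)
        print("estimated azimuth:", az[np.argmax(power)], "deg")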

  20. Impulsivity of Noise due to Single Lightweight Vehicles Transit on Transverse Rumble Strip

    NASA Astrophysics Data System (ADS)

    Darus, N.; Haron, Z.; Yahya, K.; Halil, M. H. Abd; Norudin, W. M. A.; Othman, M. H.; Hezmi, M. A.

    2018-03-01

    Transverse Rumble Strips (TRS) act as a safety device that alerts inattentive drivers to potential dangers. However, the noise produced by TRS has been reported as an annoyance among residents living adjacent to roadways. Thus, this paper investigates the impulsivity characteristics of the noise due to single lightweight vehicles transiting a TRS. The objectives of this study are to determine the increase in sound level and to evaluate the impulsivity of the noise. Two TRS profiles, namely middle overlapped (MO) and middle layer overlapped (MLO), were selected. Three types of single lightweight vehicles, namely hatchback, sedan and multipurpose vehicle (MPV), were tested at speeds of 30, 50 and 70 km/h. The sound level was measured using a sound level meter (SLM). Noise indices such as LAeq, LAIeqT, LAImax, LAFmax and LASmax were obtained from the measurements. This study considered the differences LAImax - LAFmax > 2 dBA, LAFmax - LAeq ≥ 10 dBA, LAIeqT - LAeq ≥ 2 dBA and LAImax - LASmax > 6 dBA to evaluate the impulsivity of the noise. It was found that the TRS increased the sound level by at most 6 dBA. Furthermore, all single lightweight vehicle transits on the TRS showed significant impulsive characteristics. These results indicate that TRS produce a significant impact on nearby residents.
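
    The impulsivity criteria listed above reduce to simple level differences; the sketch below (Python) evaluates them for one transit. The numerical indices in the example are made up for illustration and are not measurements from this study.

        def impulsivity_flags(LAeq, LAIeqT, LAImax, LAFmax, LASmax):
            """Evaluate the impulsivity criteria used in the study (all levels in dBA)."""
            return {
                "LAImax - LAFmax > 2":  LAImax - LAFmax > 2.0,
                "LAFmax - LAeq >= 10":  LAFmax - LAeq >= 10.0,
                "LAIeqT - LAeq >= 2":   LAIeqT - LAeq >= 2.0,
                "LAImax - LASmax > 6":  LAImax - LASmax > 6.0,
            }

        # Illustrative (made-up) indices for one vehicle transit over the strips
        print(impulsivity_flags(LAeq=68.0, LAIeqT=71.5, LAImax=84.0, LAFmax=80.5, LASmax=76.0))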

  1. Compensating Level-Dependent Frequency Representation in Auditory Cortex by Synaptic Integration of Corticocortical Input

    PubMed Central

    Happel, Max F. K.; Ohl, Frank W.

    2017-01-01

    Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062

  2. Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae

    PubMed Central

    Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063

  3. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    PubMed

    Pine, Matthew K; Jeffs, Andrew G; Radford, Craig A

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  4. Neural plasticity associated with recently versus often heard objects.

    PubMed

    Bourquin, Nathalie M-P; Spierer, Lucas; Murray, Micah M; Clarke, Stephanie

    2012-09-01

    In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and more long-lasting plastic changes occurring predominantly within right hemispheric networks, which are known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Speech training alters tone frequency tuning in rat primary auditory cortex

    PubMed Central

    Engineer, Crystal T.; Perez, Claudia A.; Carraway, Ryan S.; Chang, Kevin Q.; Roland, Jarod L.; Kilgard, Michael P.

    2013-01-01

    Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing. PMID:24344364

  6. Speech perception with combined electric-acoustic stimulation and bilateral cochlear implants in a multisource noise field.

    PubMed

    Rader, Tobias; Fastl, Hugo; Baumann, Uwe

    2013-01-01

    The aim of the study was to measure and compare speech perception in users of electric-acoustic stimulation (EAS) supported by a hearing aid in the unimplanted ear and in bilateral cochlear implant (CI) users under different noise and sound field conditions. Gap listening was assessed by comparing performance in unmodulated and modulated Comité Consultatif International Téléphonique et Télégraphique (CCITT) noise conditions, and binaural interaction was investigated by comparing single source and multisource sound fields. Speech perception in noise was measured using a closed-set sentence test (Oldenburg Sentence Test, OLSA) in a multisource noise field (MSNF) consisting of a four-loudspeaker array with independent noise sources and a single source in frontal position (S0N0). Speech simulating noise (Fastl-noise), CCITT-noise (continuous), and OLSA-noise (pseudo continuous) served as noise sources with different temporal patterns. Speech tests were performed in two groups of subjects who were using either EAS (n = 12) or bilateral CIs (n = 10). All subjects in the EAS group were fitted with a high-power hearing aid in the opposite ear (bimodal EAS). The average group score on monosyllable in quiet was 68.8% (EAS) and 80.5% (bilateral CI). A group of 22 listeners with normal hearing served as controls to compare and evaluate potential gap listening effects in implanted patients. Average speech reception thresholds in the EAS group were significantly lower than those for the bilateral CI group in all test conditions (CCITT 6.1 dB, p = 0.001; Fastl-noise 5.4 dB, p < 0.01; Oldenburg-(OL)-noise 1.6 dB, p < 0.05). Bilateral CI and EAS user groups showed a significant improvement of 4.3 dB (p = 0.004) and 5.4 dB (p = 0.002) between S0N0 and MSNF sound field conditions respectively, which signifies advantages caused by bilateral interaction in both groups. Performance in the control group showed a significant gap listening effect with a difference of 6.5 dB between modulated and unmodulated noise in S0N0, and a difference of 3.0 dB in MSNF. The ability to "glimpse" into short temporal masker gaps was absent in both groups of implanted subjects. Combined EAS in one ear supported by a hearing aid on the contralateral ear provided significantly improved speech perception compared with bilateral cochlear implantation. Although the scores for monosyllable words in quiet were higher in the bilateral CI group, the EAS group performed better in different noise and sound field conditions. Furthermore, the results indicated that binaural interaction between EAS in one ear and residual acoustic hearing in the opposite ear enhances speech perception in complex noise situations. Both bilateral CI and bimodal EAS users did not benefit from short temporal masker gaps, therefore the better performance of the EAS group in modulated noise conditions could be explained by the improved transmission of fundamental frequency cues in the lower-frequency region of acoustic hearing, which might foster the grouping of auditory objects.

  7. Media and the Learner: the Influence of Media-Message Components on Students' Recall and Attitudes Toward the Learning Experience.

    ERIC Educational Resources Information Center

    Hempstead, John Orson

    The level of abstraction of the message and the educational effects of five media presentations (Print, verbal sound, print/pictures, print/verbal sound, and pictures/verbal sound) were experimentally investigated. The media components were presented singly or in combination to 6th grade students in a uniformly controlled consistent environment.…

  8. Locating arbitrarily time-dependent sound sources in three dimensional space in real time.

    PubMed

    Wu, Sean F; Zhu, Na

    2010-08-01

    This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling the acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even when it moves toward or behind the measurement microphones. Practical limitations of this method are discussed.
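
    The record combines a point-source propagation model with triangulation; one common way to realize this is a least-squares inversion of time differences of arrival (TDOAs). The sketch below (Python) assumes the four-microphone, three-orthogonal-axes geometry described above with an arbitrary 0.2 m spacing, noise-free synthetic TDOAs, and SciPy for the solver; all numerical values are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        C = 343.0  # speed of sound in air, m/s

        # Four microphones: one at the origin and one on each orthogonal axis (assumed geometry)
        mics = np.array([[0.0, 0.0, 0.0],
                         [0.2, 0.0, 0.0],
                         [0.0, 0.2, 0.0],
                         [0.0, 0.0, 0.2]])

        def tdoa_residuals(xyz, tdoas):
            """Residuals between modelled and measured TDOAs (relative to microphone 0)."""
            dists = np.linalg.norm(mics - xyz, axis=1)
            return (dists[1:] - dists[0]) / C - tdoas

        def locate(tdoas, guess=(1.0, 1.0, 1.0)):
            return least_squares(tdoa_residuals, guess, args=(tdoas,)).x

        # Toy example: synthesize TDOAs from a known source position, then invert them
        source = np.array([1.5, 0.8, 0.6])
        d = np.linalg.norm(mics - source, axis=1)
        tdoas = (d[1:] - d[0]) / C
        print("estimated source position:", np.round(locate(tdoas), 3))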

  9. Comparison between bilateral cochlear implants and Neurelec Digisonic(®) SP Binaural cochlear implant: speech perception, sound localization and patient self-assessment.

    PubMed

    Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René

    2013-01-01

    An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.

  10. Bilateral and multiple cavitation sounds during upper cervical thrust manipulation

    PubMed Central

    2013-01-01

    Background The popping produced during high-velocity, low-amplitude (HVLA) thrust manipulation is a common sound; however to our knowledge, no study has previously investigated the location of cavitation sounds during manipulation of the upper cervical spine. The primary purpose was to determine which side of the spine cavitates during C1-2 rotatory HVLA thrust manipulation. Secondary aims were to calculate the average number of pops, the duration of upper cervical thrust manipulation, and the duration of a single cavitation. Methods Nineteen asymptomatic participants received two upper cervical thrust manipulations targeting the right and left C1-2 articulation, respectively. Skin mounted microphones were secured bilaterally over the transverse process of C1, and sound wave signals were recorded. Identification of the side, duration, and number of popping sounds were determined by simultaneous analysis of spectrograms with audio feedback using custom software developed in Matlab. Results Bilateral popping sounds were detected in 34 (91.9%) of 37 manipulations while unilateral popping sounds were detected in just 3 (8.1%) manipulations; that is, cavitation was significantly (P < 0.001) more likely to occur bilaterally than unilaterally. Of the 132 total cavitations, 72 occurred ipsilateral and 60 occurred contralateral to the targeted C1-2 articulation. In other words, cavitation was no more likely to occur on the ipsilateral than the contralateral side (P = 0.294). The mean number of pops per C1-2 rotatory HVLA thrust manipulation was 3.57 (95% CI: 3.19, 3.94) and the mean number of pops per subject following both right and left C1-2 thrust manipulations was 6.95 (95% CI: 6.11, 7.79). The mean duration of a single audible pop was 5.66 ms (95% CI: 5.36, 5.96) and the mean duration of a single manipulation was 96.95 ms (95% CI: 57.20, 136.71). Conclusions Cavitation was significantly more likely to occur bilaterally than unilaterally during upper cervical HVLA thrust manipulation. Most subjects produced 3–4 pops during a single rotatory HVLA thrust manipulation targeting the right or left C1-2 articulation; therefore, practitioners of spinal manipulative therapy should expect multiple popping sounds when performing upper cervical thrust manipulation to the atlanto-axial joint. Furthermore, the traditional manual therapy approach of targeting a single ipsilateral or contralateral facet joint in the upper cervical spine may not be realistic. PMID:23320608

  11. Open Source Software Openfoam as a New Aerodynamical Simulation Tool for Rocket-Borne Measurements

    NASA Astrophysics Data System (ADS)

    Staszak, T.; Brede, M.; Strelnikov, B.

    2015-09-01

    The only way to make in-situ measurements, which are very important experimental studies for atmospheric science, in the mesosphere/lower thermosphere (MLT) is to use sounding rockets. The drawback of using rockets is the shock wave that appears because of the very high speed of the rocket (typically about 1000 m/s). This shock wave disturbs the density, temperature and velocity fields in the vicinity of the rocket relative to the undisturbed values in the atmosphere. This effect, however, can be quantified, and the measured data have to be corrected not just to make them more precise but to make them usable at all. The commonly accepted and widely used tool for these calculations is the Direct Simulation Monte Carlo (DSMC) technique developed by G. A. Bird, which is available as a stand-alone program limited to a single processor. Apart from the complications in simulating flows around bodies in the different flow regimes of the MLT altitude range, which arise from the exponential density change over several orders of magnitude, a particular hardware configuration introduces significant difficulty for aerodynamic calculations: the choice of grid sizes depends both on the demands of adequate DSMC sampling and on good resolution of geometries with scale differences of several orders of magnitude. This makes the calculation time unreasonably long or even prevents the calculation algorithm from converging. In this paper we apply the free open-source software OpenFOAM (licensed under the GNU GPL) to a three-dimensional CFD simulation of the flow around a sounding rocket instrument. An advantage of this software package, among other things, is that it can run on high-performance clusters, which are easily scalable. We present the first results and discuss the potential of the new tool in applications for sounding rockets.

  12. 200 kHz Commercial Sonar Systems Generate Lower Frequency Side Lobes Audible to Some Marine Mammals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Zhiqun; Southall, Brandon; Carlson, Thomas J.

    2014-04-15

    The spectral properties of pulses transmitted by three commercially available 200 kHz echo sounders were measured to assess the possibility that sound energy below the center (carrier) frequency might be heard by marine mammals. The study found that all three sounders generated sound at frequencies below the center frequency and within the hearing range of some marine mammals, and that this sound was likely detectable by the animals over limited ranges. However, at standard operating source levels for the sounders, the sound below the center frequency was well below potentially harmful levels. It was concluded that the sounds generated by the sounders could affect the behavior of marine mammals within fairly close proximity to the sources and that the blanket exclusion of echo sounders from environmental impact analysis based solely on the center frequency output in relation to the range of marine mammal hearing should be reconsidered.

  13. Blue whales respond to simulated mid-frequency military sonar.

    PubMed

    Goldbogen, Jeremy A; Southall, Brandon L; DeRuiter, Stacy L; Calambokidis, John; Friedlaender, Ari S; Hazen, Elliott L; Falcone, Erin A; Schorr, Gregory S; Douglas, Annie; Moretti, David J; Kyburg, Chris; McKenna, Megan F; Tyack, Peter L

    2013-08-22

    Mid-frequency military (1-10 kHz) sonars have been associated with lethal mass strandings of deep-diving toothed whales, but the effects on endangered baleen whale species are virtually unknown. Here, we used controlled exposure experiments with simulated military sonar and other mid-frequency sounds to measure behavioural responses of tagged blue whales (Balaenoptera musculus) in feeding areas within the Southern California Bight. Despite using source levels orders of magnitude below some operational military systems, our results demonstrate that mid-frequency sound can significantly affect blue whale behaviour, especially during deep feeding modes. When a response occurred, behavioural changes varied widely from cessation of deep feeding to increased swimming speed and directed travel away from the sound source. The variability of these behavioural responses was largely influenced by a complex interaction of behavioural state, the type of mid-frequency sound and received sound level. Sonar-induced disruption of feeding and displacement from high-quality prey patches could have significant and previously undocumented impacts on baleen whale foraging ecology, individual fitness and population health.

  14. 77 FR 42279 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-18

    ... sound waves emanating from the pile, thereby reducing the sound energy. A confined bubble curtain... physically block sound waves and they prevent air bubbles from migrating away from the pile. The literature... acoustic pressure wave propagates out from a source, was estimated as so-called "practical spreading loss...

  15. Physics and Psychophysics of High-Fidelity Sound. Part III: The Components of a Sound-Reproducing System: Amplifiers and Loudspeakers.

    ERIC Educational Resources Information Center

    Rossing, Thomas D.

    1980-01-01

    Described are the components for a high-fidelity sound-reproducing system which focuses on various program sources, the amplifier, and loudspeakers. Discussed in detail are amplifier power and distortion, air suspension, loudspeaker baffles and enclosures, bass-reflex enclosure, drone cones, rear horn and acoustic labyrinth enclosures, horn…

  16. Auditory enhancement of increments in spectral amplitude stems from more than one source.

    PubMed

    Carcagno, Samuele; Semal, Catherine; Demany, Laurent

    2012-10-01

    A component of a test sound consisting of simultaneous pure tones perceptually "pops out" if the test sound is preceded by a copy of itself with that component attenuated. Although this "enhancement" effect was initially thought to be purely monaural, it is also observable when the test sound and the precursor sound are presented contralaterally (i.e., to opposite ears). In experiment 1, we assessed the magnitude of ipsilateral and contralateral enhancement as a function of the time interval between the precursor and test sounds (10, 100, or 600 ms). The test sound, randomly transposed in frequency from trial to trial, was followed by a probe tone, either matched or mismatched in frequency to the test sound component which was the target of enhancement. Listeners' ability to discriminate matched probes from mismatched probes was taken as an index of enhancement magnitude. The results showed that enhancement decays more rapidly for ipsilateral than for contralateral precursors, suggesting that ipsilateral enhancement and contralateral enhancement stem from at least partly different sources. It could be hypothesized that, in experiment 1, contralateral precursors were effective only because they provided attentional cues about the target tone frequency. In experiment 2, this hypothesis was tested by presenting the probe tone before the precursor sound rather than after the test sound. Although the probe tone was then serving as a frequency cue, contralateral precursors were again found to produce enhancement. This indicates that contralateral enhancement cannot be explained by cuing alone and is a genuine sensory phenomenon.

  17. Experiments to investigate the acoustic properties of sound propagation

    NASA Astrophysics Data System (ADS)

    Dagdeviren, Omur E.

    2018-07-01

    Propagation of sound waves is one of the fundamental concepts in physics. Some of the properties of sound propagation such as attenuation of sound intensity with increasing distance are familiar to everybody from the experiences of daily life. However, the frequency dependence of sound propagation and the effect of acoustics in confined environments are not straightforward to estimate. In this article, we propose experiments, which can be conducted in a classroom environment with commonly available devices such as smartphones and laptops to measure sound intensity level as a function of the distance between the source and the observer and frequency of the sound. Our experiments and deviations from the theoretical calculations can be used to explain basic concepts of sound propagation and acoustics to a diverse population of students.
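
    The distance dependence that such classroom measurements approximate is spherical spreading from a point source, i.e. a 6 dB drop per doubling of distance. A minimal sketch (Python) of the predicted level is given below; the reference level and the distances are illustrative.

        import numpy as np

        def spl_at_distance(spl_ref, r_ref, r):
            """Free-field SPL of a point source at distance r, given the SPL at r_ref (spherical spreading)."""
            return spl_ref - 20.0 * np.log10(np.asarray(r) / r_ref)

        # Illustrative values: 80 dB measured at 1 m from the loudspeaker
        for r in (1.0, 2.0, 4.0, 8.0):
            print(f"{r:3.0f} m : {spl_at_distance(80.0, 1.0, r):5.1f} dB")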

  18. Empirical wind model for the middle and lower atmosphere. Part 2: Local time variations

    NASA Technical Reports Server (NTRS)

    Hedin, A. E.; Fleming, E. L.; Manson, A. H.; Schmidlin, F. J.; Avery, S. K.; Clark, R. R.; Franke, S. J.; Fraser, G. J.; Tsuda, T.; Vial, F.

    1993-01-01

    The HWM90 thermospheric wind model was revised in the lower thermosphere and extended into the mesosphere and lower atmosphere to provide a single analytic model for calculating zonal and meridional wind profiles representative of the climatological average for various geophysical conditions. Local time variations in the mesosphere are derived from rocket soundings, incoherent scatter radar, MF radar, and meteor radar. Low-order spherical harmonics and Fourier series are used to describe these variations as a function of latitude and day of year with cubic spline interpolation in altitude. The model represents a smoothed compromise between the original data sources. Although agreement between various data sources is generally good, some systematic differences are noted. Overall root mean square differences between measured and model tidal components are on the order of 5 to 10 m/s.

  19. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0, where q denotes the vector of flow variables, into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be modeled perfectly, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of the base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean-flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
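
    For reference, the uniform-base-flow (Lighthill-like) rearrangement can be written out explicitly; the form below (LaTeX) is the standard Lighthill analogy in conventional notation, not notation taken from the paper.

        % Standard Lighthill form: a wave operator acting on the density perturbation,
        % driven by the double divergence of the Lighthill stress tensor T_ij.
        \[
          \frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
          \;=\;
          \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
          \qquad
          T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij} - \tau_{ij}.
        \]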

  20. Sound exposure changes European seabass behaviour in a large outdoor floating pen: Effects of temporal structure and a ramp-up procedure.

    PubMed

    Neo, Y Y; Hubert, J; Bolle, L; Winter, H V; Ten Cate, C; Slabbekoorn, H

    2016-07-01

    Underwater sound from human activities may affect fish behaviour negatively and threaten the stability of fish stocks. However, some fundamental understanding is still lacking for adequate impact assessments and potential mitigation strategies. For example, little is known about the potential contribution of the temporal features of sound, the efficacy of ramp-up procedures, and the generalisability of results from indoor studies to the outdoors. Using a semi-natural set-up, we exposed European seabass in an outdoor pen to four treatments: 1) continuous sound, 2) intermittent sound with a regular repetition interval, 3) irregular repetition intervals and 4) a regular repetition interval with amplitude 'ramp-up'. Upon sound exposure, the fish increased swimming speed and depth, and swam away from the sound source. The behavioural readouts were generally consistent with earlier indoor experiments, but the changes and recovery were more variable and were not significantly influenced by sound intermittency and interval regularity. In addition, the 'ramp-up' procedure elicited immediate diving response, similar to the onset of treatment without a 'ramp-up', but the fish did not swim away from the sound source as expected. Our findings suggest that while sound impact studies outdoors increase ecological and behavioural validity, the inherently higher variability also reduces resolution that may be counteracted by increasing sample size or looking into different individual coping styles. Our results also question the efficacy of 'ramp-up' in deterring marine animals, which warrants more investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Sound Waves Induce Neural Differentiation of Human Bone Marrow-Derived Mesenchymal Stem Cells via Ryanodine Receptor-Induced Calcium Release and Pyk2 Activation.

    PubMed

    Choi, Yura; Park, Jeong-Eun; Jeong, Jong Seob; Park, Jung-Keug; Kim, Jongpil; Jeon, Songhee

    2016-10-01

    Mesenchymal stem cells (MSCs) have shown considerable promise as an adaptable cell source for use in tissue engineering and other therapeutic applications. The aims of this study were to develop methods to test the hypothesis that human MSCs could be differentiated using sound wave stimulation alone and to find the underlying mechanism. Human bone marrow (hBM)-MSCs were stimulated with sound waves (1 kHz, 81 dB) for 7 days and the expression of neural markers was analyzed. Sound waves induced neural differentiation of hBM-MSCs at 1 kHz and 81 dB but not at 1 kHz and 100 dB. To determine the signaling pathways involved in the neural differentiation of hBM-MSCs by sound wave stimulation, we examined Pyk2 and CREB phosphorylation. Sound waves induced increases in the phosphorylation of Pyk2 and CREB at 45 min and 90 min, respectively, in hBM-MSCs. To identify the upstream activator of Pyk2, we examined the intracellular calcium source released by sound wave stimulation. When ryanodine was used as a ryanodine receptor antagonist, sound wave-induced calcium release was suppressed. Moreover, pre-treatment with a Pyk2 inhibitor, PF431396, prevented the phosphorylation of Pyk2 and suppressed sound wave-induced neural differentiation in hBM-MSCs. These results suggest that specific sound wave stimulation could be used as a neural differentiation inducer of hBM-MSCs.

  2. Study on acoustical properties of sintered bronze porous material for transient exhaust noise of pneumatic system

    NASA Astrophysics Data System (ADS)

    Li, Jingxiang; Zhao, Shengdun; Ishihara, Kunihiko

    2013-05-01

    A novel approach is presented to study the acoustical properties of sintered bronze material, in particular for suppressing the transient noise generated by the pneumatic exhaust of pneumatic friction clutch and brake (PFC/B) systems. The transient exhaust noise is impulsive and harmful because of its large sound pressure level (SPL) and high-frequency content. In this paper, the exhaust noise is related to the transient impulsive exhaust flow, which is described by a one-dimensional aerodynamic model combined with a pressure-drop expression based on the Ergun equation. A relation between the flow parameters and the sound source is established. Additionally, a piston acoustic source approximation of the cylindrical sintered bronze silencer is presented to predict the SPL spectrum at a far-field observation point. A semi-phenomenological model is introduced to analyze sound propagation and reduction in the sintered bronze material, which is treated as an equivalent fluid with a rigid frame. Experimental results under different initial cylinder pressures corroborate the validity of the proposed aerodynamic model. In addition, the sound pressures calculated from the equivalent sound source are compared with the measured noise signals in both the time domain and the frequency domain. The influence of the porosity of the sintered bronze material is also discussed.
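
    Since the abstract leans on the Ergun equation for the pressure drop across the sintered bronze, a short hedged sketch of that relation may be useful; the numerical values are illustrative stand-ins, not parameters from the paper.

        # Sketch of the Ergun pressure-drop relation used to couple the exhaust
        # flow to the porous silencer (illustrative values only).
        def ergun_pressure_gradient(u, eps, d_p, mu=1.8e-5, rho=1.2):
            """Pressure gradient dP/dx [Pa/m] across a packed porous layer.

            u    superficial gas velocity [m/s]
            eps  porosity of the sintered bronze [-]
            d_p  effective grain diameter [m]
            mu   dynamic viscosity of the gas [Pa s]
            rho  gas density [kg/m^3]
            """
            viscous = 150.0 * mu * (1 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
            inertial = 1.75 * rho * (1 - eps) * u ** 2 / (eps ** 3 * d_p)
            return viscous + inertial

        print(ergun_pressure_gradient(u=20.0, eps=0.35, d_p=0.1e-3))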

  3. Evidence of Cnidarians sensitivity to sound after exposure to low frequency noise underwater sources

    NASA Astrophysics Data System (ADS)

    Solé, Marta; Lenoir, Marc; Fontuño, José Manuel; Durfort, Mercè; van der Schaar, Mike; André, Michel

    2016-12-01

    Jellyfishes represent a group of species that play an important role in oceans, particularly as a food source for different taxa and as a predator of fish larvae and planktonic prey. The massive introduction of artificial sound sources in the oceans has become a concern to science and society. While we are only beginning to understand that non-hearing specialists like cephalopods can be affected by anthropogenic noise, and regulation is underway to measure noise levels in European waters, we do not yet know whether the impact of sound extends to lower-level taxa of the food web. Here we exposed two species of Mediterranean scyphozoan medusae, Cotylorhiza tuberculata and Rhizostoma pulmo, to a sweep of low-frequency sounds. Scanning electron microscopy (SEM) revealed injuries in the statocyst sensory epithelium of both species after exposure to sound, which are consistent with the massive acoustic trauma observed in other species. The presence of acoustic trauma in marine species that are not hearing specialists, like medusae, shows the magnitude of the noise-pollution problem and the complexity of determining threshold values that would support regulation to prevent permanent damage to ecosystems.

  4. Estimating the sound speed of a shallow-water marine sediment from the head wave excited by a low-flying helicopter.

    PubMed

    Bevans, Dieter A; Buckingham, Michael J

    2017-10-01

    The frequency bandwidth of the sound from a light helicopter, such as a Robinson R44, extends from about 13 Hz to 2.5 kHz. As such, the R44 has potential as a low-frequency sound source in underwater acoustics applications. To explore this idea, an experiment was conducted in shallow water off the coast of southern California in which a horizontal line of hydrophones detected the sound of an R44 hovering in an end-fire position relative to the array. Some of the helicopter sound interacted with the seabed to excite the head wave in the water column. A theoretical analysis of the sound field in the water column generated by a stationary airborne source leads to an expression for the two-point horizontal coherence function of the head wave, which, apart from frequency, depends only on the sensor separation and the sediment sound speed. By matching the zero crossings of the measured and theoretical horizontal coherence functions, the sound speed in the sediment was recovered and found to be 1682.42 ± 16.20 m/s. This is consistent with the sediment type at the experiment site, which is known from a previous survey to be a fine to very fine sand.

  5. Acoustic investigation of wall jet over a backward-facing step using a microphone phased array

    NASA Astrophysics Data System (ADS)

    Perschke, Raimund F.; Ramachandran, Rakesh C.; Raman, Ganesh

    2015-02-01

    The acoustic properties of a wall jet over a hard-walled backward-facing step of aspect ratios 6, 3, 2, and 1.5 are studied using a 24-channel microphone phased array at Mach numbers up to M = 0.6. The Reynolds number based on inflow velocity and step height ranges from Re_h = 3.0 × 10^4 to 7.2 × 10^5. Flow without and with side walls is considered. The experimental setup is open in the wall-normal direction and the expansion ratio is effectively 1. In the case of flow through a duct, symmetry of the flow in the spanwise direction is lost downstream of separation at all but the largest aspect ratio, as revealed by oil paint flow visualization. Hydrodynamic scattering of turbulence from the trailing edge of the step contributes significantly to the radiated sound. Reflection of acoustic waves from the bottom plate results in a modulation of the power spectral densities. Acoustic source localization was conducted using the 24-channel microphone phased array. Convective mean-flow effects on the apparent source origin were assessed by placing a loudspeaker underneath a perforated flat plate and evaluating the displacement of the beamforming peak with inflow Mach number. Two source mechanisms are found near the step. One is due to the interaction of the turbulent wall jet with the convex edge of the step. Free-stream turbulence sound is found to peak downstream of the step. The presence of the side walls increases the free-stream sound. Results of the flow visualization are correlated with the acoustic source maps. Trailing-edge sound and free-stream turbulence sound can be discriminated using source localization.
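
    For readers unfamiliar with how a microphone phased array produces the source maps mentioned above, the following hedged Python sketch implements a basic frequency-domain delay-and-sum beamformer for a synthetic 24-channel array and a hypothetical monopole source; it is a generic illustration, not the processing chain used in the study.

        # Minimal frequency-domain delay-and-sum beamformer for a microphone
        # array (geometry and signals here are hypothetical).
        import numpy as np

        c = 343.0                                   # speed of sound [m/s]
        f = 4000.0                                  # analysis frequency [Hz]
        k = 2 * np.pi * f / c

        mics = np.random.default_rng(1).uniform(-0.5, 0.5, size=(24, 2))  # 24-ch array
        src = np.array([0.3, 0.1])                  # true source position (unknown to the array)

        # Synthesise narrowband mic pressures from a monopole at `src`
        r_true = np.linalg.norm(mics - src, axis=1)
        p = np.exp(-1j * k * r_true) / r_true

        # Scan a grid of candidate positions and steer the array to each
        xg, yg = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
        power = np.zeros_like(xg)
        for i in range(xg.shape[0]):
            for j in range(xg.shape[1]):
                r = np.hypot(mics[:, 0] - xg[i, j], mics[:, 1] - yg[i, j])
                steer = np.exp(1j * k * r) * r      # undo propagation delay and spreading
                power[i, j] = np.abs(np.mean(steer * p)) ** 2

        i, j = np.unravel_index(np.argmax(power), power.shape)
        print("beamforming peak at", xg[i, j], yg[i, j])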

  6. Understanding the Doppler effect by analysing spectrograms of the sound of a passing vehicle

    NASA Astrophysics Data System (ADS)

    Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey

    2017-11-01

    The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a classroom, both theoretically and experimentally, to deepen students’ understanding of the Doppler effect. Included are our own experimental data (48 sound recordings) to allow others to reproduce the analysis, if they cannot repeat the whole experiment themselves. In addition to its educational purpose, this paper examines the percentage errors in our results. This enabled us to determine sources of error, allowing those conducting similar future investigations to optimize their accuracy.
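
    A hedged sketch of the classroom calculation implied above: read the approach and recession frequencies of one engine or tyre tone off the spectrogram and solve the Doppler relations for the car's speed. The two frequencies below are invented example values, not readings from the authors' 48 recordings.

        # Doppler estimate of a passing car's speed from spectrogram readings.
        c = 343.0        # speed of sound in air [m/s]
        f1 = 620.0       # tone frequency while the car approaches [Hz]
        f2 = 560.0       # same tone while the car recedes [Hz]

        # f1 = f0 c/(c - v) and f2 = f0 c/(c + v)  =>  v = c (f1 - f2)/(f1 + f2)
        v = c * (f1 - f2) / (f1 + f2)
        f0 = f1 * (c - v) / c          # emitted frequency, recovered as a check
        print(f"car speed ~ {v:.1f} m/s, emitted tone ~ {f0:.0f} Hz")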

  7. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
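
    As background to the ITD cue mentioned above, a minimal (non-neuromorphic) Python sketch of interaural-time-difference estimation by cross-correlation is given below; the geometry, signals, and mapping to azimuth are illustrative assumptions, not the adaptive algorithm of the paper.

        # ITD-based azimuth estimate for a two-microphone pair.
        import numpy as np

        fs, d, c = 48000, 0.15, 343.0            # sample rate, mic spacing [m], sound speed
        rng = np.random.default_rng(2)

        theta_true = np.deg2rad(25.0)            # true azimuth of the noise source
        itd_true = d * np.sin(theta_true) / c    # far-field time difference
        lag_true = int(round(itd_true * fs))

        x = rng.standard_normal(fs // 2)         # white-noise source signal
        left, right = x, np.roll(x, lag_true)    # right channel is a delayed copy

        # Cross-correlate over physically possible lags and pick the peak
        max_lag = int(np.ceil(d / c * fs))
        lags = np.arange(-max_lag, max_lag + 1)
        xcorr = [np.dot(left, np.roll(right, -l)) for l in lags]
        itd_est = lags[int(np.argmax(xcorr))] / fs

        theta_est = np.degrees(np.arcsin(np.clip(itd_est * c / d, -1, 1)))
        print(f"estimated azimuth ~ {theta_est:.1f} deg")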

  8. Experimental and Analytical Determination of the Geometric Far Field for Round Jets

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James E.; Brown, Clifford E.; Khavaran, Abbas

    2005-01-01

    An investigation was conducted at the NASA Glenn Research Center using a set of three round jets operating under unheated subsonic conditions to address the question: "How close is too close?" Although sound sources are distributed at various distances throughout a jet plume downstream of the nozzle exit, at great distances from the nozzle the sound will appear to emanate from a point and the inverse-square law can be properly applied. Examination of normalized sound spectra at different distances from a jet, from experiments and from computational tools, established the required minimum distance for valid far-field measurements of the sound from subsonic round jets. Experimental data were acquired in the Aeroacoustic Propulsion Laboratory at the NASA Glenn Research Center. The WIND computer program solved the Reynolds-Averaged Navier-Stokes equations for aerodynamic computations; the MGBK jet-noise prediction computer code was used to predict the sound pressure levels. Results from both the experiments and the analytical exercises indicated that while the shortest measurement arc (with radius approximately 8 nozzle diameters) was already in the geometric far field for high-frequency sound (Strouhal number >5), low-frequency sound (Strouhal number <0.2) reached the geometric far field at a measurement radius of at least 50 nozzle diameters because of its extended source distribution.

  9. On the sound insulation of acoustic metasurface using a sub-structuring approach

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Lu, Zhenbo; Cheng, Li; Cui, Fangsen

    2017-08-01

    The feasibility of using an acoustic metasurface (AMS) with acoustic stop-band property to realize sound insulation with ventilation function is investigated. An efficient numerical approach is proposed to evaluate its sound insulation performance. The AMS is excited by a reverberant sound source and the standardized sound reduction index (SRI) is numerically investigated. To facilitate the modeling, the coupling between the AMS and the adjacent acoustic fields is formulated using a sub-structuring approach. A modal based formulation is applied to both the source and receiving room, enabling an efficient calculation in the frequency range from 125 Hz to 2000 Hz. The sound pressures and the velocities at the interface are matched by using a transfer function relation based on "patches". For illustration purposes, numerical examples are investigated using the proposed approach. The unit cell constituting the AMS is constructed in the shape of a thin acoustic chamber with tailored inner structures, whose stop-band property is numerically analyzed and experimentally demonstrated. The AMS is shown to provide effective sound insulation of over 30 dB in the stop-band frequencies from 600 to 1600 Hz. It is also shown that the proposed approach has the potential to be applied to a broad range of AMS studies and optimization problems.
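
    The standardized sound reduction index referred to above follows the usual diffuse-field definition R = L1 - L2 + 10 log10(S/A); a small hedged sketch with invented room values is given below for orientation (it does not reproduce the paper's sub-structuring computation).

        # Standardized sound reduction index from room-averaged levels.
        import math

        def sound_reduction_index(L1, L2, S, V, T60):
            """L1/L2: average SPL in the source/receiving room [dB], S: test-partition
            area [m^2], V: receiving-room volume [m^3], T60: reverberation time [s]."""
            A = 0.161 * V / T60                  # equivalent absorption area (Sabine) [m^2]
            return L1 - L2 + 10 * math.log10(S / A)

        print(sound_reduction_index(L1=95.0, L2=62.0, S=10.0, V=60.0, T60=1.2))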

  10. Geoelectrical characterization by joint inversion of VES/TEM in Paraná basin, Brazil

    NASA Astrophysics Data System (ADS)

    Bortolozo, C. A.; Couto, M. A.; Almeida, E. R.; Porsani, J. L.; Santos, F. M.

    2012-12-01

    For many years, electrical (DC) and transient electromagnetic (TEM) soundings have been used in a great number of environmental, hydrological, and mining exploration studies. The data from both methods are usually interpreted with individual 1D models, which in many cases results in ambiguous models. This can be explained by how the two methodologies sample the subsurface. The vertical electrical sounding (VES) is good at resolving very resistive structures, while the transient electromagnetic sounding (TEM) is very sensitive to conductive structures. Another characteristic is that VES is more sensitive to shallow structures, while TEM soundings can reach deeper structures. A Matlab program for the joint inversion of VES and TEM soundings using the CRS algorithm was developed, aiming to exploit the strengths of both methods. Initially, the algorithm was tested with synthetic data, and afterwards it was used to invert experimental data from the Paraná sedimentary basin. We present the results of a re-interpretation of a data set of 46 VES/TEM soundings acquired in the Bebedouro region of São Paulo State, Brazil. The previous interpretation was based on geoelectrical models obtained by single inversion of the VES and TEM soundings. In this work we present results of the single inversion of the VES and TEM soundings with the Curupira program and a new interpretation based on the joint inversion of both methodologies. The goal is to increase the accuracy in determining the underground structures. As a result, a new geoelectrical model of the region is obtained.

  11. Speed of sound estimation for thermal monitoring using an active ultrasound element during liver ablation therapy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kim, Younsu; Audigier, Chloé; Dillow, Austin; Cheng, Alexis; Boctor, Emad M.

    2017-03-01

    Thermal monitoring for ablation therapy is in strong demand because healthy tissue must be preserved while malignant tissue is removed completely. Various methods have been investigated; however, radiation exposure, cost, and inconvenience hinder the use of X-ray or MRI methods. Because of its non-invasiveness and real-time capability, ultrasound is widely used in intraoperative procedures, and ultrasound thermal monitoring methods have been developed for affordable monitoring in real-time. We propose a new method for thermal monitoring using an ultrasound element. A lead zirconate titanate (PZT) element inserted into the liver tissue generates the ultrasound signal, and the one-way time of flight from the PZT element to the ultrasound transducer is recorded. We detect the change in the speed of sound caused by the increase in temperature during ablation therapy. We performed an ex vivo experiment with liver tissue to verify the feasibility of our speed of sound estimation technique. The time-of-flight information is used in an optimization method to recover the speed of sound maps during the ablation, which are then converted into temperature maps. The results show that the trend of the temperature changes matches the temperature measured at a single point. The estimation error can be decreased by using a proper curve linking the speed of sound to the temperature. The average error over time was less than 3 degrees Celsius for a bovine liver. The speed of sound estimation using a single PZT element can thus be used for thermal monitoring.
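
    A hedged sketch of the element-to-probe time-of-flight idea described above: a known path length and a measured one-way travel time give an average sound speed, which is mapped to temperature through a calibration curve. Both the path length and the calibration points below are hypothetical.

        # Time of flight -> sound speed -> temperature (all values illustrative).
        import numpy as np

        def speed_of_sound(path_length_m, time_of_flight_s):
            return path_length_m / time_of_flight_s

        # Assumed monotonic calibration of sound speed vs temperature;
        # the real curve must be measured for the tissue in question.
        c_cal = np.array([1570.0, 1580.0, 1590.0, 1595.0])   # m/s
        T_cal = np.array([37.0, 45.0, 55.0, 65.0])           # deg C

        c_meas = speed_of_sound(0.060, 3.80e-5)              # 60 mm path, 38.0 us
        T_est = np.interp(c_meas, c_cal, T_cal)
        print(f"c ~ {c_meas:.0f} m/s  ->  T ~ {T_est:.1f} deg C")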

  12. An Inexpensive and Versatile Version of Kundt's Tube for Measuring the Speed of Sound in Air

    NASA Astrophysics Data System (ADS)

    Papacosta, Pangratios; Linscheid, Nathan

    2016-01-01

    Experiments that measure the speed of sound in air are common in high schools and colleges. In the Kundt's tube experiment, a horizontal air column is adjusted until a resonance mode is achieved for a specific frequency of sound. When this happens, the cork dust in the tube is disturbed at the displacement antinode regions. The location of the displacement antinodes enables the measurement of the wavelength of the sound that is being used. This paper describes a design that uses a speaker instead of the traditional aluminum rod as the sound source. This allows the use of multiple sound frequencies that yield a much more accurate speed of sound in air.
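
    The data reduction for the speaker-driven Kundt's tube is simple enough to sketch: adjacent dust piles mark displacement antinodes half a wavelength apart, so v = f * lambda = 2 * f * spacing, and several drive frequencies can be averaged. The readings below are invented classroom-style numbers, not the authors' measurements.

        # Speed of sound from antinode spacing at several drive frequencies.
        measurements = [          # (drive frequency [Hz], mean antinode spacing [m])
            (600.0, 0.286),
            (800.0, 0.215),
            (1000.0, 0.172),
        ]
        speeds = [2 * f * dx for f, dx in measurements]
        print("speed of sound ~", sum(speeds) / len(speeds), "m/s")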

  13. Nonlinear wave fronts and ionospheric irregularities observed by HF sounding over a powerful acoustic source

    NASA Astrophysics Data System (ADS)

    Blanc, Elisabeth; Rickel, Dwight

    1989-06-01

    Different wave fronts affected by significant nonlinearities have been observed in the ionosphere by a pulsed HF sounding experiment at a distance of 38 km from the source point of a 4800-kg ammonium nitrate and fuel oil (ANFO) explosion on the ground. These wave fronts are revealed by partial reflections of the radio sounding waves. A small-scale irregular structure was generated by the first wave front at the level of a sporadic E layer which characterized the ionosphere at the time of the experiment. The time scale of these fluctuations is about 1 to 2 s; their lifetime is about 2 min. Similar irregularities were also observed at the level of a second wave front in the F region. This structure also appears as diffusion on a continuous-wave sounding at horizontal distances of the order of 200 km from the source. In contrast, a third front unaffected by irregularities may originate from the lowest layers of the ionosphere or from a supersonic wave front propagating at the base of the thermosphere. The origin of these structures is discussed.

  14. Transmission and scattering of acoustic energy in turbulent flows

    NASA Astrophysics Data System (ADS)

    Gaitonde, Datta; Unnikrishnan, S.

    2017-11-01

    Sound scattering and transmission in turbulent jets are explored through a control volume analysis of a Large-Eddy Simulation. The fluctuating momentum flux across any control surface is first split into its rotational turbulent component (ρu)'_H and its irrotational-isentropic acoustic component (ρu)'_A using momentum potential theory (MPT). The former has low spatio-temporal coherence, while the latter exhibits a persistent wavepacket form. The energy variable, specifically the total fluctuating enthalpy, is also split into its turbulent and acoustic modes, H'_H and H'_A respectively. Scattering of acoustic energy is then (ρu)'_H H'_A, and transmission is (ρu)'_A H'_A. This facilitates a quantitative comparison of scattering versus transmission in the presence of acoustic energy sources, also obtained from MPT, in any turbulent scenario. The wavepacket converts stochastic sound sources into coherent sound radiation. Turbulent eddies are not only sources of sound, but also play a strong role in scattering, particularly near the lipline. The net acoustic flux from the jet is the transport of H'_A by the wavepacket, whose axisymmetric and higher azimuthal modes contribute to downstream and sideline radiation respectively.

  15. An acoustic glottal source for vocal tract physical models

    NASA Astrophysics Data System (ADS)

    Hannukainen, Antti; Kuortti, Juha; Malinen, Jarmo; Ojalammi, Antti

    2017-11-01

    A sound source is proposed for the acoustic measurement of physical models of the human vocal tract. The physical models are produced by fast prototyping, based on magnetic resonance imaging during prolonged vowel production. The sound source, accompanied by custom signal processing algorithms, is used for two kinds of measurements from physical models of the vocal tract: (i) amplitude frequency response and resonant frequency measurements, and (ii) signal reconstructions at the source output according to a target pressure waveform with measurements at the mouth position. The proposed source and the software are validated by computational acoustics experiments and measurements on a physical model of the vocal tract corresponding to the vowels [] of a male speaker.

  16. Sound transmission in ducts containing nearly choked flows

    NASA Technical Reports Server (NTRS)

    Callegari, A. J.; Myers, M. K.

    1979-01-01

    The nonlinear theory previously developed by the authors (1977, 1978) is used to obtain numerical results for sound transmission through a nearly choked throat in a variable-area duct. Parametric studies are performed for different source locations, strengths and frequencies. It is shown that the nonlinear interactions in the throat region generate superharmonics of the fundamental (source) frequency throughout the duct. The amplitudes of these superharmonics increase as the source parameters (frequency and strength) are increased toward values leading to acoustic shocks. For a downstream source, superharmonics carry about 20% of the total acoustic power as shocking conditions are approached. For the source strength levels and frequencies considered, streaming effects are negligible.

  17. Emission of Sound from Turbulence Convected by a Parallel Mean Flow in the Presence of a Confining Duct

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.; Leib, Stewart J.

    1999-01-01

    An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in a transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.

  18. Emission of Sound From Turbulence Convected by a Parallel Mean Flow in the Presence of a Confining Duct

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.; Leib, Stewart J.

    1999-01-01

    An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.

  19. Translation of an Object Using Phase-Controlled Sound Sources in Acoustic Levitation

    NASA Astrophysics Data System (ADS)

    Matsui, Takayasu; Ohdaira, Etsuzo; Masuzawa, Nobuyoshi; Ide, Masao

    1995-05-01

    Acoustic levitation is used for positioning materials in the development of new materials in space, where there is no gravity. The technique is applicable to materials for which electromagnetic force cannot be used. If the levitation point can be controlled freely, the range of possible applications is extended. In this paper we report an experimental study on controlling the levitation point of an object in an acoustic levitation system. The system fabricated and tested in this study has two sound sources with vibrating plates facing each other. Translation of the object can be achieved by controlling the phase of the energizing electrical signal for one of the sound sources. It was found that the levitation point can be moved smoothly in proportion to the phase difference between the vibrating plates.
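
    A hedged sketch of the phase-to-displacement relation suggested by this setup: for two opposed sources, shifting the drive phase of one source by delta_phi translates the standing-wave pattern (and the trapped object) by delta_phi * lambda / (4 * pi), i.e. half a wavelength per full 2*pi of phase. The frequency and phase step below are illustrative only.

        # Object displacement per phase step in a two-source levitator.
        import math

        def levitation_shift(delta_phi_rad, frequency_hz, c=343.0):
            wavelength = c / frequency_hz
            return delta_phi_rad * wavelength / (4 * math.pi)

        # e.g. a 90-degree phase step at 20 kHz moves the object by ~2.1 mm
        print(levitation_shift(math.pi / 2, 20e3) * 1e3, "mm")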

  20. Acoustic design by topology optimization

    NASA Astrophysics Data System (ADS)

    Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole

    2008-11-01

    Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to the design of outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. Reductions of up to 10 dB for a single barrier and almost 30 dB for two barriers are achieved compared with conventional sound barriers.
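
    To make the interpolation idea concrete, the hedged Python sketch below blends air and a reflecting solid through the inverse density and inverse bulk modulus that enter the Helmholtz equation, with the squared pressure amplitude as the objective; the linear interpolation used here is a simple illustrative choice and not necessarily the paper's exact functions.

        # Continuous material interpolation for acoustic topology optimization.
        import numpy as np

        rho_air, kappa_air = 1.2, 1.42e5          # air: density [kg/m^3], bulk modulus [Pa]
        rho_mat, kappa_mat = 2700.0, 6.9e10       # reflecting material (aluminium-like)

        def interpolate(xi):
            """Blend material properties for a design variable xi in [0, 1]
            (0 = air, 1 = solid) through the inverse quantities that enter
            the Helmholtz equation."""
            inv_rho = (1 - xi) / rho_air + xi / rho_mat
            inv_kappa = (1 - xi) / kappa_air + xi / kappa_mat
            return 1.0 / inv_rho, 1.0 / inv_kappa

        def objective(p_target):
            """Squared sound pressure amplitude summed over the quiet zone."""
            return float(np.sum(np.abs(p_target) ** 2))

        for xi in (0.0, 0.5, 1.0):
            print(xi, interpolate(xi))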

  1. An Auditory Illusion of Proximity of the Source Induced by Sonic Crystals

    PubMed Central

    Spiousas, Ignacio; Etchemendy, Pablo E.; Vergara, Ramiro O.; Calcagno, Esteban R.; Eguia, Manuel C.

    2015-01-01

    In this work we report an illusion of proximity of a sound source created by a sonic crystal placed between the source and a listener. This effect seems, at first, paradoxical to naïve listeners since the sonic crystal is an obstacle formed by almost densely packed cylindrical scatterers. Even though the singular acoustical properties of these periodic composite materials have been studied extensively (including band gaps, deaf bands, negative refraction, and birefringence), the possible perceptual effects remain unexplored. The illusion reported here is studied through acoustical measurements and a psychophysical experiment. The results of the acoustical measurements showed that, for a certain frequency range and region in space where the focusing phenomenon takes place, the sonic crystal induces substantial increases in binaural intensity, direct-to-reverberant energy ratio and interaural cross-correlation values, all cues involved in the auditory perception of distance. Consistently, the results of the psychophysical experiment revealed that the presence of the sonic crystal between the sound source and the listener produces a significant reduction of the perceived relative distance to the sound source. PMID:26222281

  2. An Auditory Illusion of Proximity of the Source Induced by Sonic Crystals.

    PubMed

    Spiousas, Ignacio; Etchemendy, Pablo E; Vergara, Ramiro O; Calcagno, Esteban R; Eguia, Manuel C

    2015-01-01

    In this work we report an illusion of proximity of a sound source created by a sonic crystal placed between the source and a listener. This effect seems, at first, paradoxical to naïve listeners since the sonic crystal is an obstacle formed by almost densely packed cylindrical scatterers. Even though the singular acoustical properties of these periodic composite materials have been studied extensively (including band gaps, deaf bands, negative refraction, and birefringence), the possible perceptual effects remain unexplored. The illusion reported here is studied through acoustical measurements and a psychophysical experiment. The results of the acoustical measurements showed that, for a certain frequency range and region in space where the focusing phenomenon takes place, the sonic crystal induces substantial increases in binaural intensity, direct-to-reverberant energy ratio and interaural cross-correlation values, all cues involved in the auditory perception of distance. Consistently, the results of the psychophysical experiment revealed that the presence of the sonic crystal between the sound source and the listener produces a significant reduction of the perceived relative distance to the sound source.

  3. Monaural Sound Localization Revisited

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1997-01-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  4. Monaural sound localization revisited.

    PubMed

    Wightman, F L; Kistler, D J

    1997-02-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  5. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    PubMed Central

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition. PMID:26388721

  6. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    PubMed

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition.
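
    A hedged sketch of the temporal-coherence principle described above: channels whose envelopes are strongly positively correlated with an attended "anchor" channel are grouped into the foreground stream, while uncorrelated or anti-correlated channels are assigned to the background. The filterbank envelopes, anchor choice, and threshold below are illustrative stand-ins for the FPGA pipeline.

        # Toy temporal-coherence stream segregation via envelope correlation.
        import numpy as np

        rng = np.random.default_rng(3)
        n_channels, n_frames = 32, 500

        target_env = np.abs(np.sin(2 * np.pi * 4 * np.linspace(0, 1, n_frames)))  # 4 Hz modulation
        noise_env = rng.random(n_frames)

        envelopes = np.empty((n_channels, n_frames))
        target_channels = set(range(8, 16))          # channels dominated by the target
        for ch in range(n_channels):
            base = target_env if ch in target_channels else noise_env
            envelopes[ch] = base + 0.2 * rng.random(n_frames)

        anchor = 12                                   # attended channel (assumed known)
        corr = np.array([np.corrcoef(envelopes[anchor], envelopes[ch])[0, 1]
                         for ch in range(n_channels)])
        mask = corr > 0.5                             # binary channel mask for resynthesis
        print("channels assigned to the target stream:", np.where(mask)[0])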

  7. Human-assisted sound event recognition for home service robots.

    PubMed

    Do, Ha Manh; Sheng, Weihua; Liu, Meiqin

    This paper proposes and implements an open framework of active auditory learning for a home service robot serving the elderly living alone at home. The framework was developed to realize various auditory perception capabilities while enabling a remote human operator to be involved in the sound event recognition process for elderly care. The home service robot is able to estimate the sound source position and collaborate with the human operator in sound event recognition while protecting the privacy of the elderly. Our experimental results validated the proposed framework and evaluated its auditory perception capabilities and human-robot collaboration in sound event recognition.

  8. Mapping Underwater Sound in the Dutch Part of the North Sea.

    PubMed

    Sertlek, H Özkan; Aarts, Geert; Brasseur, Sophie; Slabbekoorn, Hans; ten Cate, Carel; von Benda-Beckmann, Alexander M; Ainslie, Michael A

    2016-01-01

    The European Union requires member states to achieve or maintain good environmental status for their marine territorial waters and explicitly mentions potentially adverse effects of underwater sound. In this study, we focused on producing maps of underwater sound from various natural and anthropogenic origins in the Dutch North Sea. The source properties and sound propagation are simulated by mathematical methods. These maps could be used to assess and predict large-scale effects on behavior and distribution of underwater marine life and therefore become a valuable tool in assessing and managing the impact of underwater sound on marine life.

  9. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    ERIC Educational Resources Information Center

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  10. Room temperature acoustic transducers for high-temperature thermometry

    NASA Astrophysics Data System (ADS)

    Ripple, D. C.; Murdock, W. E.; Strouse, G. F.; Gillis, K. A.; Moldover, M. R.

    2013-09-01

    We have successfully conducted highly-accurate, primary acoustic thermometry at 600 K using a sound source and a sound detector located outside the thermostat, at room temperature. We describe the source, the detector, and the ducts that connected them to our cavity resonator. This transducer system preserved the purity of the argon gas, generated small, predictable perturbations to the acoustic resonance frequencies, and can be used well above 600 K.
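
    The relation behind primary acoustic thermometry can be sketched in a few lines: for a monatomic ideal gas such as argon, c^2 = gamma * R * T / M, so a resonance-derived sound speed gives the thermodynamic temperature. Real determinations include virial and boundary-layer corrections that are omitted in this hedged example.

        # Ideal-gas temperature from the measured speed of sound in argon.
        GAMMA_AR = 5.0 / 3.0          # ratio of specific heats, monatomic gas
        M_AR = 39.948e-3              # molar mass of argon [kg/mol]
        R = 8.314462618               # molar gas constant [J/(mol K)]

        def temperature_from_sound_speed(c):
            return c ** 2 * M_AR / (GAMMA_AR * R)

        print(temperature_from_sound_speed(456.2))   # ~ 600 K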

  11. Modeling effectiveness of gradual increases in source level to mitigate effects of sonar on marine mammals.

    PubMed

    Von Benda-Beckmann, Alexander M; Wensveen, Paul J; Kvadsheim, Petter H; Lam, Frans-Peter A; Miller, Patrick J O; Tyack, Peter L; Ainslie, Michael A

    2014-02-01

    Ramp-up or soft-start procedures (i.e., a gradual increase in the source level) are used to mitigate the effect of sonar sound on marine mammals, although no one to date has tested whether such procedures are effective at reducing this effect. We investigated the effectiveness of ramp-up procedures in reducing the area within which changes in hearing thresholds can occur. We modeled the sound levels that killer whales (Orcinus orca) were exposed to from a generic sonar operation preceded by different ramp-up schemes. In our model, ramp-up procedures reduced the risk of killer whales receiving sounds of sufficient intensity to affect their hearing. The effectiveness of the ramp-up procedure depended strongly on the assumed response threshold and differed with ramp-up duration, although extending the ramp-up duration beyond 5 min did not add much to its predicted mitigating effect. The main factors that limited the effectiveness of ramp-up in a typical antisubmarine warfare scenario were the high source level, the rapidly moving sonar source, and the long silences between consecutive sonar transmissions. Our exposure modeling approach can be used to evaluate and optimize mitigation procedures. © 2013 Society for Conservation Biology.

  12. Expansions for infinite or finite plane circular time-reversal mirrors and acoustic curtains for wave-field-synthesis.

    PubMed

    Mellow, Tim; Kärkkäinen, Leo

    2014-03-01

    An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.
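
    A hedged numerical sketch of the time-reversal-mirror principle behind this work: conjugating the signals recorded on a plane mirror and re-radiating them makes the contributions add in phase only at the original source position, so the back-propagated field focuses there. The brute-force monopole sum below is only an illustration and does not reproduce the paper's analytic expansions.

        # Time-reversal focusing with a discretized plane mirror (illustrative).
        import numpy as np

        k = 2 * np.pi / 0.34                       # wavenumber at roughly 1 kHz in air
        rng = np.random.default_rng(4)
        mirror = np.column_stack([rng.uniform(-1, 1, 300),
                                  rng.uniform(-1, 1, 300),
                                  np.zeros(300)])  # random samples of a plane mirror at z = 0
        src = np.array([0.0, 0.0, -1.5])           # on-axis point source behind the mirror

        def g(r):                                  # free-space Green's function exp(ikr)/(4 pi r)
            return np.exp(1j * k * r) / (4 * np.pi * r)

        recorded = g(np.linalg.norm(mirror - src, axis=1))

        def backpropagated(x):
            """Field obtained by re-emitting the conjugated recordings."""
            r = np.linalg.norm(mirror - x, axis=1)
            return np.sum(np.conj(recorded) * g(r))

        on_source = abs(backpropagated(src))
        off_source = abs(backpropagated(src + np.array([0.5, 0.0, 0.0])))
        print("focus contrast (on/off source):", on_source / off_source)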

  13. On the Locality of Transient Electromagnetic Soundings with a Single-Loop Configuration

    NASA Astrophysics Data System (ADS)

    Barsukov, P. O.; Fainberg, E. B.

    2018-03-01

    The possibility of reconstructing two-dimensional (2D) cross sections from profile soundings made with the transient electromagnetic method (TEM) using a single ungrounded loop is illustrated on three-dimensional (3D) models. The reconstruction includes three main steps: transformation of the responses into the depth dependence of resistivity ρ(h) measured along the profile, with subsequent stitching into a 2D pseudo-section; point-by-point one-dimensional (1D) inversion of the responses with a starting model constructed from the transformations; and correction of the 2D cross section using 2.5-dimensional (2.5D) block inversion. It is shown that single-loop TEM soundings allow the geological medium to be studied within a local domain whose lateral dimensions are commensurate with the depth of investigation. The structure of the medium beyond this domain has only an insignificant effect on the sounding results. This locality enables the TEM method to reconstruct the geoelectrical structure of the medium in 2D cross sections with minimal distortions caused by the lack of information beyond the profile of the transient response measurements.

  14. Onomatopeya, Derivacion y el Sufijo -azo. (Onomatopeia, Derivation, and the Suffix -azo).

    ERIC Educational Resources Information Center

    Corro, Raymond L.

    1985-01-01

    The nature and source of onomatopeic words in Spanish are discussed in order of decreasing resemblance to the sound imitated. The first group of onomatopeic words are the interjections, in which sound effects and animal sounds are expressed. Repetition is often used to enhance the effect. The second group includes verbs and nouns derived from the…

  15. The Sounds of English and Italian, a Systematic Analysis of the Contrasts between the Sound Systems. Contrastive Structure Series.

    ERIC Educational Resources Information Center

    Agard, Frederick B.; Di Pietro, Robert J.

    Designed as a source of information for professionals preparing instructional materials, planning courses, or developing classroom techniques for foreign language programs, a series of studies has been prepared that contrasts, in two volumes for each of the five most commonly taught foreign languages in the United States, the sound and grammatical…

  16. Sound. Physical Science in Action[TM]. Schlessinger Science Library. [Videotape].

    ERIC Educational Resources Information Center

    2000

    A door closes. A horn beeps. A crowd roars. Sound waves travel outward in all directions from the source. They can all be heard, but how? Did they travel directly to the ears? Perhaps they bounced off another object first or traveled through a different medium, changing speed along the way. Students learn how sound waves travel and about their…

  17. Direct Measurement of the Speed of Sound Using a Microphone and a Speaker

    ERIC Educational Resources Information Center

    Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.

    2014-01-01

    We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
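
    The analysis implied above can be sketched as a linear fit of distance against time of flight, whose slope is the speed of sound; the (distance, time) pairs below are invented example readings, not the authors' data.

        # Speed of sound from a linear fit of distance vs time of flight.
        import numpy as np

        distance = np.array([0.50, 1.00, 1.50, 2.00, 2.50])                 # metres
        tof = np.array([1.47e-3, 2.93e-3, 4.38e-3, 5.84e-3, 7.30e-3])       # seconds

        slope, intercept = np.polyfit(tof, distance, 1)
        print(f"speed of sound ~ {slope:.1f} m/s (intercept {intercept * 1000:.2f} mm)")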

  18. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    PubMed Central

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues, characteristic to reverberant speech. This stimulus, named amplitude modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161

  19. Method and Apparatus for Characterizing Pressure Sensors using Modulated Light Beam Pressure

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C. (Inventor)

    2003-01-01

    Embodiments of apparatuses and methods are provided that use light sources instead of sound sources for characterizing and calibrating sensors for measuring small pressures to mitigate many of the problems with using sound sources. In one embodiment an apparatus has a light source for directing a beam of light on a sensing surface of a pressure sensor for exerting a force on the sensing surface. The pressure sensor generates an electrical signal indicative of the force exerted on the sensing surface. A modulator modulates the beam of light. A signal processor is electrically coupled to the pressure sensor for receiving the electrical signal.
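
    For orientation, the photon-pressure force such a setup relies on can be estimated from F = (1 + R) * P / c for a beam of optical power P hitting a surface of reflectivity R at normal incidence; the sketch below uses an arbitrary 1 W example value and is not taken from the patent.

        # Order-of-magnitude radiation-pressure force on a sensing surface.
        C = 299_792_458.0             # speed of light [m/s]

        def radiation_force(power_w, reflectivity=1.0):
            return (1.0 + reflectivity) * power_w / C

        print(radiation_force(1.0))   # ~ 6.7 nN for a fully reflecting surface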

  20. Recent advances concerning an understanding of sound transmission through engine nozzles and jets

    NASA Technical Reports Server (NTRS)

    Bechert, D.; Michel, U.; Pfizenmaier, E.

    1978-01-01

    Experiments on the interaction between a turbulent jet and pure-tone sound coming from inside the jet nozzle are reported. This is a model representing the sound transmission from sound sources in jet engines through the nozzle and the jet flow into the far field. It is shown that pure-tone sound at low frequencies is considerably attenuated by the jet flow, whereas it is conserved at higher frequencies. On the other hand, broadband jet noise can be amplified considerably by pure-tone excitation. The two effects do not appear to be interdependent. Knowledge of how they are created and of the relevant parameter dependences allows new considerations for the development of sound attenuators.
