Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
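The runtime step this abstract describes reduces to a linear combination: the field at the listener is a weighted sum of precomputed per-SH-source fields. A minimal numeric sketch of that step (array shapes and values are invented for illustration, not taken from the paper):

```python
import numpy as np

# Illustrative sketch: at runtime, the sound field at the listener is a
# weighted sum of precomputed fields, one per elementary spherical harmonic
# (SH) source. Shapes and values here are stand-ins, not the paper's data.
rng = np.random.default_rng(0)

n_sh = 9        # SH terms up to order 2: (2 + 1)**2 = 9
n_freq = 4      # frequency bins in each precomputed field

# Precomputed complex sound fields at the listener (one row per SH source)
fields = rng.standard_normal((n_sh, n_freq)) + 1j * rng.standard_normal((n_sh, n_freq))

# Runtime SH decomposition of the current source directivity
coeffs = rng.standard_normal(n_sh)

# Total field at the listener: weighted sum of the per-SH-source fields
total = coeffs @ fields
```

The cost at runtime is a single matrix-vector product, which is what makes the precomputation approach interactive.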
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene
Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling
2016-01-01
A flexible sound source is essential in a whole flexible system. It’s hard to integrate a conventional sound source based on a piezoelectric part into a whole flexible system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applied in consumer electronics, multi-media systems, and ultrasonic detection and imaging. PMID:28335239
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2002-11-01
It is very important to capture distant-talking speech for a hands-free speech interface with high quality. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as ''speech'' or ''non-speech'' by the proposed algorithm. [Work supported by ATR, and MEXT of Japan.]
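The CSP coefficient underlying the DOA step is the inverse transform of the phase-only cross-power spectrum, whose peak gives the inter-channel delay. A toy two-channel sketch of that coefficient (my illustration, not the authors' implementation, and assuming a single integer sample delay):

```python
import numpy as np

def csp_delay(x1, x2):
    """Estimate the delay (in samples) of x2 relative to x1 from the peak of
    the CSP (phase-only cross-power spectrum) coefficient."""
    n = len(x1)
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    cross = X2 * np.conj(X1)
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    corr = np.fft.irfft(cross, n=n)          # CSP coefficient over lags
    lag = int(np.argmax(corr))
    return lag - n if lag > n // 2 else lag  # wrap to a signed lag

# Synthetic check: x2 is x1 circularly delayed by 5 samples
rng = np.random.default_rng(1)
x1 = rng.standard_normal(512)
x2 = np.roll(x1, 5)
print(csp_delay(x1, x2))  # 5
```

In the paper's method, coefficients like this are computed for many microphone pairs and added, which sharpens the peaks for multiple simultaneous sources.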
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources, the spatial sound is constructed at the selected point by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as the virtual source representation is confirmed.
Active room compensation for sound reinforcement using sound field separation techniques.
Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena
2018-03-01
This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
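The separation into incident and reflected components can be illustrated in a deliberately simplified 1D, single-frequency form (my toy version, not any of the paper's three array techniques): with pressures at two points, the two plane-wave amplitudes follow from a 2x2 solve.

```python
import numpy as np

# 1D sketch of sound field separation: near a reflecting boundary, model the
# complex pressure as p(x) = A*exp(-i*k*x) + B*exp(+i*k*x) (incident A,
# reflected B) and solve for A, B from two microphone pressures.
# Geometry and amplitudes are invented for the sketch.
k = 2 * np.pi * 50 / 343.0              # wavenumber at 50 Hz, c = 343 m/s
x1, x2 = 0.0, 0.1                       # microphone positions (m)
A_true, B_true = 1.0 + 0.5j, 0.3 - 0.2j # incident / reflected amplitudes

def pressure(x):
    return A_true * np.exp(-1j * k * x) + B_true * np.exp(1j * k * x)

M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
              [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
A_est, B_est = np.linalg.solve(M, np.array([pressure(x1), pressure(x2)]))
```

Once the reflected component B is isolated, the secondary sources can be driven to cancel it, which is the feed-forward control idea of the paper.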
The silent base flow and the sound sources in a laminar jet.
Sinayoko, Samuel; Agarwal, Anurag
2012-03-01
An algorithm to compute the silent base flow sources of sound in a jet is introduced. The algorithm is based on spatiotemporal filtering of the flow field and is applicable to multifrequency sources. It is applied to an axisymmetric laminar jet and the resulting sources are validated successfully. The sources are compared to those obtained from two classical acoustic analogies, based on quiescent and time-averaged base flows. The comparison demonstrates how the silent base flow sources shed light on the sound generation process. It is shown that the dominant source mechanism in the axisymmetric laminar jet is "shear-noise," which is a linear mechanism. The algorithm presented here could be applied to fully turbulent flows to understand the aerodynamic noise-generation mechanism. © 2012 Acoustical Society of America
A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound
NASA Technical Reports Server (NTRS)
Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)
1996-01-01
The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic-analogy-based calculation which regards the surface shear as an acoustically compact dipole source of sound.
Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie
2017-01-01
Spherical microphone arrays have been paid increasing attention for their ability to locate a sound source with arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. The reconstruction surface and holography surface are conformal surfaces in the conventional sound field transformation based on the generalized Fourier transform. When the sound source is on a cylindrical surface, it is difficult to locate by using the spherical conformal transform. This paper proposes a non-conformal sound field transformation that constructs a transfer matrix based on spherical harmonic wave decomposition, which achieves the transformation from a spherical surface to a cylindrical surface using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, an experiment on sound source localization using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal sound field transformation from a spherical surface to a cylindrical surface is realized by the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065
Efficient techniques for wave-based sound propagation in interactive applications
NASA Astrophysics Data System (ADS)
Mehra, Ravish
Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. 
This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity function at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to the CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on user's immersion in virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustics effects and spatial audio in the virtual environment.
Sound reduction by metamaterial-based acoustic enclosure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Shanshan; Li, Pei; Zhou, Xiaoming
In many practical systems, acoustic radiation control on noise sources contained within a finite volume by an acoustic enclosure is of great importance, but difficult to accomplish at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties by either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for blockage of sound at low frequencies.
Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao
2017-10-01
A unified framework is proposed for analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and Tikhonov regularization is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model, as required in the pressure-matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
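The pressure-matching step with Tikhonov regularization amounts to a regularized least-squares solve for the loudspeaker driving signals. A bare-bones sketch (the transfer matrix H and target pressures are random stand-ins; the experiment used measured room responses and a 32-element loudspeaker array):

```python
import numpy as np

# Sketch of pressure matching: find loudspeaker weights w minimizing
# ||H w - p_target||^2 + beta ||w||^2 at one frequency. H maps loudspeaker
# signals to pressures at control points; here it is simulated, not measured.
rng = np.random.default_rng(5)
n_ctrl, n_spk = 24, 32                   # control points, loudspeakers

H = rng.standard_normal((n_ctrl, n_spk)) + 1j * rng.standard_normal((n_ctrl, n_spk))
p_target = rng.standard_normal(n_ctrl) + 1j * rng.standard_normal(n_ctrl)

beta = 1e-3                              # Tikhonov regularization parameter
w = np.linalg.solve(H.conj().T @ H + beta * np.eye(n_spk), H.conj().T @ p_target)
p_reproduced = H @ w
```

Larger beta trades reproduction accuracy for smaller (more robust) driving signals, which is why the abstract stresses that the choice of regularization parameters is vital.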
Localizing the sources of two independent noises: Role of time varying amplitude differences
Yost, William A.; Brown, Christopher A.
2013-01-01
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
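The hypothesis above can be caricatured in a few lines: in time-frequency regions where one source is much more intense than the other, interaural cues approximate those of the dominant source alone. A toy sketch that merely marks such dominance regions (spectrogram magnitudes are random stand-ins, and the 6 dB criterion is my assumption):

```python
import numpy as np

# Toy illustration of the dominance hypothesis: label each time-frequency bin
# by which source exceeds the other by more than 6 dB. Magnitudes are random
# placeholders for |STFT| values; no real binaural processing is modeled.
rng = np.random.default_rng(1)
S1 = np.abs(rng.standard_normal((64, 32)))   # |STFT| of source 1 (freq x time)
S2 = np.abs(rng.standard_normal((64, 32)))   # |STFT| of source 2

dominance_db = 20 * np.log10(S1 / (S2 + 1e-12))
bins_source1 = dominance_db > 6.0            # source 1 dominates by > 6 dB
bins_source2 = dominance_db < -6.0           # source 2 dominates by > 6 dB
```

Under the hypothesis, interaural cues pooled only over `bins_source1` would yield a reliable location estimate for source 1, and likewise for source 2.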
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
Development of an ICT-Based Air Column Resonance Learning Media
NASA Astrophysics Data System (ADS)
Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut
2016-08-01
Commonly, the sound source used in the air column resonance experiment is a tuning fork, which has the disadvantage of suboptimal resonance results because the sound it produces grows progressively weaker. In this study we made tones of varying frequency using the Audacity software, which were then stored in a mobile phone serving as the sound source. One advantage of this sound source is the stability of the resulting sound, which remains equally strong throughout the experiment. The movement of water in a glass tube mounted on the resonance apparatus and the tone emitted by the mobile phone were recorded using a video camera. The first, second, and third resonances were recorded for each tone frequency. The resulting sound lasts longer, so it can be used for the first, second, third, and subsequent resonance measurements. This study aimed to (1) explain how to create tones that can substitute for the tuning fork sound used in air column resonance experiments, (2) illustrate the sound waves that occurred at the first, second, and third resonances in the experiment, and (3) determine the speed of sound in air. This study used an experimental method. It was concluded that (1) substitute tones for a tuning fork sound can be made using the Audacity software; (2) the form of the sound waves that occurred at the first, second, and third resonances in the air column can be drawn based on the video recordings of the air column resonance; and (3) based on the experiment result, the speed of sound in air is 346.5 m/s, while based on the chart analysis with the Logger Pro software, the speed of sound in air is 343.9 ± 0.3171 m/s.
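The speed-of-sound calculation in such a resonance-tube experiment is a one-liner: successive resonance lengths of a tube closed at the water surface differ by half a wavelength, so v = 2 f (L2 - L1). A worked example with illustrative numbers chosen to reproduce the abstract's 346.5 m/s (the actual frequencies and lengths used in the study are not given here):

```python
# Worked example of the resonance-tube analysis (numbers are assumed, chosen
# to reproduce the reported result): for a tube closed at one end, successive
# resonance lengths satisfy L2 - L1 = lambda / 2, hence v = 2 * f * (L2 - L1).
f = 495.0               # tone frequency in Hz (assumed)
L1, L2 = 0.175, 0.525   # first and second resonance lengths in m (assumed)

lam = 2 * (L2 - L1)     # wavelength in m
v = f * lam             # speed of sound in m/s
print(v)                # 346.5
```

Using length differences rather than L1 alone also cancels the end correction of the tube, which is why this form is the standard analysis.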
Series expansions of rotating two and three dimensional sound fields.
Poletti, M A
2010-12-01
The cylindrical and spherical harmonic expansions of oscillating sound fields rotating at a constant rate are derived. These expansions are a generalized form of the stationary sound field expansions. The derivations are based on the representation of interior and exterior sound fields using the simple source approach and determination of the simple source solutions with uniform rotation. Numerical simulations of rotating sound fields are presented to verify the theory.
The effect of brain lesions on sound localization in complex acoustic environments.
Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg
2014-05-01
Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field was directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.
Sound source localization method in an environment with flow based on Amiet-IMACS
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin
2017-05-01
A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment demonstrates that Amiet-IMACS localizes the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
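The equivalent source model underlying this separation method can be sketched in its simplest frequency-domain form (the paper's ITDESM is time-domain and iterative; this only shows the underlying linear model, with invented geometry): microphone pressures are G q for a matrix G of free-field Green's functions, and the strengths q are recovered by inverting that system.

```python
import numpy as np

# Frequency-domain sketch of the equivalent source model: pressures p at the
# microphones satisfy p = G q, where G holds free-field Green's functions from
# assumed equivalent source positions. Geometry and values are illustrative.
rng = np.random.default_rng(2)
k = 2 * np.pi * 500 / 343.0                   # wavenumber at 500 Hz

mics = rng.uniform(0.6, 1.2, size=(8, 3))     # microphone positions (m)
eq_src = rng.uniform(-0.3, 0.3, size=(4, 3))  # equivalent source positions (m)

r = np.linalg.norm(mics[:, None, :] - eq_src[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)     # free-field Green's functions

q_true = rng.standard_normal(4) + 1j * rng.standard_normal(4)
p = G @ q_true                                # synthetic measured pressures

# Noiseless synthetic data, so plain least squares recovers the strengths
# (real measurements would need regularization, as in the paper's solver)
q_est, *_ = np.linalg.lstsq(G, p, rcond=None)
```

Once strengths are attributed to the equivalent sources of one physical source, the pressure field of that source alone is just G restricted to those sources times their strengths, which is the separation idea of the paper.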
Localization of sound sources in a room with one microphone
NASA Astrophysics Data System (ADS)
Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre
2017-08-01
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the differences between the signals received at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in a room that are occupied by a source. What is especially interesting about our solution is that we provide localization of the sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
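The sparsity idea can be reduced to a dictionary-matching toy: each candidate voxel contributes a known transfer function (a column of the sensing matrix), and with a single active voxel the occupied one is the column best correlated with the observation. A minimal sketch (random columns stand in for the room transfer functions; the paper's sensing matrix is built from the actual room geometry):

```python
import numpy as np

# Toy sketch of single-microphone sparse localization: columns of D are
# simulated transfer functions, one per candidate voxel; the observation comes
# from one active voxel, recovered by maximum correlation (1-sparse recovery).
rng = np.random.default_rng(3)
n_samples, n_voxels = 256, 40

D = rng.standard_normal((n_samples, n_voxels))   # simulated voxel dictionary
D /= np.linalg.norm(D, axis=0)                   # unit-norm columns

true_voxel = 17
y = 2.0 * D[:, true_voxel]                       # observation from one source

est_voxel = int(np.argmax(np.abs(D.T @ y)))      # best-matching voxel
print(est_voxel)  # 17
```

This is one step of matching pursuit; recovering several simultaneous sources or working with noisy observations requires the fuller compressive-sensing machinery the paper discusses.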
Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V
2006-02-01
To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 microPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 microPa) of the sounds were calculated, using a guideline sound propagation model for shallow water. 
The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.
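The mapping from source level to discomfort range can be illustrated with a deliberately crude propagation model. This sketch assumes simple spherical spreading, SPL(r) = SL - 20 log10(r / 1 m), which is my stand-in and does not reproduce the guideline shallow-water model the study actually used:

```python
# Back-of-envelope sketch only: range at which the received level falls to the
# 107 dB re 1 uPa discomfort threshold, under an ASSUMED spherical-spreading
# model SPL(r) = SL - 20*log10(r). The study used a shallow-water guideline
# model instead, so these ranges are illustrative, not the study's values.
threshold = 107.0   # discomfort threshold, dB re 1 uPa

def discomfort_range(source_level):
    """Range (m) at which received level equals the threshold."""
    return 10 ** ((source_level - threshold) / 20.0)

for sl in (130.0, 150.0, 180.0):
    print(sl, round(discomfort_range(sl), 1))
```

The point of the exercise matches the abstract's conclusion: the discomfort zone grows steeply with source level, so source levels must be chosen per area against its propagation characteristics.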
NASA Astrophysics Data System (ADS)
Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.
2016-08-01
Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction: confined spaces, the need for invisible sound sources, and a very specific acoustic environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.
Sound source tracking device for telematic spatial sound field reproduction
NASA Astrophysics Data System (ADS)
Cardenas, Bruno
This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of an array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to spatially reproduce their voices, recorded at close distance with lavalier microphones, through a loudspeaker rendering system. To track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.
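The amplitude-difference idea can be illustrated with a toy sketch: treat each shotgun microphone's aiming direction as a unit vector and take the amplitude-weighted circular mean. This is an assumed simplification for illustration, not the algorithm of the work itself.

```python
import math

def estimate_azimuth(mic_angles_deg, amplitudes):
    """Estimate a source azimuth (degrees) as the amplitude-weighted
    circular mean of the aiming angles of directional microphones.
    A simplified stand-in for amplitude-difference localization."""
    x = sum(a * math.cos(math.radians(t))
            for t, a in zip(mic_angles_deg, amplitudes))
    y = sum(a * math.sin(math.radians(t))
            for t, a in zip(mic_angles_deg, amplitudes))
    return math.degrees(math.atan2(y, x)) % 360.0
```

For example, equal amplitudes on microphones aimed at 0 and 90 degrees place the source at 45 degrees.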
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To investigate multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with frequency-component selection control and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass filtering stage. Detecting the direct-sound component of the source suppresses room-reverberation interference; this approach is computationally fast and avoids more complex de-reverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude in every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple-source localization experiments indicate that the proposed algorithm yields a smaller average absolute azimuth error and a histogram with higher angular resolution.
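The building block of any broadband MUSIC variant is the narrowband MUSIC pseudo-spectrum, which the method above weights and combines across gammatone frequency channels. The sketch below shows only that narrowband core under a standard uniform-linear-array steering model; the array geometry and weighting are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """Narrowband MUSIC pseudo-spectrum from a spatial covariance matrix R
    and an iterable of steering vectors. A broadband variant would weight
    and sum such spectra across frequency channels."""
    w, V = np.linalg.eigh(R)                      # ascending eigenvalues
    En = V[:, : R.shape[0] - n_sources]           # noise subspace
    P = []
    for a in steering:
        a = a / np.linalg.norm(a)
        denom = float(np.real(a.conj() @ En @ En.conj().T @ a))
        P.append(1.0 / max(denom, 1e-12))         # peak where a _|_ noise subspace
    return np.array(P)
```

With a six-element half-wavelength array and a single source, the pseudo-spectrum peaks at the true direction of arrival.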
The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank
NASA Astrophysics Data System (ADS)
Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing
2018-03-01
In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term based on previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the radiated sound power level of the narrowband spectrum deviation is found to be less than 3 dB, and the 1/3 octave spectrum deviation is found to be less than 1 dB. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also for measurement of radiated sound power from complicated sources in non-anechoic tanks.
Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)
Lakes-Harlan, Reinhard; Scherberich, Jan
2015-01-01
A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574
Perceptual constancy in auditory perception of distance to railway tracks.
De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L
2013-07-01
Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative because prior information/experience of the sound source-its source power, its spectrum and the typical speed at which it moves-is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners-viz., whether they are indoors or outdoors. Asked to estimate the distance to the track of a railway, it is shown that listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.
Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.
Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin
2018-04-25
Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate a ball motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using a 2D microelectromechanical system (MEMS) microphone array and delay-and-sum beamforming is presented to estimate the firing position. The time and position of the ball in 3D space are determined by a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
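Delay-and-sum beamforming, the localization technique named above, can be sketched as a steered-response power search: for each candidate direction, delay each channel to align a hypothetical plane wave and pick the direction with maximum summed power. This is a generic sketch (fractional delays rounded to whole samples), not the paper's implementation.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angles_deg, fs, c=343.0):
    """Return the azimuth (from angles_deg) maximizing the steered-response
    power of a planar microphone array under a far-field plane-wave model."""
    mic_positions = np.asarray(mic_positions, float)
    n = min(len(s) for s in signals)
    best_angle, best_power = None, -np.inf
    for theta in angles_deg:
        d = np.array([np.cos(np.radians(theta)), np.sin(np.radians(theta))])
        delays = mic_positions @ d / c        # relative arrival advances (s)
        delays -= delays.min()
        acc = np.zeros(n)
        for sig, tau in zip(signals, delays):
            k = int(round(tau * fs))
            acc += np.roll(sig[:n], k)        # delay channel to align wavefronts
        power = np.mean(acc ** 2)
        if power > best_power:
            best_angle, best_power = theta, power
    return best_angle
```

With broadband signals, the aligned direction sums coherently (power grows with the square of the channel count) while other directions sum incoherently.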
Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section
NASA Technical Reports Server (NTRS)
Brooks, T. F.; Scheiman, J.; Silcox, R. J.
1976-01-01
Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Design of laser monitoring and sound localization system
NASA Astrophysics Data System (ADS)
Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang
2013-08-01
In this paper, a novel design for a laser monitoring and sound localization system is proposed. It uses a laser to monitor and locate the position of indoor conversation. At present, most laser monitors in China, whether used in the laboratory or in an instrument, use a photodiode or phototransistor as the detector. At the laser receivers of these devices, the light beams are adjusted so that only part of the photodiode or phototransistor window receives the beam. The reflection deviates from its original path because of the vibration of the monitored window, which shifts the imaging spot on the photodiode or phototransistor. However, this method is limited: it admits considerable stray light into the receiver, and only a single photocurrent output can be obtained. Therefore, a new method based on a quadrant detector is proposed. It uses the relation of the optical integrals among the quadrants to locate the position of the imaging spot. This method eliminates background disturbance and acquires two-dimensional spot-vibration data. The principle of the whole system is as follows. Collimated laser beams are reflected from a window vibrating under the influence of the sound source, so the reflected beams are modulated by the vibration source. These optical signals are collected by quadrant detectors and then processed by photoelectric converters and the corresponding circuits, and the speech signals are finally reconstructed. In addition, sound source localization is implemented by detecting three different reflected light spots simultaneously. An indoor mathematical model based on the principle of Time Difference Of Arrival (TDOA) is established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system can monitor an indoor sound source beyond 15 meters with high-quality speech reconstruction and can locate the sound source position accurately.
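The TDOA localization step can be illustrated with a brute-force sketch: search a 2D grid for the point whose predicted arrival-time differences best match the measured ones. This is a generic stand-in for the paper's indoor TDOA model; the sensor layout and grid extent below are hypothetical.

```python
import numpy as np

def tdoa_locate(sensors, tdoas, c=343.0, extent=(0.0, 4.0, 0.0, 4.0), step=0.05):
    """Locate a 2D source from time differences of arrival (relative to
    sensor 0) by grid search over a rectangular region. Returns (x, y)."""
    sensors = np.asarray(sensors, float)
    tdoas = np.asarray(tdoas, float)
    best, best_err = None, np.inf
    for x in np.arange(extent[0], extent[1] + step, step):
        for y in np.arange(extent[2], extent[3] + step, step):
            d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
            pred = (d[1:] - d[0]) / c          # predicted TDOAs vs. sensor 0
            err = np.sum((pred - tdoas) ** 2)  # least-squares mismatch
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

Each TDOA constrains the source to a hyperbola; with three sensors, the two hyperbolae intersect at the source position, which the grid search recovers to grid resolution.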
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2003-10-01
Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environments and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three states of HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by the Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
Ambient Sound-Based Collaborative Localization of Indeterministic Devices
Kamminga, Jacob; Le, Duc; Havinga, Paul
2016-01-01
Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
Investigation of the sound generation mechanisms for in-duct orifice plates.
Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning
2017-08-01
Sound generation due to an orifice plate in a hard-walled flow duct, a configuration commonly used in air distribution systems (ADS) and flow meters, is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate, from which the surface pressure distribution and correlation length are deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plate are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law, and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-sheng R.; Allen, Christopher S.
2009-01-01
Acoustic modeling can be used to identify key noise sources, determine and analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with an interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurements were performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound-blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields an estimate of the sound power of the source. The reverberation time T60 of the mockup interior had been modified to match the reverberation levels of the ISS US Lab interior for the speech frequency bands (0.5, 1, 2, and 4 kHz) by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of the Thinsulate was modeled in three ways: the Sabine equation with the measured mockup interior reverberation time T60, a layup model based on past impedance tube testing, and the layup model plus an air absorption correction.
The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match measurements well for our CM mockup, despite its more complicated shape. Additionally in FY09, background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume that still permits fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and the Orion prime- and sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) limiting pre- and post-landing SIL was proposed.
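Two of the calculations described above are simple enough to sketch directly: integrating measured normal intensity over the enclosing grid surfaces to estimate source power, and the Sabine relation between total absorption and reverberation time. These are textbook formulas, not the study's full SEA model; the numbers passed in below are illustrative.

```python
def sound_power_from_intensity(intensities, areas):
    """Estimate radiated sound power (W) by summing measured normal
    sound intensity (W/m^2) times face area (m^2) over the measured
    faces of the enclosing grid box."""
    return sum(i * a for i, a in zip(intensities, areas))

def sabine_absorption(volume_m3, t60_s):
    """Total absorption area (m^2 sabins) from the Sabine equation
    A = 0.161 * V / T60 (V in m^3, T60 in s)."""
    return 0.161 * volume_m3 / t60_s
```

For example, five faces of 2 m^2 each at 1 uW/m^2 give 10 uW of radiated power, and shortening T60 implies proportionally more absorption in a fixed volume.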
NASA Astrophysics Data System (ADS)
Bi, ChuanXing; Jing, WenQian; Zhang, YongBin; Xu, Liang
2015-02-01
Conventional nearfield acoustic holography (NAH) is usually based on the assumption of free-field conditions and requires the measurement aperture to be larger than the actual source. This paper focuses on the case where neither requirement can be met, and examines the feasibility of reconstructing the sound field radiated by a partial source, based on double-layer pressure measurements made in a non-free field using patch NAH combined with a sound field separation technique. The sensitivity of the reconstructed result to measurement error is also analyzed in detail. Two experiments, involving two speakers in an exterior space and one speaker inside a car cabin, are presented. The experimental results demonstrate that patch NAH based on single-layer pressure measurements cannot obtain a satisfactory result owing to the influence of disturbing sources and reflections, whereas patch NAH based on double-layer pressure measurements successfully removes these influences and reconstructs the patch sound field effectively.
Understanding auditory distance estimation by humpback whales: a computational approach.
Mercado, E; Green, S R; Schneider, J N
2008-02-01
Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias
2017-01-01
In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088
Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian
2016-03-22
Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.
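The sparse-recovery step described above can be illustrated with orthogonal matching pursuit (OMP), one standard compressive-sensing algorithm; the paper does not specify this particular solver, so the sketch below is a generic stand-in in which the matrix `A` maps candidate source locations to sensor measurements via a propagation model.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse source-level
    vector x from measurements y = A @ x by greedily selecting the
    dictionary column most correlated with the residual."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the selected columns jointly and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

Once the sparse source vector is recovered, the full noise map follows by propagating each estimated source level back over the region of interest.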
The Problems with "Noise Numbers" for Wind Farm Noise Assessment
ERIC Educational Resources Information Center
Thorne, Bob
2011-01-01
Human perception responds primarily to sound character rather than sound level. Wind farms are unique sound sources and exhibit special audible and inaudible characteristics that can be described as modulating sound or as a tonal complex. Wind farm compliance measures based on a specified noise number alone will fail to address problems with noise…
Spatial sound field synthesis and upmixing based on the equivalent source method.
Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang
2014-01-01
Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments were performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
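The regularized multichannel inverse-filtering step at the core of such systems can be illustrated with a minimal free-field sketch. The geometry, frequency, and Tikhonov parameter below are assumptions for illustration; the paper's ESM formulation and upmixing strategies are not reproduced here:

```python
import numpy as np

f, c = 250.0, 343.0
k = 2 * np.pi * f / c

def green(src, rec):
    """Free-field Green's functions between point sets (rows: receivers)."""
    r = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

theta = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)
speakers = np.c_[2.0 * np.cos(theta), 2.0 * np.sin(theta)]    # circular loudspeaker array
rng = np.random.default_rng(1)
mics = rng.uniform(-0.3, 0.3, (8, 2))                         # control points in the zone

G = green(speakers, mics)                          # 8 x 16 plant matrix
p_des = green(np.array([[5.0, 0.0]]), mics)[:, 0]  # target: virtual point source at 5 m

lam = 1e-3 * np.trace(G.conj().T @ G).real / G.shape[1]       # Tikhonov regularization
q = np.linalg.solve(G.conj().T @ G + lam * np.eye(16), G.conj().T @ p_des)
err = np.linalg.norm(G @ q - p_des) / np.linalg.norm(p_des)   # reproduction error
```

The regularization term trades reproduction accuracy against driving-signal effort, which is the standard remedy for the ill-posedness the abstract mentions.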
Sound quality indicators for urban places in Paris cross-validated by Milan data.
Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre
2015-10-01
A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the characterization of the global sound environment, the perceived loudness of some emergent sources, and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross-validation of the quality models extracted from the Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the real perceived sound quality (72%). Another model, without visual amenity and familiarity, is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed by Kohonen's Artificial Neural Network algorithm, and seven class-specific models were developed. These specific models attach more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments as assessed by Italian people.
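The type of linear quality model used here can be sketched as follows. The data and the three "perceptive variables" are entirely synthetic and hypothetical; the study's actual predictors, coefficients, and correlation figures differ:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Hypothetical perceptive variables (made up for illustration):
# overall loudness, traffic presence, bird presence -- each rated 0..10
X = rng.uniform(0.0, 10.0, (n, 3))
true_w = np.array([-0.5, -0.3, 0.4])               # assumed ground-truth weights
quality = 7.0 + X @ true_w + rng.normal(0.0, 0.5, n)

A = np.c_[np.ones(n), X]                           # add an intercept column
coef, *_ = np.linalg.lstsq(A, quality, rcond=None) # ordinary least squares
pred = A @ coef
r = np.corrcoef(pred, quality)[0, 1]               # model-vs-data correlation
```

Cross-validation in the study's sense amounts to fitting `coef` on one city's data and evaluating `r` on the other's.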
Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.
Tollin, Daniel J; Yin, Tom C T
2003-10-01
The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.
Aeroacoustic analysis of the human phonation process based on a hybrid acoustic PIV approach
NASA Astrophysics Data System (ADS)
Lodermeyer, Alexander; Tautz, Matthias; Becker, Stefan; Döllinger, Michael; Birk, Veronika; Kniesburges, Stefan
2018-01-01
The detailed analysis of sound generation in human phonation is severely limited as the accessibility to the laryngeal flow region is highly restricted. Consequently, the physical basis of the underlying fluid-structure-acoustic interaction that describes the primary mechanism of sound production is not yet fully understood. Therefore, we propose the implementation of a hybrid acoustic PIV procedure to evaluate aeroacoustic sound generation during voice production within a synthetic larynx model. Focusing on the flow field downstream of synthetic, aerodynamically driven vocal folds, we calculated acoustic source terms based on the velocity fields obtained by time-resolved high-speed PIV applied to the mid-coronal plane. The radiation of these sources into the acoustic far field was numerically simulated and the resulting acoustic pressure was finally compared with experimental microphone measurements. We identified the tonal sound to be generated downstream in a small region close to the vocal folds. The simulation of the sound propagation underestimated the tonal components, whereas the broadband sound was well reproduced. Our results demonstrate the feasibility of locating aeroacoustic sound sources inside a synthetic larynx using a hybrid acoustic PIV approach. Although the technique relies on a flow field limited to two dimensions, it accurately reproduces the basic characteristics of the aeroacoustic field in our larynx model. In future studies, not only the aeroacoustic mechanisms of normal phonation will be assessable, but also the sound generation of voice disorders can be investigated more profoundly.
The Robustness of Acoustic Analogies
NASA Technical Reports Server (NTRS)
Freund, J. B.; Lele, S. K.; Wei, M.
2004-01-01
Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0, where q is the vector of flow variables, into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
Beck, Christoph; Garreau, Guillaume; Georgiou, Julius
2016-01-01
Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes as small as the size of an atom and to locate acoustic stimuli with an accuracy of within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than those observed in nature.
Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André
2012-01-01
This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
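An ITD-based localization step of the kind the robot adapts can be sketched with a classical GCC-PHAT delay estimate. This is a generic textbook method, not the authors' neuromorphic implementation, and the 0.18 m microphone spacing is an assumed value:

```python
import numpy as np

def gcc_phat(ref, sig, fs):
    """Delay of `sig` relative to `ref` in seconds, via GCC-PHAT."""
    n = 2 * len(ref)                                   # zero-pad for linear correlation
    R = np.conj(np.fft.rfft(ref, n)) * np.fft.rfft(sig, n)
    R /= np.abs(R) + 1e-12                             # phase transform weighting
    cc = np.fft.irfft(R, n)
    max_lag = len(ref) // 2
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    return (np.argmax(cc) - max_lag) / fs

fs, d_samples = 16000, 5
rng = np.random.default_rng(2)
left = rng.standard_normal(4096)                       # white-noise source at the left ear
right = np.concatenate((np.zeros(d_samples), left))[:len(left)]  # delayed copy

itd = gcc_phat(left, right, fs)                        # estimated interaural time difference
# Far-field azimuth from the ITD, assuming a 0.18 m microphone baseline:
azimuth = np.degrees(np.arcsin(np.clip(343.0 * itd / 0.18, -1.0, 1.0)))
```

The adaptive part of the paper's algorithm would then correct this ITD-to-azimuth mapping using visual feedback during self-motion.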
Blind separation of incoherent and spatially disjoint sound sources
NASA Astrophysics Data System (ADS)
Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter
2016-11-01
Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
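The decorrelation step the paper builds on, diagonalizing the cross-spectral (covariance) matrix with PCA, can be sketched on synthetic data. This is a toy real-valued mixing model, not the paper's aeroacoustic setup:

```python
import numpy as np

rng = np.random.default_rng(3)
n_mics, n_snap = 6, 5000

# Two incoherent (statistically orthogonal) sources
S = rng.standard_normal((2, n_snap))
A = rng.standard_normal((n_mics, 2))       # mixing: source-to-microphone transfers
X = A @ S + 0.01 * rng.standard_normal((n_mics, n_snap))

C = X @ X.T / n_snap                       # real-valued analogue of the cross-spectral matrix
w, U = np.linalg.eigh(C)                   # PCA: eigendecomposition of C
Y = U.T @ X                                # "virtual sources" returned by PCA

C_virtual = Y @ Y.T / n_snap               # should be (numerically) diagonal
off_diag = C_virtual - np.diag(np.diag(C_virtual))
```

As the paper's first finding stipulates, these decorrelated virtual sources coincide with the true sources only when the sources are additionally spatially orthogonal (e.g. spatially disjoint); plain PCA only guarantees the diagonalization shown here.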
Mathematically trivial control of sound using a parametric beam focusing source.
Tanaka, Nobuo; Tanaka, Motoki
2011-01-01
By exploiting a case usually regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). In a dipole model, where one source acts as the primary sound source and the other as the control sound source, the control effect for minimizing the total acoustic power depends on the distance between the two: when the distance becomes zero, the total acoustic power becomes null, which is nothing less than the trivial case. Practical constraints, however, make it difficult to place a control source close enough to the primary source. By projecting the sound beam of a parametric array loudspeaker onto the target sound source (the primary source), a virtual sound source may be created on the target, thereby enabling collocation of the sources. To further ensure the feasibility of the trivial case, a PBFS is then introduced to match the size of the two sources. The reflected sound wave of the PBFS, which is tantamount to the output of the virtual sound source, aims to suppress the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology.
Personal sound zone reproduction with room reflections
NASA Astrophysics Data System (ADS)
Olik, Marek
Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. 
Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
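The acoustic-contrast criterion at the heart of such sound zone systems can be sketched as a generalized eigenvalue problem. The sketch below uses free-field point sources and a hypothetical geometry; room reflections, the thesis's actual focus, are omitted:

```python
import numpy as np

f, c = 1000.0, 343.0
k = 2 * np.pi * f / c

def green(src, rec):
    """Free-field Green's functions between point sets (rows: receivers)."""
    r = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

x = np.linspace(-0.7, 0.7, 8)
speakers = np.c_[x, np.zeros(8)]                              # 8-source line array
bright = np.c_[np.linspace(-0.2, 0.2, 12), np.full(12, 1.5)]  # target ("bright") zone
dark = np.c_[np.linspace(0.8, 1.2, 12), np.full(12, 1.5)]     # quiet ("dark") zone

Gb, Gd = green(speakers, bright), green(speakers, dark)

def contrast_db(q):
    """Acoustic contrast: mean bright-zone energy over mean dark-zone energy."""
    return 10 * np.log10(np.mean(np.abs(Gb @ q) ** 2)
                         / np.mean(np.abs(Gd @ q) ** 2))

# Acoustic contrast control: maximize q^H Gb^H Gb q / q^H Gd^H Gd q
A = Gb.conj().T @ Gb
B = Gd.conj().T @ Gd + 1e-9 * np.eye(8)        # small regularization of the dark-zone term
w, V = np.linalg.eig(np.linalg.solve(B, A))
q_opt = V[:, np.argmax(w.real)]                # dominant generalized eigenvector

uniform = np.ones(8) / 8                       # naive equal-drive reference
```

Source-position optimization, the thesis's contribution, amounts to searching over the `speakers` geometry for the arrangement that maximizes this contrast while keeping the array effort `q^H q` low.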
Physics of thermo-acoustic sound generation
NASA Astrophysics Data System (ADS)
Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.
2013-09-01
We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.
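The scaling law reported above, efficiency linear in supplied thermal power and quadratic in excitation frequency, can be expressed directly. The prefactor below is an arbitrary placeholder, not a value from the paper:

```python
def thermoacoustic_efficiency(p_thermal, f, c0=1e-12):
    """Scaling sketch: efficiency ~ c0 * P_thermal * f**2.
    c0 is an arbitrary placeholder prefactor, not a value from the paper."""
    return c0 * p_thermal * f ** 2

base = thermoacoustic_efficiency(1.0, 100e3)    # 1 W of thermal power at 100 kHz
double_power = thermoacoustic_efficiency(2.0, 100e3)
double_freq = thermoacoustic_efficiency(1.0, 200e3)
```

Doubling the thermal power doubles the efficiency, while doubling the excitation frequency quadruples it, matching the trends stated in the abstract.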
Focusing and directional beaming effects of airborne sound through a planar lens with zigzag slits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kun; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang
2015-01-14
Based on the Huygens-Fresnel principle, we design a planar lens to efficiently realize the interconversion between a point-like sound source and a Gaussian beam in ambient air. The lens is constructed from a planar plate perforated elaborately with a nonuniform array of zigzag slits, where the slit exits act as subwavelength-sized secondary sources carrying the desired sound responses. Experiments operated in the audible regime agree well with the theoretical predictions. This compact device could be useful in daily life applications, such as for medical and detection purposes.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar-geometry arrays. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy, and its performance improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. Together, these two algorithms constitute the robust sound source localization approach. More accurate steering vectors can then be provided for further processing, such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
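A minimal narrowband MUSIC sketch shows the core of the localization stage. This uses a standard far-field uniform linear array without the paper's model-error estimation, near-field model, or 2-D weighting, so it is a simplification:

```python
import numpy as np

c, f = 343.0, 2000.0
k = 2 * np.pi * f / c
M, d = 8, 0.04                       # 8-element ULA, 4 cm spacing (< half wavelength)

def steering(theta_deg):
    """Far-field ULA steering vector for arrival angle theta (degrees)."""
    return np.exp(-1j * k * d * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(3)
n_snap, theta_true = 400, 20.0
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(steering(theta_true), s)          # single source impinging at 20 degrees
X = X + 0.05 * (rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap)))

R = X @ X.conj().T / n_snap                    # sample covariance matrix
w, V = np.linalg.eigh(R)                       # eigenvalues in ascending order
E_noise = V[:, : M - 1]                        # noise subspace (one source assumed)

grid = np.arange(-90.0, 90.25, 0.25)
spectrum = np.array([1.0 / np.linalg.norm(E_noise.conj().T @ steering(g)) ** 2
                     for g in grid])           # MUSIC pseudo-spectrum
theta_hat = grid[int(np.argmax(spectrum))]
```

The paper's contribution can be read as replacing the idealized `steering` model above with one whose gain, phase, and element-position errors have first been estimated from the data.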
Sound field reproduction as an equivalent acoustical scattering problem.
Fazi, Filippo Maria; Nelson, Philip A
2013-11-01
Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
1987-07-01
fields (see also Chapter 4 of Ref. 22). Like our investigation, theirs is based on the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation [23,24...25,26], also based on the KZK equation, is limited to weakly nonlinear systems. However, the practical case of a focused circular source with gain of...iment. The demonstrated ability of the KZK equation to accurately model focused sound fields from realistic sources [i.e., having abrupt edges and
A New Mechanism of Sound Generation in Songbirds
NASA Astrophysics Data System (ADS)
Goller, Franz; Larsen, Ole N.
1997-12-01
Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.
Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources
Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA
2007-03-13
A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as animate sound sources such as the human voice, or from machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.
Kidd, Gerald
2017-10-17
Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. 
Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.
Graphene-on-paper sound source devices.
Tian, He; Ren, Tian-Ling; Xie, Dan; Wang, Yu-Feng; Zhou, Chang-Jian; Feng, Ting-Ting; Fu, Di; Yang, Yi; Peng, Ping-Gang; Wang, Li-Gang; Liu, Li-Tian
2011-06-28
We demonstrate the interesting phenomenon that graphene can emit sound, expanding the application of graphene into the acoustic field. Graphene-on-paper sound source devices are made by patterning graphene on paper substrates. Three graphene sheet samples, with thicknesses of 100, 60, and 20 nm, were fabricated. Sound emission from graphene is measured as a function of power, distance, angle, and frequency in the far field. A theoretical model of the air/graphene/paper/PCB-board multilayer structure is established to analyze the sound directivity, frequency response, and efficiency. The measured sound pressure level (SPL) and efficiency are in good agreement with the theoretical results. It is found that graphene has a notably flat frequency response over the wide ultrasound range of 20-50 kHz. In addition, thinner graphene sheets produce a higher SPL due to their lower heat capacity per unit area (HCPUA). Infrared thermal images reveal that a thermoacoustic effect is the working principle, and the sound performance mainly depends on the HCPUA of the conductor and the thermal properties of the substrate. The paper-based graphene sound source devices are highly reliable and flexible, involve no mechanical vibration, and offer a simple structure and high performance. They could open up wide applications in multimedia, consumer electronics, biological and medical devices, and many other areas.
Sound field separation with sound pressure and particle velocity measurements.
Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin
2012-12-01
In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
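The single-layer pressure-velocity idea has a simple 1-D plane-wave analogue: for waves along the measurement normal, the outgoing and incoming components follow from the pressure p and normal particle velocity u as p± = (p ± ρc·u)/2. The sketch below demonstrates only this plane-wave intuition, not the paper's equivalent-source formulation or weighting scheme:

```python
import numpy as np

rho_c = 1.21 * 343.0                      # characteristic impedance of air (rho0 * c)
k = 2 * np.pi * 1000.0 / 343.0            # wavenumber at 1 kHz
x = np.linspace(0.0, 0.5, 64)             # measurement positions along the normal

A_out, B_in = 1.0 + 0.5j, 0.3 - 0.2j      # outgoing / incoming complex amplitudes
p = A_out * np.exp(-1j * k * x) + B_in * np.exp(1j * k * x)   # total pressure
u = (A_out * np.exp(-1j * k * x) - B_in * np.exp(1j * k * x)) / rho_c  # particle velocity

p_out = 0.5 * (p + rho_c * u)             # recovered outgoing (source-side) field
p_in = 0.5 * (p - rho_c * u)              # recovered incoming (disturbing) field
```

With the incoming disturbance removed, `p_out` is the field that conventional NAH can then propagate back toward the source.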
Performance of an open-source heart sound segmentation algorithm on eight independent databases.
Liu, Chengyu; Springer, David; Clifford, Gari D
2017-08-01
Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, using a wider variety of independently acquired data of varying quality. First, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. The HSMM-based segmentation method was then evaluated on the eight assembled databases. The common evaluation metrics of sensitivity, specificity, and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprising 102 306 heart sounds. Average F1 scores of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness.
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
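The tolerance-window scoring described above can be sketched in a few lines. This is a minimal illustration with hypothetical boundary times and a greedy matcher, not the PhysioNet/CinC evaluation code itself:

```python
# Hedged sketch: boundary-level sensitivity/precision/F1 with a symmetric
# tolerance window. Greedy matching; hypothetical boundary times in ms.

def match_boundaries(reference_ms, detected_ms, tolerance_ms=100):
    """Count reference boundaries matched by an unused detection in-window."""
    unused = list(detected_ms)
    true_pos = 0
    for ref in reference_ms:
        hit = next((d for d in unused if abs(d - ref) <= tolerance_ms), None)
        if hit is not None:
            unused.remove(hit)
            true_pos += 1
    return true_pos

def f1_score(reference_ms, detected_ms, tolerance_ms=100):
    tp = match_boundaries(reference_ms, detected_ms, tolerance_ms)
    sensitivity = tp / len(reference_ms)   # recall
    ppv = tp / len(detected_ms)            # positive predictive value
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# S1 onsets: one detection drifts outside the 100 ms window.
ref = [0, 800, 1600, 2400]
det = [30, 790, 1750, 2395]
print(f1_score(ref, det))  # → 0.75
```

Widening the tolerance window lets near-miss detections count as correct, which is why the reported F1 rises with window size.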
Directional Hearing and Sound Source Localization in Fishes.
Sisneros, Joseph A; Rogers, Peter H
2016-01-01
Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., by combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.
NASA Astrophysics Data System (ADS)
Ovsiannikov, Mikhail; Ovsiannikov, Sergei
2017-01-01
The paper presents a combined approach to noise mapping and visualization of the sound pollution of industrial facilities using a forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for the range computation of sanitary zones based on the ray-tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
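The zone-interpolation step can be illustrated with a small thin-plate spline fit. The measurement points, levels, and the tiny dense solver below are illustrative stand-ins (production code would normally call a library routine such as SciPy's `RBFInterpolator`):

```python
import math

# Hedged sketch: thin-plate spline interpolation of measured sound levels
# over a 2-D zone, in the spirit of the paper's approach. Synthetic data.

def _tps_kernel(r):
    return 0.0 if r == 0.0 else r * r * math.log(r)

def _solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tps_fit(points, values):
    """Fit f(x, y) = sum_i w_i * U(|x - p_i|) + a0 + a1*x + a2*y."""
    n = len(points)
    A = [[0.0] * (n + 3) for _ in range(n + 3)]
    b = values[:] + [0.0, 0.0, 0.0]
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            A[i][j] = _tps_kernel(math.hypot(xi - xj, yi - yj))
        A[i][n], A[i][n + 1], A[i][n + 2] = 1.0, xi, yi
        A[n][i], A[n + 1][i], A[n + 2][i] = 1.0, xi, yi
    coef = _solve(A, b)
    w, (a0, a1, a2) = coef[:n], coef[n:]
    def f(x, y):
        s = sum(wi * _tps_kernel(math.hypot(x - xi, y - yi))
                for wi, (xi, yi) in zip(w, points))
        return s + a0 + a1 * x + a2 * y
    return f

# Perimeter + interior SPL measurements (dB) for one clustered zone.
pts = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]
spl = [62.0, 65.0, 64.0, 61.0, 70.0]
f = tps_fit(pts, spl)
```

The fit reproduces the measured levels exactly at the data points and smoothly interpolates the zone between them.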
OPO lidar sounding of trace atmospheric gases in the 3-4 μm spectral range
NASA Astrophysics Data System (ADS)
Romanovskii, Oleg A.; Sadovnikov, Sergey A.; Kharchenko, Olga V.; Yakovlev, Semen V.
2018-04-01
The applicability of a KTA crystal-based laser system with optical parametric oscillator (OPO) generation to lidar sounding of the atmosphere in the 3-4 μm spectral range is studied in this work. A technique developed for lidar sounding of trace atmospheric gases (TAGs) is based on the differential absorption lidar (DIAL) method and differential optical absorption spectroscopy (DOAS). The DIAL-DOAS technique is tested to estimate its efficiency for lidar sounding of atmospheric trace gases. The numerical simulation performed shows that a KTA-based OPO laser is a promising radiation source for remote DIAL-DOAS sounding of the TAGs under study along surface tropospheric paths. The possibility of using a PD38-03-PR photodiode for DIAL gas analysis of the atmosphere is also shown.
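The retrieval underlying the DIAL method can be sketched with the standard two-range, on/off-line formula. All numbers below are synthetic placeholders, not the paper's measurements:

```python
import math

# Hedged sketch of a basic two-range DIAL retrieval: gas number density
# from on-line/off-line backscatter powers at two ranges.

def dial_number_density(p_on_r1, p_on_r2, p_off_r1, p_off_r2,
                        delta_sigma_cm2, delta_r_cm):
    """N = ln[(P_on(r1)*P_off(r2)) / (P_on(r2)*P_off(r1))] / (2*dsigma*dr)."""
    ratio = (p_on_r1 * p_off_r2) / (p_on_r2 * p_off_r1)
    return math.log(ratio) / (2.0 * delta_sigma_cm2 * delta_r_cm)

# Differential cross section 1e-20 cm^2, 100 m range cell; synthetic powers
# chosen so the on-line return is absorbed by a factor exp(0.2) between ranges.
n = dial_number_density(p_on_r1=math.exp(0.2), p_on_r2=1.0,
                        p_off_r1=1.0, p_off_r2=1.0,
                        delta_sigma_cm2=1e-20, delta_r_cm=1e4)
print(f"{n:.3e} molecules/cm^3")  # ~1e15 for this synthetic ratio
```

The off-line wavelength cancels the range-dependent backscatter and extinction terms, leaving only the differential absorption of the target gas.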
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since the solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
Replacing the Orchestra? – The Discernibility of Sample Library and Live Orchestra Sounds
Wolf, Anna; Platz, Friedrich; Mons, Jan
2016-01-01
Recently, musical sounds from pre-recorded orchestra sample libraries (OSLs) have become indispensable in music production for the stage or the popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or from OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It was shown that the entire sample of listeners (N = 602) identified the correct sound source at an average rate of 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resemble the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons. PMID:27382932
On the sound insulation of acoustic metasurface using a sub-structuring approach
NASA Astrophysics Data System (ADS)
Yu, Xiang; Lu, Zhenbo; Cheng, Li; Cui, Fangsen
2017-08-01
The feasibility of using an acoustic metasurface (AMS) with acoustic stop-band property to realize sound insulation with ventilation function is investigated. An efficient numerical approach is proposed to evaluate its sound insulation performance. The AMS is excited by a reverberant sound source and the standardized sound reduction index (SRI) is numerically investigated. To facilitate the modeling, the coupling between the AMS and the adjacent acoustic fields is formulated using a sub-structuring approach. A modal based formulation is applied to both the source and receiving room, enabling an efficient calculation in the frequency range from 125 Hz to 2000 Hz. The sound pressures and the velocities at the interface are matched by using a transfer function relation based on "patches". For illustration purposes, numerical examples are investigated using the proposed approach. The unit cell constituting the AMS is constructed in the shape of a thin acoustic chamber with tailored inner structures, whose stop-band property is numerically analyzed and experimentally demonstrated. The AMS is shown to provide effective sound insulation of over 30 dB in the stop-band frequencies from 600 to 1600 Hz. It is also shown that the proposed approach has the potential to be applied to a broad range of AMS studies and optimization problems.
Characterisation of structure-borne sound source using reception plate method.
Putra, A; Saari, N F; Bakri, H; Ramlan, R; Dan, R M
2013-01-01
A laboratory-based experimental procedure of the reception plate method for structure-borne sound source characterisation is reported in this paper. The method uses the assumption that the input power from the source installed on the plate is equal to the power dissipated by the plate. In this experiment, rectangular plates having high and low mobility relative to that of the source were used as the reception plates, and a small electric fan motor acted as the structure-borne source. The data representing the source characteristics, namely the free velocity and the source mobility, were obtained and compared with those from direct measurement. Assumptions and constraints of employing this method are discussed.
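The power balance behind the reception plate method can be written directly. This is a minimal sketch assuming the textbook dissipation relation W = ωηm⟨v²⟩, with illustrative values rather than the paper's measured data:

```python
import math

# Hedged sketch of the reception-plate power balance: structure-borne input
# power is assumed equal to the power dissipated by the plate.

def reception_plate_power(freq_hz, loss_factor, plate_mass_kg, v2_mean):
    """W = omega * eta * m * <v^2>, from the spatially averaged
    mean-square plate velocity, plate mass, and loss factor."""
    omega = 2.0 * math.pi * freq_hz
    return omega * loss_factor * plate_mass_kg * v2_mean

# A fan motor exciting a 12 kg plate at 250 Hz with <v^2> = 1e-6 (m/s)^2.
w_in = reception_plate_power(250.0, 0.05, 12.0, 1e-6)
print(f"{w_in * 1e3:.3f} mW")  # → 0.942 mW
```

Measuring the plate's averaged velocity thus yields the source's input power without instrumenting the contact points directly.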
Je, Yub; Lee, Haksue; Park, Jongkyu; Moon, Wonkyu
2010-06-01
An ultrasonic radiator is developed to generate a difference frequency sound from two frequencies of ultrasound in air with a parametric array. A design method is proposed for an ultrasonic radiator capable of generating highly directive, high-amplitude ultrasonic sound beams at two different frequencies in air based on a modification of the stepped-plate ultrasonic radiator. The stepped-plate ultrasonic radiator was introduced by Gallego-Juarez et al. [Ultrasonics 16, 267-271 (1978)] in their previous study and can effectively generate highly directive, large-amplitude ultrasonic sounds in air, but only at a single frequency. Because parametric array sources must be able to generate sounds at more than one frequency, a design modification is crucial to the application of a stepped-plate ultrasonic radiator as a parametric array source in air. The aforementioned method was employed to design a parametric radiator for use in air. A prototype of this design was constructed and tested to determine whether it could successfully generate a difference frequency sound with a parametric array. The results confirmed that the proposed single small-area transducer was suitable as a parametric radiator in air.
A method for evaluating the relation between sound source segregation and masking
Lutfi, Robert A.; Liu, Ching-Ju
2011-01-01
Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979
Simplified Rotation In Acoustic Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.
1989-01-01
New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.
The role of long-term familiarity and attentional maintenance in short-term memory for timbre.
Siedenburg, Kai; McAdams, Stephen
2017-04-01
We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched with one of three previously presented sounds (item recognition). In Exp. 1, musicians better recognised familiar acoustic compared to unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance with concurrent articulatory suppression, visual interference, and with a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of musical training of participants or type of sounds. Our results suggest that familiarity with sound source categories and attention play important roles in short-term memory for timbre, which rules out accounts solely based on sensory persistence.
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2003-01-01
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C
2013-05-21
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2007-10-16
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Hu, Ding-Yu; Zhang, Yong-Bin; Jing, Wen-Qian
2015-06-01
In previous studies, an equivalent source method (ESM)-based technique for recovering the free sound field in a noisy environment has been successfully applied to exterior problems. In order to evaluate its performance when applied to a more general noisy environment, that technique is used to identify active sources inside cavities where the sound field is composed of the field radiated by active sources and that reflected by walls. A patch approach with two semi-closed surfaces covering the target active sources is presented to perform the measurements, and the field that would be radiated by these target active sources into free space is extracted from the mixed field by using the proposed technique, which will be further used as the input of nearfield acoustic holography for source identification. Simulation and experimental results validate the effectiveness of the proposed technique for source identification in cavities, and show the feasibility of performing the measurements with a double layer planar array.
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). The sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
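The minimum variance distortionless response weighting can be sketched for a two-microphone case with an explicit 2x2 inverse. The diagonal covariance is an assumed test case for illustration, not the paper's near-field measurement setup:

```python
import cmath
import math

# Hedged sketch: MVDR (Capon) weights w = R^{-1} a / (a^H R^{-1} a)
# for a 2x2 covariance R and steering vector a, written out by hand.

def mvdr_weights(R, a):
    (r11, r12), (r21, r22) = R
    det = r11 * r22 - r12 * r21
    Rinv = [[ r22 / det, -r12 / det],
            [-r21 / det,  r11 / det]]
    Ra = [Rinv[0][0] * a[0] + Rinv[0][1] * a[1],
          Rinv[1][0] * a[0] + Rinv[1][1] * a[1]]
    denom = a[0].conjugate() * Ra[0] + a[1].conjugate() * Ra[1]
    return [Ra[0] / denom, Ra[1] / denom]

# Diagonal (noise-only) covariance: MVDR collapses to delay-and-sum.
a = [1.0 + 0j, cmath.exp(-1j * math.pi / 4)]
R = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
w = mvdr_weights(R, a)
# Distortionless constraint: w^H a == 1 by construction.
resp = w[0].conjugate() * a[0] + w[1].conjugate() * a[1]
```

With interferers present, R becomes non-diagonal and the same formula steers nulls toward them while preserving unit gain in the look direction.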
The effect of spatial distribution on the annoyance caused by simultaneous sounds
NASA Astrophysics Data System (ADS)
Vos, Joos; Bronkhorst, Adelbert W.; Fedtke, Thomas
2004-05-01
A considerable part of the population is exposed to simultaneous and/or successive environmental sounds from different sources. In many cases, these sources also differ in location. In a laboratory study, it was investigated whether the annoyance caused by multiple sounds is affected by the spatial distribution of the sources. There were four independent variables: (1) sound category (stationary or moving), (2) sound type (stationary: lawn-mower, leaf-blower, and chain saw; moving: road traffic, railway, and motorbike), (3) spatial location (left, right, and combinations), and (4) A-weighted sound exposure level (ASEL of single sources equal to 50, 60, or 70 dB). In addition to the individual sounds in isolation, various combinations of two or three different sources within each sound category and sound level were presented for rating. The annoyance was mainly determined by sound level and sound source type. In most cases there were neither significant main effects of spatial distribution nor significant interaction effects between spatial distribution and the other variables. It was concluded that for rating the spatially distributed sounds investigated, the noise dose can simply be determined by a summation of the levels for the left and right channels. [Work supported by CEU.]
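The concluding summation rule can be made concrete with standard decibel arithmetic; a minimal sketch of energetic level addition:

```python
import math

# Hedged sketch: energetic summation of simultaneous source levels, the
# decibel arithmetic behind summing the left- and right-channel noise dose.

def combine_levels(levels_db):
    """L_total = 10 * log10(sum_i 10^(L_i / 10))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Two equal 60 dB sources combine to about 63 dB, a +3 dB increase.
print(round(combine_levels([60.0, 60.0]), 2))  # → 63.01
```

Because the levels add energetically, a quieter second source raises the total by far less than 3 dB, consistent with annoyance being dominated by the louder source.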
Spherical loudspeaker array for local active control of sound.
Rafaely, Boaz
2009-05-01
Active control of sound has been employed to reduce noise levels around listeners' heads using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources capable of generating sound fields of high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source are shell-shaped. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
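The destructive-interference principle behind such noise-canceling sources can be sketched with free-field monopoles. The frequency and geometry are illustrative, and this simple point-cancellation setup is not the paper's spherical-array formulation:

```python
import cmath
import math

# Hedged sketch: choose a secondary monopole's strength so that it nulls
# the primary field at one control point (idealized free-field monopoles).

def monopole_pressure(q, r, k):
    """Free-field monopole: p = q * exp(-j*k*r) / (4*pi*r)."""
    return q * cmath.exp(-1j * k * r) / (4 * math.pi * r)

def cancelling_strength(q_primary, r_primary, r_secondary, k):
    """Pick q_s so that p_secondary = -p_primary at the control point."""
    p_p = monopole_pressure(q_primary, r_primary, k)
    return -p_p * 4 * math.pi * r_secondary * cmath.exp(1j * k * r_secondary)

k = 2 * math.pi * 500 / 343.0  # 500 Hz in air, c = 343 m/s
q_s = cancelling_strength(1.0, r_primary=2.0, r_secondary=0.5, k=k)
total = monopole_pressure(1.0, 2.0, k) + monopole_pressure(q_s, 0.5, k)
# |total| is (numerically) zero at the control point.
```

A single secondary monopole only guarantees cancellation at the control point itself; enlarging and shaping the quiet zone is exactly what the multi-channel spherical array and numerical optimization in the paper address.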
Spherical harmonic analysis of the sound radiation from omnidirectional loudspeaker arrays
NASA Astrophysics Data System (ADS)
Pasqual, A. M.
2014-09-01
Omnidirectional sound sources are widely used in room acoustics. These devices are made up of loudspeakers mounted on a spherical or polyhedral cabinet, among which the dodecahedral shape prevails. Although such electroacoustic sources have been made readily available to acousticians by many manufacturers, an in-depth investigation of their vibroacoustic behavior has not yet been provided. To fill this gap, this paper presents a theoretical study of the sound radiation from omnidirectional loudspeaker arrays, carried out using a mathematical model based on spherical harmonic analysis. Eight different loudspeaker arrangements on the sphere are considered: the five well-known Platonic solid layouts and three extremal system layouts. The latter possess useful properties for spherical loudspeaker arrays used as directivity-controlled sound sources, so these layouts are included here in order to investigate whether they could be of interest as omnidirectional sources as well. It is shown through a comparative analysis that the dodecahedral array leads to the lowest error in producing an omnidirectional sound field and to the highest acoustic power, which corroborates the prevalence of that layout. In addition, if a source with fewer than 12 loudspeakers is required, it is shown that tetrahedra or hexahedra can be used instead, whereas the extremal system layouts are not interesting choices for omnidirectional loudspeaker arrays.
Theory of acoustic design of opera house and a design proposal
NASA Astrophysics Data System (ADS)
Ando, Yoichi
2004-05-01
First of all, the theory of subjective preference for sound fields based on a model of the auditory-brain system is briefly described. It consists of temporal factors and spatial factors, associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time-delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub). Their preferred conditions are related to the minimum value of the effective duration of the running autocorrelation function of the source signals, (τe)min. The spatial criteria are the binaural listening level (LL) and the IACC, which may be extracted from the interaural cross-correlation function. In an opera house there are two different kinds of sound sources: the vocal source, with relatively short values of (τe)min, on the stage, and the orchestral music, with long values of (τe)min, in the pit. For these sources, a design proposal is made here.
Seismic and Biological Sources of Ambient Ocean Sound
NASA Astrophysics Data System (ADS)
Freeman, Simon Eric
Sound is the most efficient form of radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas.
Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.
An acoustic glottal source for vocal tract physical models
NASA Astrophysics Data System (ADS)
Hannukainen, Antti; Kuortti, Juha; Malinen, Jarmo; Ojalammi, Antti
2017-11-01
A sound source is proposed for the acoustic measurement of physical models of the human vocal tract. The physical models are produced by fast prototyping, based on magnetic resonance imaging during prolonged vowel production. The sound source, accompanied by custom signal processing algorithms, is used for two kinds of measurements from physical models of the vocal tract: (i) amplitude frequency response and resonant frequency measurements, and (ii) signal reconstructions at the source output according to a target pressure waveform with measurements at the mouth position. The proposed source and the software are validated by computational acoustics experiments and measurements on a physical model of the vocal tract corresponding to the vowels [] of a male speaker.
Acoustic centering of sources measured by surrounding spherical microphone arrays.
Hagai, Ilan Ben; Pollow, Martin; Vorländer, Michael; Rafaely, Boaz
2011-10-01
The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods. © 2011 Acoustical Society of America
Sound Source Identification Through Flow Density Measurement and Correlation With Far Field Noise
NASA Technical Reports Server (NTRS)
Panda, J.; Seasholtz, R. G.
2001-01-01
Sound sources in the plumes of unheated round jets, in the Mach number range 0.6 to 1.8, were investigated experimentally using the "causality" approach, in which air density fluctuations in the plumes were correlated with the far-field noise. The air density was measured using a newly developed molecular Rayleigh-scattering based technique, which did not require any seeding. The reference at the end provides a detailed description of the measurement technique.
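The causality approach pairs an in-plume signal with the far-field noise through a cross-correlation that peaks at the propagation delay. A minimal sketch with synthetic signals (the study itself used Rayleigh-scattering density data):

```python
import math

# Hedged sketch of the "causality" correlation: normalized cross-correlation
# between an in-plume fluctuation signal and the far-field pressure at a lag.

def normalized_xcorr(x, y, lag):
    """R_xy(lag) / sqrt(R_xx(0) * R_yy(0)) over the overlapping samples."""
    n = len(x) - lag
    num = sum(x[i] * y[i + lag] for i in range(n))
    return num / math.sqrt(sum(v * v for v in x) * sum(v * v for v in y))

# Far-field signal modeled as a delayed, scaled copy of the source fluctuation.
source = [math.sin(0.3 * i) for i in range(200)]
delay = 25
farfield = [0.0] * delay + [0.5 * v for v in source]
peak = normalized_xcorr(source, farfield, lag=delay)
```

A strong correlation at the acoustic travel time is what attributes a region of the plume to the measured far-field noise.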
NASA Technical Reports Server (NTRS)
Conner, David A.; Page, Juliet A.
2002-01-01
To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations located on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model and the output results. 
Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.
Issues in Humanoid Audition and Sound Source Localization by Active Audition
NASA Astrophysics Data System (ADS)
Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki
In this paper, we present an active audition system implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition, as reported in this paper, enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.
Imaging of heart acoustic based on the sub-space methods using a microphone array.
Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo
2017-07-01
Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal that represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG using a microphone array by which the heart's internal sound sources can also be localized. We propose a modality by which the locations of the active sources in the heart can be investigated over a cardiac cycle. A microphone array with six microphones, placed on the human chest, is employed as the recording setup. The Group Delay MUSIC algorithm, a subspace-based localization method, is then used to estimate the locations of the heart sources in different phases of the PCG. We achieved a 0.14 cm mean error for the sources of a first heart sound (S1) simulator and a 0.21 cm mean error for the sources of a second heart sound (S2) simulator with the Group Delay MUSIC algorithm. The acoustical diagrams created for human subjects show distinct patterns in various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the estimated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart's acoustic map presents a new way to characterize the acoustic properties of the cardiovascular system and disorders of the valves and thereby, in the future, could be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.
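The Group Delay MUSIC localization above builds on the classic MUSIC subspace idea: the sample covariance of the array snapshots is split into signal and noise subspaces, and source directions are those whose steering vectors are nearly orthogonal to the noise subspace. A minimal sketch of a conventional narrowband MUSIC pseudo-spectrum for a linear array follows; it illustrates the subspace principle only, not the paper's Group Delay variant or its chest-array geometry.

```python
import numpy as np

def music_spectrum(X, n_sources, mic_pos, freq, angles, c=343.0):
    """Narrowband MUSIC pseudo-spectrum for a linear microphone array.
    X: (n_mics, n_snapshots) complex snapshots at one frequency bin;
    mic_pos: sensor positions (m) along the array axis; angles: candidate
    arrival angles (rad) measured from broadside. This is the classic
    subspace method, not the paper's Group Delay variant."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]       # sample covariance
    w, v = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = v[:, : n_mics - n_sources]       # noise-subspace eigenvectors
    k = 2.0 * np.pi * freq / c
    spectrum = []
    for th in angles:
        a = np.exp(-1j * k * mic_pos * np.sin(th))   # steering vector
        spectrum.append(n_mics / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)
```

Peaks of the pseudo-spectrum over the candidate angles give the estimated source directions.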
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activities inside the brain's auditory structure that are correlated with tinnitus pathology. Specifically, an acoustic-modeling-based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate the original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments with non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain's auditory structure that are directly correlated with tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
Interactive physically-based sound simulation
NASA Astrophysics Data System (ADS)
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual, properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer.
Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real-time on large, complex 3D scenes.
Development of an Acoustic Signal Analysis Tool “Auto-F” Based on the Temperament Scale
NASA Astrophysics Data System (ADS)
Modegi, Toshio
The MIDI interface was originally designed for electronic musical instruments, but we believe this music-note-based coding concept can be extended to general acoustic signal description. We proposed applying MIDI technology to the coding of biomedical auscultation sound signals, such as heart sounds, for retrieving medical records and performing telemedicine. We have since extended our encoding targets to include vocal sounds, natural sounds, and electronic bio-signals such as ECG, using the Generalized Harmonic Analysis method. Currently, we are trying to separate the vocal sounds included in popular songs and encode the vocal and background instrumental sounds into separate MIDI channels. We are also trying to extract articulation parameters, such as MIDI pitch-bend parameters, in order to reproduce natural acoustic sounds using a GM-standard MIDI tone generator. In this paper, we present the overall algorithm of our acoustic signal analysis tool, based on this research, which can analyze given time-based signals on the musical temperament scale. The prominent feature of this tool is that it produces high-precision MIDI codes that reproduce signals similar to the given source signal on a GM-standard MIDI tone generator, and it also provides the analysis results as XML-formatted text.
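At the core of mapping an acoustic signal onto the temperament scale is the frequency-to-MIDI-note conversion; the pitch-bend parameters mentioned above capture the residual between a measured frequency and the nearest equal-tempered note. A minimal sketch of that conversion (the tool's actual encoding pipeline is far more elaborate):

```python
import math

def freq_to_midi(freq_hz):
    """Map a frequency to the nearest MIDI note number plus a residual
    pitch-bend offset in semitones, using 12-tone equal temperament
    with A4 = MIDI note 69 = 440 Hz."""
    semitones = 69.0 + 12.0 * math.log2(freq_hz / 440.0)
    note = round(semitones)
    return note, semitones - note
```

For example, 440 Hz maps exactly to note 69 with zero bend, while middle C (about 261.63 Hz) maps to note 60 with a negligible bend.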
Modeling the utility of binaural cues for underwater sound localization.
Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo
2014-06-01
The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
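The interaural timing differences analyzed above are commonly estimated as the lag that maximizes the cross-correlation between the two sensor signals. A minimal sketch of that estimator follows; note the study itself fed calibrated neural networks, not this simple peak-picker.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural (or inter-hydrophone) time difference in
    seconds as the lag of the peak of the cross-correlation between the
    two signals. A positive value means the sound reached `left` first."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))
    return lags[np.argmax(corr)] / fs
```

Combined with the sensor spacing and the sound speed of the medium (about 1500 m/s in water versus 343 m/s in air), the ITD constrains the source azimuth; the much higher underwater sound speed is one reason these cues are weaker underwater.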
Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kipp, C. R.; Bernhard, R. J.
1985-01-01
A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.; Leib, Stewart J.
1999-01-01
An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in a transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.
Sound source localization and segregation with internally coupled ears: the treefrog model
Christensen-Dalsgaard, Jakob
2016-01-01
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384
Application of acoustic radiosity methods to noise propagation within buildings
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter
2005-09-01
The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.
Active noise control using a steerable parametric array loudspeaker.
Tanaka, Nobuo; Tanaka, Motoki
2010-06-01
Active noise control can suppress sound at designated control points, but the sound pressure away from those targeted locations is likely to increase. The reason is clear: a control source normally radiates sound omnidirectionally. To cope with this problem, this paper introduces a parametric array loudspeaker (PAL), which produces a spatially focused sound beam owing to the ultrasound used for the carrier waves, thereby allowing one to suppress the sound pressure at a designated point without causing spillover in the whole sound field. First, the fundamental characteristics of the PAL are reviewed. The scattered pressure in the near field contributed by the source strength of the PAL, which is needed for the design of an active noise control system, is then described. Furthermore, the optimal control law for minimizing the sound pressure at the control points is derived, and the control effect is investigated analytically and experimentally. With a view to tracking a moving target point, a steerable PAL based upon a phased-array scheme is presented, with the result that a moving zone of quiet can be generated without mechanically rotating the PAL. An experiment is finally conducted, demonstrating the validity of the proposed method.
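An optimal control law of the kind derived above minimizes the sound pressure at the control points given the transfer functions from the secondary sources; in discrete form this is a least-squares problem. A minimal sketch, in which `Z` and `p_primary` are generic placeholders rather than the paper's specific near-field PAL expressions:

```python
import numpy as np

def optimal_control_strengths(Z, p_primary):
    """Least-squares secondary-source strengths q minimizing the total
    squared pressure ||p_primary + Z q||^2 at the control points.
    Z[i, j] is the (complex) transfer function from secondary source j
    to control point i; both names are illustrative assumptions."""
    q, *_ = np.linalg.lstsq(Z, -p_primary, rcond=None)
    return q
```

With more control points than secondary sources the residual pressure is generally nonzero; a focused beam such as a PAL's reduces the pressure increase ("spillover") away from the control points.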
Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search
Song, Kai; Liu, Qi; Wang, Qi
2011-01-01
Bionic technology provides new inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, each robot can communicate with the others via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
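The heading-direction navigation described above adjusts velocity and direction according to the deviation between the measured and expected headings. The sketch below is a generic proportional rule illustrating the idea; the gains `k_turn` and `v_max` are illustrative assumptions, not values from the paper:

```python
def heading_step(current_deg, target_deg, k_turn=0.5, v_max=0.3):
    """One step of a heading-based navigation rule: the turn rate is
    proportional to the wrapped heading error, and the forward speed is
    scaled down as the error grows, so the robot turns before it
    advances. k_turn and v_max are illustrative gains only."""
    # Wrap the error into [-180, 180) degrees.
    err = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    turn_rate = k_turn * err                          # degrees per step
    speed = v_max * max(0.0, 1.0 - abs(err) / 180.0)  # m/s, slows on big errors
    return turn_rate, speed
```

The wrap-around step matters: a robot heading 350° with a 10° target should turn 20° clockwise, not 340° the other way.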
200 kHz Commercial Sonar Systems Generate Lower Frequency Side Lobes Audible to Some Marine Mammals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Zhiqun; Southall, Brandon; Carlson, Thomas J.
2014-04-15
The spectral properties of pulses transmitted by three commercially available 200 kHz echo sounders were measured to assess the possibility that sound energy below the center (carrier) frequency might be heard by marine mammals. The study found that all three sounders generated sound at frequencies below the center frequency and within the hearing range of some marine mammals, and that this sound was likely detectable by the animals over limited ranges. However, at standard operating source levels for the sounders, the sound below the center frequency was well below potentially harmful levels. It was concluded that the sounds generated by the sounders could affect the behavior of marine mammals within fairly close proximity to the sources, and that the blanket exclusion of echo sounders from environmental impact analysis based solely on the center frequency output in relation to the range of marine mammal hearing should be reconsidered.
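The lower-frequency side lobes arise because gating a carrier into a finite-duration pulse spreads its energy in frequency. The sketch below illustrates the mechanism for a rectangular-gated 200 kHz pulse; the pulse length and the resulting levels are illustrative, not the measured values for the three commercial sounders:

```python
import numpy as np

def pulse_spectrum_db(f0, duration, fs, n_fft=2 ** 18):
    """Spectrum, in dB relative to the peak, of a rectangular-gated
    sinusoidal pulse. Gating a carrier at f0 into a finite pulse spreads
    energy into side lobes well below the carrier frequency."""
    t = np.arange(int(duration * fs)) / fs
    pulse = np.sin(2.0 * np.pi * f0 * t)
    spec = np.abs(np.fft.rfft(pulse, n_fft))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    db = 20.0 * np.log10(spec / spec.max() + 1e-12)
    return freqs, db
```

For a 0.5 ms pulse the side-lobe envelope around 100 kHz sits tens of decibels below the carrier, which is consistent with the paper's finding that the below-carrier energy is detectable only over limited ranges.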
Ejectable underwater sound source recovery assembly
NASA Technical Reports Server (NTRS)
Irick, S. C. (Inventor)
1974-01-01
An underwater sound source is described that may be ejectably mounted on any mobile device that travels over water, to facilitate the location and recovery of the device when submerged. A length of flexible line maintains a connection between the mobile device and the sound source. During recovery, the sound source is located acoustically. The assembly may be particularly useful in the recovery of spent rocket motors that bury themselves in the ocean floor upon impact.
A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing
NASA Astrophysics Data System (ADS)
Cobos, Maximo; Lopez, Jose J.; Spors, Sascha
2010-12-01
Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
Korucu, M Kemal; Kaplan, Özgür; Büyük, Osman; Güllü, M Kemal
2016-10-01
In this study, we investigate the usability of sound recognition for the source separation of packaging wastes in reverse vending machines (RVMs). For this purpose, an experimental setup equipped with a sound recording mechanism was prepared. Packaging waste sounds generated by three physical impact types, free falling, pneumatic hitting and hydraulic crushing, were separately recorded using two different microphones. To classify the waste types and sizes based on the sound features of the wastes, support vector machine (SVM)- and hidden Markov model (HMM)-based sound classification systems were developed. In the basic experimental setup, in which only the free falling impact type was considered, the SVM and HMM systems provided 100% classification accuracy for both microphones. In the expanded experimental setup, which includes all three impact types, material type classification accuracies were 96.5% for the dynamic microphone and 97.7% for the condenser microphone. When both the material type and the size of the wastes were classified, the accuracy was 88.6% for the microphones. The modeling studies indicated that the hydraulic crushing impact recordings were too noisy for an effective sound recognition application. A detailed analysis of the recognition errors showed that most errors occurred for the hitting impact type. According to the experimental results, the proposed novel approach to the separation of packaging wastes could provide high classification performance for RVMs. Copyright © 2016 Elsevier Ltd. All rights reserved.
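The classification step above maps acoustic features of each impact sound to a waste type. As a dependency-free stand-in for the paper's SVM and HMM classifiers, the sketch below uses a single spectral-centroid feature and a nearest-centroid rule; the class names and signal parameters are illustrative assumptions:

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Single scalar feature: the amplitude-weighted mean frequency."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return (freqs * spec).sum() / spec.sum()

class NearestCentroid:
    """Minimal stand-in for the paper's SVM/HMM classifiers: each class
    is represented by the mean of its training feature vectors, and a
    sample is assigned to the nearest class mean."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, t in zip(X, y) if t == c], axis=0)
                           for c in self.classes_}
        return self
    def predict(self, X):
        return [min(self.classes_,
                    key=lambda c, x=x: np.linalg.norm(x - self.centroids_[c]))
                for x in X]
```

In practice one would use a richer feature set (e.g., cepstral coefficients) and a discriminative classifier, as the paper does, but the pipeline shape is the same: feature extraction followed by supervised classification.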
Dong, Junzi; Colburn, H. Steven; Sen, Kamal
2016-01-01
In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. PMID:26866056
Nystuen, Jeffrey A; Moore, Sue E; Stabeno, Phyllis J
2010-07-01
Ambient sound in the ocean contains quantifiable information about the marine environment. A passive aquatic listener (PAL) was deployed at a long-term mooring site in the southeastern Bering Sea from 27 April through 28 September 2004. This was a chain mooring that generated considerable clanking self-noise; however, the sampling strategy of the PAL filtered out this noise and allowed the background sound field to be quantified for natural signals. Distinctive signals include the sound of wind, drizzle, and rain. These sources dominate the sound budget, and their intensity can be used to quantify wind speed and rainfall rate. The wind speed measurement has an accuracy of +/-0.4 m s(-1) when compared to a buoy-mounted anemometer. The rainfall rate measurement is consistent with a land-based measurement in the Aleutian chain at Cold Bay, AK (170 km south of the mooring location). Other identifiable sounds include ships and short transient tones. The PAL was designed to reject transients in the range important for quantifying wind speed and rainfall, but serendipitously recorded peaks in the sound spectrum between 200 Hz and 3 kHz. Some of these tones are consistent with whale calls, but most are apparently associated with mooring self-noise.
Assessment of sound levels in a neonatal intensive care unit in tabriz, iran.
Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak
2013-03-01
High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.
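The descriptors reported above can be computed from a series of short-interval A-weighted levels: Leq is the energy-equivalent average, L10 the level exceeded 10% of the time, and Lmax the maximum. A minimal sketch of those definitions:

```python
import numpy as np

def level_statistics(spl_dba):
    """Compute Leq (energy-equivalent continuous level), L10 (the level
    exceeded 10% of the time) and Lmax from a series of short-interval
    A-weighted sound pressure levels in dBA."""
    spl = np.asarray(spl_dba, dtype=float)
    leq = 10.0 * np.log10(np.mean(10.0 ** (spl / 10.0)))  # energy average
    l10 = np.percentile(spl, 90)                          # exceeded 10% of the time
    return leq, l10, spl.max()
```

Because Leq averages energies rather than decibels, it is dominated by the loudest intervals: a series of 50 and 70 dBA samples has an Leq of about 67 dBA, not 60.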
Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).
Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M
2013-07-01
The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide-baseline five-hydrophone array, and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels, in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels than other group compositions.
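Back-calculating a source level combines the received level with the transmission loss over the propagation path: SL = RL + TL. The study measured site-specific transmission loss; the sketch below substitutes a generic geometric-spreading law (spherical spreading, 20 log10 r, by default) as a stand-in:

```python
import math

def source_level(received_db, range_m, tl_per_log10=20.0):
    """Back-calculate a source level (dB re 1 uPa @ 1 m) from a received
    level and the source range, assuming geometric spreading of the form
    TL = tl_per_log10 * log10(r). Spherical spreading (20 log10 r) is a
    generic stand-in for the site-specific loss measured in the study."""
    return received_db + tl_per_log10 * math.log10(range_m)
```

For example, a call received at 120 dB re 1 μPa from 100 m under spherical spreading back-calculates to a 160 dB re 1 μPa @ 1 m source level, near the median vocalization level reported above; this is why measuring rather than assuming the spreading law matters.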
Dynamic Spatial Hearing by Human and Robot Listeners
NASA Astrophysics Data System (ADS)
Zhong, Xuan
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an Extended Kalman Filter can be used to localize sound sources recursively. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
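The recursive localization idea in the fourth experiment fuses each new head-relative bearing with the known head rotation. The sketch below reduces it to a scalar Kalman filter estimating a static world-frame bearing; this captures the fuse-as-you-rotate principle only, whereas the dissertation's formulation is a full Extended Kalman Filter over binaural cues.

```python
def scalar_kalman_bearing(observations, q=1e-4, r=0.05):
    """Recursively estimate a fixed world-frame source bearing from noisy
    head-relative bearings plus the known head yaw. observations is an
    iterable of (head_yaw, relative_bearing) pairs in radians; q and r
    are illustrative process and measurement variances."""
    x, p = None, 1.0
    for yaw, rel in observations:
        z = yaw + rel              # measured world-frame bearing
        if x is None:
            x = z                  # initialize from the first measurement
            continue
        p += q                     # predict: the source bearing is static
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # measurement update
        p *= (1.0 - k)
    return x
```

Each rotation of the head yields a new, differently-corrupted view of the same world-frame bearing, so the estimate tightens as the listener turns, which is the benefit motion provides over a single static binaural snapshot.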
Wave field synthesis of moving virtual sound sources with complex radiation properties.
Ahrens, Jens; Spors, Sascha
2011-11-01
An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.
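The field of a uniformly moving monopole invoked above is commonly written in terms of the retarded (emission) time τ; one standard textbook form (a general statement of the moving point-source field, not the paper's closed-form driving signal) is:

```latex
% Pressure of a point monopole of strength q(t) moving subsonically along x_s(t):
p(\mathbf{x},t) \;=\; \frac{\rho_0}{4\pi}\,
  \frac{\partial}{\partial t}\!\left[\frac{q(\tau)}{R(\tau)\,\bigl(1-M_r(\tau)\bigr)}\right],
\qquad R(\tau) \;=\; \lvert \mathbf{x}-\mathbf{x}_s(\tau)\rvert \;=\; c\,(t-\tau),
```

where M_r is the component of the source Mach number toward the observer; the (1 - M_r) Doppler factor produces the amplitude and phase distribution on the spherical wave fronts that the driving-signal derivation exploits.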
NASA Astrophysics Data System (ADS)
Sato, Shin-Ichi; Prodi, Nicola; Sakai, Hiroyuki
2004-05-01
To clarify the relationship between the sound fields on the stage and in the orchestra pit, we conducted acoustical measurements in a typical historical opera house, the Teatro Comunale of Ferrara, Italy. Orthogonal factors based on the theory of subjective preference and other related factors were analyzed. First, the sound fields for a singer on the stage in relation to the musicians in the pit were analyzed. Then, the sound fields for performers in the pit in relation to the singers on the stage were considered. Because the physical factors vary with the location of the sound source, performers can move on the stage or in the pit to find the preferred sound field.
Lercher, Peter; De Coensel, Bert; Dekonink, Luc; Botteldooren, Dick
2017-01-01
Sufficient data attest to the prevalence of sound exposure from mixed traffic sources in many nations. Furthermore, consideration of the potential effects of combined sound exposure is required in legal procedures such as environmental health impact assessments. Nevertheless, current practice still uses single-source exposure-response functions. It is silently assumed that those standard exposure-response curves also accommodate mixed exposures, although some evidence from experimental and field studies casts doubt on this practice. The ALPNAP study population (N = 1641) contains sufficient subgroups with combinations of rail-highway, highway-main road and rail-highway-main road sound exposure. In this paper we apply several approaches suggested in the literature to investigate exposure-response curves and their major determinants in the case of exposure to multiple traffic sources. High/moderate annoyance and full-scale mean annoyance served as outcomes. The results show several limitations of the current approaches. Even given the inherent methodological limitations (energy-equivalent summation of sound, rating of overall annoyance), considering the main contextual factors that jointly occur with the sources (such as vibration and air pollution), coping activities, and judgments of the wider-area soundscape increases the variance explained from up to 8% (bivariate) and up to 15% (base adjustments) to up to 55% (full contextual model). The added predictors vary significantly depending on the source combination (e.g., significant vibration effects with main road/railway but not highway). Although no significant interactions were found, the observed additive effects are of public health importance. Especially in the three-source exposure situation, overall annoyance is already high at lower levels, and the contribution of the acoustic indicators is small compared with the non-acoustic and contextual predictors.
Noise mapping needs to go down to levels of 40 dBA Lden to ensure the protection of quiet areas and to prevent the silent “filling up” of these areas with new sound sources. Eventually, to better predict annoyance in the exposure range between 40 and 60 dBA and to support the protection of quiet areas in city and rural planning, sound indicators need to be oriented toward the noticeability of sound and to consider other traffic-related by-products (air quality, vibration, coping strain) in future studies and environmental impact assessments. PMID:28632198
A real-time biomimetic acoustic localizing system using time-shared architecture
NASA Astrophysics Data System (ADS)
Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn
2008-04-01
In this paper a real-time sound source localizing system is proposed, based on previously developed mammalian auditory models. Traditionally, following the models, which use interaural time delay (ITD) estimates, the amount of parallel computation needed by a system to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. Therefore a new approach using a time-shared architecture is introduced. The proposed architecture is a purely sample-driven digital system, and it closely follows the continuous-time approach described in the models. Rather than having dedicated hardware for each frequency channel, a specialized core channel, shared across all frequency bands, is used. With an optimized execution time well below the system's sample period, the proposed time-shared solution allows the same number of virtual channels to be processed as dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important for efficient hardware implementation of a real-time biomimetic sound source localization system.
Complete de-Dopplerization and acoustic holography for external noise of a high-speed train.
Yang, Diange; Wen, Junjie; Miao, Feng; Wang, Ziteng; Gu, Xiaoan; Lian, Xiaomin
2016-09-01
Identification and measurement of moving sound sources are the bases for vehicle noise control. Acoustic holography has been applied in successfully identifying the moving sound source since the 1990s. However, due to the high demand for the accuracy of holographic data, currently the maximum velocity achieved by acoustic holography is just above 100 km/h. The objective of this study was to establish a method based on the complete Morse acoustic model to restore the measured signal in high-speed situations, and to propose a far-field acoustic holography method applicable for high-speed moving sound sources. Simulated comparisons of the proposed far-field acoustic holography with complete Morse model, the acoustic holography with simplified Morse model and traditional delay-and-sum beamforming were conducted. Experiments with a high-speed train running at the speed of 278 km/h validated the proposed far-field acoustic holography. This study extended the applications of acoustic holography to high-speed situations and established the basis for quantitative measurements of far-field acoustic holography.
Sounds and source levels from bowhead whales off Pt. Barrow, Alaska.
Cummings, W C; Holliday, D V
1987-09-01
Sounds were recorded from bowhead whales migrating past Pt. Barrow, AK, to the Canadian Beaufort Sea. They mainly consisted of various low-frequency (25- to 900-Hz) moans and well-defined sound sequences organized into "song" (20-5000 Hz) recorded with our 2.46-km hydrophone array suspended from the ice. Songs were composed of up to 20 repeated phrases (mean, 10) that lasted up to 146 s (mean, 66.3). Several bowhead whales often were within acoustic range of the array at once, but usually only one sang at a time. Vocalizations exhibited diurnal peaks of occurrence (0600-0800, 1600-1800 h). Sounds that were located in the horizontal plane had peak source spectrum levels as follows: 44 moans, 129-178 dB re: 1 microPa, 1 m (median, 159); 3 garglelike utterances, 152, 155, and 169 dB; 33 songs, 158-189 dB (median, 177), all presumably from different whales. Based on ambient noise levels, measured total propagation loss, and whale sound source levels, our detection of whale sounds was theoretically noise-limited beyond 2.5 km (moans) and beyond 10.7 km (songs), a model supported by actual localizations. This study showed that over much of the shallow Arctic and sub-Arctic waters, underwater communications of the bowhead whale would be limited to much shorter ranges than for other large whales in lower latitude, deep-water regions.
Wang, Chong
2018-03-01
In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite sized panel. The focus is the forced sound transmission performance that predominates in the frequency range below the coincidence frequency. With the point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite sized panel does not necessarily converge to that of an infinite panel, especially when the source is far from the panel. For practical applications, the normal-incidence sound transmission loss expression for plane incident waves can be used if the distance d between the source and panel and the panel surface area S satisfy d/S>0.5. When d/S ≈ 0.1, the diffuse-field sound transmission loss expression may be a good approximation. An empirical expression for d/S=0 is also given.
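For plane incident waves below coincidence, the normal-incidence transmission loss referred to above is given by the standard mass law; a small sketch of that textbook formula (illustrative panel values, not the paper's spherical-wave result):

```python
import math

RHO0_C = 415.0  # characteristic impedance of air (~Pa*s/m at 20 C)

def mass_law_tl_normal(freq_hz, surface_density):
    """Normal-incidence mass-law transmission loss in dB:
    TL = 20*log10(pi * f * m'' / (rho0 * c)),
    with m'' the panel surface density in kg/m^2."""
    return 20.0 * math.log10(math.pi * freq_hz * surface_density / RHO0_C)

# Illustrative: a 10 kg/m^2 panel at 1 kHz (about 37.6 dB per the mass law).
tl = mass_law_tl_normal(1000.0, 10.0)
print(round(tl, 1))
```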
Reciprocity-based experimental determination of dynamic forces and moments: A feasibility study
NASA Technical Reports Server (NTRS)
Ver, Istvan L.; Howe, Michael S.
1994-01-01
BBN Systems and Technologies has been tasked by the Georgia Tech Research Center to carry out Task Assignment No. 7 for the NASA Langley Research Center, to explore the feasibility of 'In-Situ Experimental Evaluation of the Source Strength of Complex Vibration Sources Utilizing Reciprocity.' The task was carried out under NASA Contract No. NAS1-19061. In flight it is not feasible to connect the vibration sources to their mounting points on the fuselage through force gauges to measure dynamic forces and moments directly. However, it is possible to measure the interior sound field or vibration response caused by these structure-borne sound sources at many locations and invoke the principle of reciprocity to predict the dynamic forces and moments. The work carried out in the framework of Task 7 was directed at exploring the feasibility of reciprocity-based measurements of vibration forces and moments.
NASA Astrophysics Data System (ADS)
Blanc, Elisabeth; Rickel, Dwight
1989-06-01
Different wave fronts affected by significant nonlinearities have been observed in the ionosphere by a pulsed HF sounding experiment at a distance of 38 km from the source point of a 4800-kg ammonium nitrate and fuel oil (ANFO) explosion on the ground. These wave fronts are revealed by partial reflections of the radio sounding waves. A small-scale irregular structure has been generated by a first wave front at the level of a sporadic E layer which characterized the ionosphere at the time of the experiment. The time scale of these fluctuations is about 1 to 2 s; its lifetime is about 2 min. Similar irregularities were also observed at the level of a second wave front in the F region. This structure appears also as diffusion on a continuous wave sounding at horizontal distances of the order of 200 km from the source. In contrast, a third front unaffected by irregularities may originate from the lowest layers of the ionosphere or from a supersonic wave front propagating at the base of the thermosphere. The origin of these structures is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanson, C.E.; Abbot, P.; Dyer, I.
1993-01-01
Noise levels from magnetically levitated trains (maglev) at very high speed may be high enough to cause environmental noise impact in residential areas. Aeroacoustic sources dominate the sound at high speeds and guideway vibrations generate noticeable sound at low speed. In addition to high noise levels, the startle effect resulting from the sudden onset of sound from a rapidly moving nearby maglev vehicle may lead to increased annoyance among neighbors of a maglev system. The report provides a base for determining the noise consequences and potential mitigation for a high speed maglev system in populated areas of the United States. Four areas are included in the study: (1) definition of noise sources; (2) development of noise criteria; (3) development of design guidelines; and (4) recommendations for a noise testing facility.
An experimental comparison of various methods of nearfield acoustic holography
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
2017-05-19
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. The reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent-sources-model-based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent-sources-model-based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.
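The Tikhonov-regularized equivalent-sources step amounts to a damped least-squares inversion of the hologram data; a minimal numpy sketch with a random stand-in transfer matrix (hypothetical sizes and regularization parameter, not the study's implementation):

```python
import numpy as np

def tikhonov_solve(G, p, alpha):
    """Solve min ||G q - p||^2 + alpha*||q||^2 for source strengths q:
    q = (G^H G + alpha*I)^{-1} G^H p."""
    GhG = G.conj().T @ G
    return np.linalg.solve(GhG + alpha * np.eye(GhG.shape[0]), G.conj().T @ p)

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 16))                 # hologram-to-source transfer matrix (stand-in)
q_true = rng.standard_normal(16)                  # equivalent source strengths
p = G @ q_true + 0.01 * rng.standard_normal(64)   # noisy hologram pressures
q = tikhonov_solve(G, p, alpha=1e-3)
print(np.max(np.abs(q - q_true)))
```

The regularization parameter alpha trades reconstruction fidelity against noise amplification; the L-curve, generalized cross validation, and the Morozov discrepancy principle compared in the study are alternative rules for choosing it.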
Noise Source Identification in a Reverberant Field Using Spherical Beamforming
NASA Astrophysics Data System (ADS)
Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang
Identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of that coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to an exterior sound field, reflections are added to the sound field. Therefore, the source location estimated by conventional methods may give unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.
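As a generic illustration of beamforming-based source identification, a planar narrowband delay-and-sum sketch is shown below (illustrative circular-array geometry and frequency, not the spherical beamforming method proposed here): steering phases are scanned and the output power peaks at the source bearing.

```python
import numpy as np

f0, c = 1000.0, 343.0            # tone frequency (Hz), speed of sound (m/s)
n_mics, radius = 8, 0.1          # circular array: 8 microphones, 0.1 m radius
ang = np.linspace(0.0, 2.0 * np.pi, n_mics, endpoint=False)
mics = radius * np.stack([np.cos(ang), np.sin(ang)], axis=1)

def delays(theta):
    """Far-field plane-wave arrival delays (s) at each microphone."""
    u = np.array([np.cos(theta), np.sin(theta)])
    return -(mics @ u) / c

# Narrowband phasors received from a source at a 60-degree bearing.
true_theta = np.deg2rad(60.0)
x = np.exp(-2j * np.pi * f0 * delays(true_theta))

# Delay-and-sum: steer over candidate bearings; power peaks at the source.
scan = np.deg2rad(np.arange(0.0, 360.0, 1.0))
power = [abs(np.sum(x * np.exp(2j * np.pi * f0 * delays(th)))) for th in scan]
est_deg = int(np.argmax(power))  # 1-degree scan step, so index == degrees
print(est_deg)  # 60
```

In a reverberant field, reflections add coherent arrivals from other directions, which is exactly why such free-field steering can mislocalize sources, as the abstract argues.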
Electrophysiological correlates of cocktail-party listening.
Lewald, Jörg; Getzmann, Stephan
2015-10-01
Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations. Copyright © 2015 Elsevier B.V. All rights reserved.
Locating arbitrarily time-dependent sound sources in three dimensional space in real time.
Wu, Sean F; Zhu, Na
2010-08-01
This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three dimensional (3D) space. Locations of acoustic sources are indicated by Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling of acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal to noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance and frequency on the spatial resolution and accuracy of source localizations. Based on these results, a simple device that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera is fabricated. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moves in space, even to positions behind the measurement microphones. Practical limitations of this method are discussed.
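The triangulation step in such a four-microphone scheme can be sketched as a Gauss-Newton fit of a 3D source position to the measured time differences of arrival (illustrative geometry and a hypothetical solver, not the authors' hybrid algorithm):

```python
import numpy as np

# Four microphones: one at the origin and one on each orthogonal axis,
# mimicking the probe geometry described above (spacing is illustrative).
MICS = np.array([[0.0, 0.0, 0.0],
                 [0.2, 0.0, 0.0],
                 [0.0, 0.2, 0.0],
                 [0.0, 0.0, 0.2]])
C = 343.0  # speed of sound in air, m/s

def tdoas(src):
    """Time differences of arrival (s) relative to microphone 0."""
    d = np.linalg.norm(MICS - src, axis=1)
    return (d[1:] - d[0]) / C

def locate(tdoa_meas, guess, iters=50):
    """Gauss-Newton fit of a 3D source position to measured TDOAs."""
    x = np.asarray(guess, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(MICS - x, axis=1)
        resid = (d[1:] - d[0]) / C - tdoa_meas
        units = (x - MICS) / d[:, None]      # unit vectors mic -> source
        J = (units[1:] - units[0]) / C       # Jacobian of the TDOAs
        x = x - np.linalg.lstsq(J, resid, rcond=None)[0]
    return x

true_src = np.array([1.0, 0.5, 0.3])
est = locate(tdoas(true_src), guess=[0.9, 0.6, 0.4])
print(np.round(est, 3))
```

In practice the TDOAs would come from cross-correlating de-noised microphone signals; here they are computed exactly from the assumed source position so the fit can be checked.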
Bionic Modeling of Knowledge-Based Guidance in Automated Underwater Vehicles.
1987-06-24
bugs and their foraging movements are heard by the sound of rustling leaves or rhythmic wing beats. ASYMMETRY OF EARS: The faces of owls have captured... sound source without moving. The barn owl has binaural and monaural cues, as well as cues that operate in relative motion when either the target or the... owl moves. Table 1 lists the cues. Table 1. Sound Localization Parameters Used by the Barn Owl. BINAURAL PARAMETERS: 1. the
Aircraft noise propagation. [sound diffraction by wings
NASA Technical Reports Server (NTRS)
Hadden, W. J.; Pierce, A. D.
1978-01-01
Sound diffraction experiments conducted at NASA Langley Research Center to study the acoustical implications of the engine-over-wing configuration (noise shielding by the wing) and to provide a data base for assessing various theoretical approaches to the problem of aircraft noise reduction are described. Topics explored include the theory of sound diffraction around screens and wedges; the scattering of spherical waves by rectangular patches; plane wave diffraction by a wedge with finite impedance; and the effects of ambient flow and distributed sources.
2007-12-01
except for the dive zero time, which needed to be programmed during the cruise when the deployment schedule dates were confirmed. ... ACM - Aanderaa ACM... guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally... battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen, Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment was developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons with the model showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is in contrast to earlier studies, where Reference Sound Sources (RSS) with known sound power levels were used. Comparisons of the modeling results with the measurements in the mockup showed excellent agreement. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between the ECLSS wall and the mockup wall. The effects of sealing the gap and adding sound-absorptive treatment to the ECLSS wall were also modeled and validated.
Statistics of natural reverberation enable perceptual separation of sound and space
Traer, James; McDermott, Josh H.
2016-01-01
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
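The frequency-dependent exponential decay described above is what standard reverberation-time estimation exploits; a broadband sketch that synthesizes an exponentially decaying IR and recovers its T60 by Schroeder backward integration (illustrative parameters, not the measured IR corpus):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000
t60 = 0.6                              # designed reverberation time (s)
t = np.arange(int(1.2 * fs)) / fs
# Decaying Gaussian noise: the energy envelope falls 60 dB over t60 seconds.
ir = rng.standard_normal(t.size) * 10.0 ** (-3.0 * t / t60)

# Schroeder backward integration yields the energy decay curve (EDC).
edc = np.cumsum(ir[::-1] ** 2)[::-1]
edc_db = 10.0 * np.log10(edc / edc[0])

# Fit the -5 to -25 dB span and extrapolate to -60 dB (a T20-style estimate).
i5 = int(np.argmax(edc_db <= -5.0))
i25 = int(np.argmax(edc_db <= -25.0))
slope = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)[0]   # decay rate, dB/s
t60_est = -60.0 / slope
print(round(t60_est, 2))
```

Applying this per frequency band would reproduce the study's observation that mid frequencies reverberate longest while high and low frequencies decay faster.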
NASA Technical Reports Server (NTRS)
Wilson, L. N.
1970-01-01
The mathematical bases for the direct measurement of sound source intensities in turbulent jets using the crossed-beam technique are discussed in detail. It is found that the problems associated with such measurements lie in three main areas: (1) measurement of the correct flow covariance, (2) accounting for retarded time effects in the measurements, and (3) transformation of measurements to a moving frame of reference. The determination of the particular conditions under which these problems can be circumvented is the main goal of the study.
Zhao, Sipei; Qiu, Xiaojun; Cheng, Jianchun
2015-09-01
This paper proposes a different method for calculating the sound field diffracted by a rigid barrier based on the integral equation method, in which a virtual boundary is assumed above the rigid barrier to divide the whole space into two subspaces. Based on the Kirchhoff-Helmholtz equation, the sound field in each subspace is determined from the source inside it and the boundary conditions on the surface, and then the diffracted sound field is obtained by using the continuity conditions on the virtual boundary. Simulations are carried out to verify the feasibility of the proposed method. Compared to the MacDonald method and other existing methods, the proposed method is a rigorous solution for the whole space and is also much easier to understand.
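The Kirchhoff-Helmholtz equation invoked above expresses the pressure in each subspace in terms of boundary values of the pressure and its normal derivative; in a standard form (sign conventions vary with the chosen normal direction):

```latex
p(\mathbf{r}) \;=\; p_{\mathrm{inc}}(\mathbf{r})
  + \int_{S}\!\left[
      G(\mathbf{r},\mathbf{r}_s)\,\frac{\partial p(\mathbf{r}_s)}{\partial n}
    - p(\mathbf{r}_s)\,\frac{\partial G(\mathbf{r},\mathbf{r}_s)}{\partial n}
    \right]\mathrm{d}S ,
\qquad
G(\mathbf{r},\mathbf{r}_s)=\frac{e^{ik\lvert\mathbf{r}-\mathbf{r}_s\rvert}}{4\pi\lvert\mathbf{r}-\mathbf{r}_s\rvert},
```

with the incident term present only in the subspace containing the source; matching p and its normal derivative across the virtual boundary then yields the diffracted field.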
NASA Astrophysics Data System (ADS)
Nur Farid, Mifta; Arifianto, Dhany
2016-11-01
A person suffering from hearing loss can be helped by hearing aids, and binaural hearing aids perform best because they resemble the human auditory system. In a conversation at a cocktail party, a person can focus on a single conversation even though the background sound and other people's conversations are quite loud. This phenomenon is known as the cocktail party effect. Earlier studies have explained that binaural hearing contributes importantly to the cocktail party effect. In this study, two sound sources are separated from binaural input captured by two microphone sensors, based on both binaural cues, interaural time difference (ITD) and interaural level difference (ILD), using a binary mask. The ITD is estimated by a cross-correlation method, in which the ITD is represented as the time delay of the correlation peak in each time-frequency unit. The binary mask is estimated from the pattern of ITD and ILD relative to the strength of the target, computed statistically using probability density estimation. The sound source separation performs well, with speech intelligibility of 86% correct words and an SNR of 3 dB.
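The cross-correlation estimate of ITD described above can be sketched broadband (a minimal illustration of the peak-lag idea, not the paper's per-time-frequency-unit processing):

```python
import numpy as np

def itd_samples(left, right):
    """Estimate the interaural time difference in samples as the lag of the
    cross-correlation peak; positive when the right-ear signal lags."""
    xcorr = np.correlate(right, left, mode="full")
    return int(np.argmax(xcorr)) - (len(left) - 1)

rng = np.random.default_rng(2)
sig = rng.standard_normal(4000)
lag = 7                                              # simulated ITD of 7 samples
left = sig
right = np.concatenate([np.zeros(lag), sig[:-lag]])  # right ear delayed
print(itd_samples(left, right))  # 7
```

In the binaural-mask scheme this per-unit lag, together with the ILD, would be compared statistically against the target pattern to decide which time-frequency units to keep.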
Numerical Modelling of the Sound Fields in Urban Streets with Diffusely Reflecting Boundaries
NASA Astrophysics Data System (ADS)
KANG, J.
2002-12-01
A radiosity-based theoretical/computer model has been developed to study the fundamental characteristics of the sound fields in urban streets resulting from diffusely reflecting boundaries, and to investigate the effectiveness of architectural changes and urban design options on noise reduction. Comparison between the theoretical prediction and the measurement in a scale model of an urban street shows very good agreement. Computations using the model in hypothetical rectangular streets demonstrate that though the boundaries are diffusely reflective, the sound attenuation along the length is significant, typically at 20-30 dB/100 m. The sound distribution in a cross-section is generally even unless the cross-section is very close to the source. In terms of the effectiveness of architectural changes and urban design options, it has been shown that over 2-4 dB extra attenuation can be obtained either by increasing boundary absorption evenly or by adding absorbent patches on the façades or the ground. Reducing building height has a similar effect. A gap between buildings can provide about 2-3 dB extra sound attenuation, especially in the vicinity of the gap. The effectiveness of air absorption on increasing sound attenuation along the length could be 3-9 dB at high frequencies. If a treatment is effective with a single source, it is also effective with multiple sources. In addition, it has been demonstrated that if the façades in a street are diffusely reflective, the sound field of the street does not change significantly whether the ground is diffusely or geometrically reflective.
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)
NASA Astrophysics Data System (ADS)
Rollo, Audrey K.; Higgs, Dennis M.
2005-04-01
A fundamental role of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, whether they truly localize sound remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed and mean path angle to the speaker will also be presented. Results suggest strong localization of the round goby to a sound source, with some differential sound specificity.
Minke whale song, spacing, and acoustic communication on the Great Barrier Reef, Australia
NASA Astrophysics Data System (ADS)
Gedamke, Jason
An inquisitive population of minke whale (Balaenoptera acutorostrata ) that concentrates on the Great Barrier Reef during its suspected breeding season offered a unique opportunity to conduct a multi-faceted study of a little-known Balaenopteran species' acoustic behavior. Chapter one investigates whether the minke whale is the source of an unusual, complex, and stereotyped sound recorded, the "star-wars" vocalization. A hydrophone array was towed from a vessel to record sounds from circling whales for subsequent localization of sound sources. These acoustic locations were matched with shipboard and in-water observations of the minke whale, demonstrating that the minke whale was the source of this unusual sound. Spectral and temporal features of this sound and the source levels at which it is produced are described. The repetitive "star-wars" vocalization appears similar to the songs of other whale species and has characteristics consistent with reproductive advertisement displays. Chapter two investigates whether song (i.e. the "star-wars" vocalization) has a spacing function through passive monitoring of singer spatial patterns with a moored five-sonobuoy array. Active song playback experiments to singers were also conducted to further test song function. Through a nearest-neighbor analysis and animated tracks of singer movements, this study demonstrated that singers naturally maintain spatial separation from one another. In response to active song playbacks, singers generally moved away and repeated song more quickly, suggesting that song repetition interval may help regulate spatial interaction and singer separation. These results further indicate the Great Barrier Reef may be an important reproductive habitat for this species. Chapter three investigates whether song is part of a potentially graded repertoire of acoustic signals.
Utilizing both vessel-based recordings and remote recordings from the sonobuoy array, temporal and spectral features, source levels, and associated contextual data of recorded sounds were analyzed. Two categories of sound are described here: (1) patterned song, which was regularly repeated in one of three patterns: slow, fast, and rapid-clustered repetition, and (2) non-patterned "social" sounds recorded from gregarious assemblages of whales. These discrete acoustic signals may comprise a graded system of communication (Slow/fast song → Rapid-clustered song → Social sounds) that is related to the spacing between whales.
A theoretical study for the propagation of rolling noise over a porous road pavement
NASA Astrophysics Data System (ADS)
Keung Lui, Wai; Ming Li, Kai
2004-07-01
A simplified model based on the study of sound diffracted by a sphere is proposed for investigating the propagation of noise in a hornlike geometry between porous road surfaces and rolling tires. The simplified model is verified by comparing its predictions with the published numerical and experimental results of studies on the horn amplification of sound over a road pavement. In a parametric study, a point monopole source is assumed to be localized on the surface of a tire. In the frequency range of interest, a porous road pavement can effectively reduce the level of amplified sound due to the horn effect. It has been shown that an increase in the thickness and porosity of a porous layer, or the use of a double layer of porous road pavement, attenuates the horn amplification of sound. However, a decrease in the flow resistivity of a porous road pavement does little to reduce the horn amplification of sound. It has also been demonstrated that the horn effect over a porous road pavement is less dependent on the angular position of the source on the surface of tires.
Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae
Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.
2012-01-01
It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063
Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H
2015-09-01
To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test and sound source localization quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
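Root-mean-square error over a loudspeaker arc, as used here to quantify localization, can be sketched as follows (the target angles and response errors below are made-up illustrative values, not data from the study):

```python
import math

def rms_localization_error(responses_deg, targets_deg):
    """RMS error between judged and actual source azimuths, in degrees."""
    diffs = [(r - t) ** 2 for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(diffs) / len(diffs))

# 13 loudspeakers spanning a 180-degree frontal arc at 15-degree spacing.
targets = [15 * i - 90 for i in range(13)]
errors = [5, -10, 0, 15, -5, 0, 10, -15, 5, 0, -5, 10, 0]
responses = [t + e for t, e in zip(targets, errors)]
rmse = rms_localization_error(responses, targets)
```

A perfect localizer scores 0; chance performance on such an arc yields an RMSE of several tens of degrees.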
NASA Astrophysics Data System (ADS)
Zuo, Zhifeng; Maekawa, Hiroshi
2014-02-01
The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The newly discovered generation mechanism of the third sound, due to the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.
An intelligent artificial throat with sound-sensing ability based on laser induced graphene
Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling
2017-01-01
Traditional sound sources and sound detectors are usually independent and discrete in the human hearing range. To minimize the device size and integrate it with wearable electronics, there is an urgent requirement of realizing the functional integration of generating and detecting sound in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat will significantly assist for the disabled, because the simple throat vibrations such as hum, cough and scream with different intensity or frequency from a mute person can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantage of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open practical applications in voice control, wearable electronics and many other areas. PMID:28232739
Reconstruction of sound source signal by analytical passive TR in the environment with airflow
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu
2017-03-01
In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining an analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then fed to the AP-TR, reconstructing more accurate sound source signals in an environment with airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct sound source signals in 3D space in an environment with airflow, instead of numerical TR. Experiments on reconstructing the sound source signals of a pair of loudspeakers were conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.
Lee, Kyoung-Ryul; Jang, Sung Hwan; Jung, Inhwa
2018-08-10
We investigated the acoustic performance of electrostatic sound-generating devices consisting of bi-layer graphene on polyimide film. The total sound pressure level (SPL) of the sound generated from the devices was measured as a function of source frequency by sweeping, and frequency spectra were measured at 1/3 octave band frequencies. The relationship between various operation conditions and total SPL was determined. In addition, the effects of changing voltage level, adding a DC offset, and using two pairs of electrodes were evaluated. It should be noted that operation with two pairs of electrodes improved sound generation by about 10 dB across all frequency ranges compared with conventional operation. As for the sound-generating capability, total SPL was 70 dBA at 4 kHz when an AC voltage of 100 Vpp was applied with a DC offset of 100 V. Acoustic characteristics differed from other types of graphene-based sound generators, such as graphene thermoacoustic devices and graphene polyvinylidene fluoride devices. The effects of diameter and distance between electrodes were also studied, and we found that diameter greatly influenced the frequency response. We anticipate that the design information provided in this paper, in addition to describing key parameters of electrostatic sound-generating devices, will facilitate the commercial development of electrostatic sound-generating systems.
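For readers unfamiliar with SPL arithmetic, the dB figures above combine on a power basis, not by simple addition; a minimal sketch (the 20 µPa reference is the standard for airborne sound, not a value from this paper):

```python
import math

P_REF = 20e-6  # standard reference pressure in air, 20 micropascals

def spl_db(p_rms):
    """Sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(p_rms / P_REF)

def total_spl(band_levels_db):
    """Total SPL of incoherent band levels, summed on a power basis."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in band_levels_db))

# Two equal, incoherent 70 dB bands add to about 73 dB, not 140 dB.
combined = total_spl([70.0, 70.0])
```

This is why a 10 dB improvement, as reported above, corresponds to a tenfold increase in radiated acoustic power.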
Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.
2010-01-01
Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between sound signals being received by the left and right ear. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained using the Spike Timing Dependent Plasticity learning rule using experimentally observed Head Related Transfer Function data in an adult domestic cat. The results presented demonstrate how the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, it will be demonstrated how software-based simulations of the model incur significant computation times. The paper thus also addresses preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance. PMID:20802855
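The ITD cue that the MSO model extracts varies with azimuth roughly as in the classic spherical-head (Woodworth) formula; a sketch with an assumed head radius (the constant below is a textbook average, not a parameter from this paper):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C
HEAD_RADIUS = 0.0875    # m, assumed average adult head radius

def woodworth_itd(azimuth_deg):
    """ITD = (r/c) * (theta + sin(theta)) for a rigid spherical head."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

max_itd = woodworth_itd(90.0)  # fully lateral source, roughly 0.65 ms
```

The sub-millisecond scale of these delays is what makes hardware acceleration attractive for fine angular resolutions.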
NASA Astrophysics Data System (ADS)
Gupta, Navarun
2003-10-01
One of the most popular techniques for creating spatialized virtual sounds is based on the use of Head-Related Transfer Functions (HRTFs). HRTFs are signal processing models that represent the modifications undergone by the acoustic signal as it travels from a sound source to each of the listener's eardrums. These modifications are due to the interaction of the acoustic waves with the listener's torso, shoulders, head and pinnae, or outer ears. As such, HRTFs are somewhat different for each listener. For a listener to perceive synthesized 3-D sound cues correctly, the synthesized cues must be similar to the listener's own HRTFs. One can measure individual HRTFs using specialized recording systems; however, these systems are prohibitively expensive and restrict the portability of the 3-D sound system. HRTF-based systems also face several computational challenges. This dissertation presents an alternative method for the synthesis of binaural spatialized sounds. The sound entering the pinna undergoes several reflective, diffractive and resonant phenomena, which determine the HRTF. Using signal processing tools, such as Prony's signal modeling method, an appropriate set of time delays and a resonant frequency were used to approximate the measured Head-Related Impulse Responses (HRIRs). Statistical analysis was used to derive empirical equations describing how the reflections and resonances are determined by the shape and size of the pinna features obtained from 3D images of 15 experimental subjects modeled in the project. These equations were used to yield "Model HRTFs" that can create elevation effects. Listening tests conducted on 10 subjects show that these model HRTFs are 5% more effective than generic HRTFs when it comes to localizing sounds in the frontal plane.
The number of reversals (perception of sound source above the horizontal plane when actually it is below the plane and vice versa) was also reduced by 5.7%, showing the perceptual effectiveness of this approach. The model is simple, yet versatile because it relies on easy to measure parameters to create an individualized HRTF. This low-order parameterized model also reduces the computational and storage demands, while maintaining a sufficient number of perceptually relevant spectral cues.
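The delay-and-resonance idea behind the "Model HRTFs" described above can be caricatured as a sparse impulse response whose delayed taps carve elevation-dependent spectral notches; the tap delays and gains below are illustrative, not the fitted values from the dissertation:

```python
import numpy as np

def toy_hrir(fs, delays_s, gains, length=256):
    """Direct impulse plus delayed, scaled pinna 'reflections'."""
    h = np.zeros(length)
    h[0] = 1.0  # direct path
    for d, g in zip(delays_s, gains):
        n = int(round(d * fs))
        if 0 < n < length:
            h[n] += g
    return h

fs = 44100
h = toy_hrir(fs, delays_s=[80e-6, 150e-6, 300e-6], gains=[0.6, -0.4, 0.2])
# The delayed taps produce comb-like notches in the magnitude spectrum,
# mimicking the pinna cues that convey elevation.
spectrum = np.abs(np.fft.rfft(h))
```

A fitted model would additionally shape the response with a resonance, as the dissertation does via Prony's method.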
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
Converting a Monopole Emission into a Dipole Using a Subwavelength Structure
NASA Astrophysics Data System (ADS)
Fan, Xu-Dong; Zhu, Yi-Fan; Liang, Bin; Cheng, Jian-chun; Zhang, Likun
2018-03-01
High-efficiency emission of multipoles is unachievable by a source much smaller than the wavelength, preventing compact acoustic devices for generating directional sound beams. Here, we present a primary scheme towards solving this problem by numerically and experimentally enclosing a monopole sound source in a structure with a dimension of around 1/10 of the sound wavelength to emit a dipolar field. The radiated sound power is found to be more than twice that of a bare dipole. Our study of efficient emission of directional low-frequency sound from a monopole source in a subwavelength space may have applications such as focused ultrasound for imaging, directional underwater sound beams, miniaturized sonar, etc.
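The textbook construction of a dipole, two closely spaced opposite-phase monopoles, shows the addition/cancellation pattern that the paper's subwavelength structure achieves far more efficiently; a numerical sketch (2-D observer positions, unit amplitudes, all values illustrative):

```python
import numpy as np

def monopole(k, r):
    """Free-field monopole pressure at distance r (harmonic time factor dropped)."""
    return np.exp(1j * k * r) / r

def two_monopole_dipole(k, d, x, y):
    """Two opposite-phase monopoles separated by d along the x axis."""
    r_plus = np.hypot(x - d / 2, y)
    r_minus = np.hypot(x + d / 2, y)
    return monopole(k, r_plus) - monopole(k, r_minus)

k = 2 * np.pi / 0.343  # wavenumber of a 1 kHz tone in air
d = 0.343 / 10         # separation of about one tenth of a wavelength
on_axis = abs(two_monopole_dipole(k, d, 10.0, 0.0))
broadside = abs(two_monopole_dipole(k, d, 0.0, 10.0))
# On the dipole axis the two sources reinforce; broadside they cancel exactly.
```

The weak on-axis level of such a bare pair (it scales with kd for kd << 1) is precisely the inefficiency the enclosing structure is designed to overcome.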
Identification and tracking of particular speaker in noisy environment
NASA Astrophysics Data System (ADS)
Sawada, Hideyuki; Ohkado, Minoru
2004-10-01
Humans can exchange information smoothly by voice in difficult situations, such as in a noisy crowd or in the presence of multiple speakers. We can detect the position of a sound source in 3D space, extract a particular sound from a mixture of sounds, and recognize who is talking. By realizing this mechanism with a computer, new applications become possible: recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.
NASA Astrophysics Data System (ADS)
Heo, Seung; Cheong, Cheolung; Kim, Taehoon
2015-09-01
In this study, an efficient numerical method is proposed for predicting the tonal and broadband noise of a centrifugal fan unit. The proposed method is based on Hybrid Computational Aero-Acoustic (H-CAA) techniques combined with the Unsteady Fast Random Particle Mesh (U-FRPM) method. The U-FRPM method is developed by extending the FRPM method proposed by Ewert et al. and is utilized to synthesize the turbulent flow field from unsteady RANS solutions. The H-CAA technique combined with the U-FRPM method is applied to predict broadband as well as tonal noise of a centrifugal fan unit in a household refrigerator. Firstly, the unsteady flow field driven by the rotating fan is computed by solving the RANS equations with Computational Fluid Dynamics (CFD) techniques. Main source regions around the rotating fan are identified by examining the computed flow fields. Then, turbulent flow fields in the main source regions are synthesized by applying the U-FRPM method. The acoustic analogy is applied to model acoustic sources in the main source regions. Finally, the centrifugal fan noise is predicted by feeding the modeled acoustic sources into an acoustic solver based on the Boundary Element Method (BEM). The sound spectral levels predicted using the current numerical method show good agreement with the measured spectra at the Blade Pass Frequencies (BPFs) as well as in the high frequency range. Moreover, the present method enables quantitative assessment of the relative contributions of the identified source regions to the sound field by comparing the predicted sound pressure spectra due to the modeled sources.
NASA Astrophysics Data System (ADS)
Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.
2012-07-01
Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from the results of interferometric measurements, the total sound pressure level of white noise or the maximal sound pressure level of a pure tone (at every oscillation frequency) needs to be identical to the total sound pressure level of the actual source at the site of acoustic lining on the channel wall.
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment
NASA Astrophysics Data System (ADS)
Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil
In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflection for adding reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are also included for increasing elevation perception capability. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources such as white noise, sound effects, speech, and music samples. It is shown from the tests that the perceived elevation produced by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
Keefe, Douglas H.; Schairer, Kim S.
2011-01-01
An insert ear-canal probe including sound source and microphone can deliver a calibrated sound power level to the ear. The aural power absorbed is proportional to the product of mean-squared forward pressure, ear-canal area, and absorbance, in which the sound field is represented using forward (reverse) waves traveling toward (away from) the eardrum. Forward pressure is composed of incident pressure and its multiple internal reflections between eardrum and probe. Based on a database of measurements in normal-hearing adults from 0.22 to 8 kHz, the transfer-function level of forward relative to incident pressure is boosted below 0.7 kHz and within 4 dB above. The level of forward relative to total pressure is maximal close to 4 kHz with wide variability across ears. A spectrally flat incident-pressure level across frequency produces a nearly flat absorbed power level, in contrast to 19 dB changes in pressure level. Calibrating an ear-canal sound source based on absorbed power may be useful in audiological and research applications. Specifying the tip-to-tail level difference of the suppression tuning curve of stimulus frequency otoacoustic emissions in terms of absorbed power reveals increased cochlear gain at 8 kHz relative to the level difference measured using total pressure. PMID:21361437
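The proportionality stated above (absorbed power proportional to the product of mean-squared forward pressure, ear-canal area, and absorbance) can be written as a one-line formula. This is a sketch only: the impedance constant and the example pressure and area values are assumptions for illustration, not measurements from the study.

```python
RHO_C = 415.0  # approximate characteristic impedance of air, Pa*s/m (assumed)

def absorbed_power(p_forward_rms, area_m2, absorbance):
    """Aural power absorbed ~ mean-squared forward pressure x ear-canal area
    x absorbance, normalized by the characteristic impedance (sketch only)."""
    return (p_forward_rms ** 2) * area_m2 * absorbance / RHO_C

# With a spectrally flat forward pressure, the absorbed power level tracks
# the absorbance curve alone. Example values are hypothetical:
w = absorbed_power(0.02, 4.4e-5, 0.5)  # ~60 dB SPL, ~44 mm^2 canal area
```

Power scales with the square of the pressure, so doubling the forward pressure quadruples the absorbed power.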
Techniques and instrumentation for the measurement of transient sound energy flux
NASA Astrophysics Data System (ADS)
Watkinson, P. S.; Fahy, F. J.
1983-12-01
The evaluation of the sound intensity distributions and sound powers of essentially continuous sources, such as automotive engines, electric motors, production-line machinery, furnaces, earth-moving machinery, and various types of process plant, is studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers, and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install such machines in quiet, anechoic environments. The potential of sound intensity measurement to provide a means of overcoming these difficulties has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.
Recent paleoseismicity record in Prince William Sound, Alaska, USA
NASA Astrophysics Data System (ADS)
Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.
2017-12-01
Sedimentological and geochemical investigations of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily the Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year⁻¹), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinarily high-resolution record of paleoseismicity in the region.
Acoustic analysis of trill sounds.
Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri
2012-04-01
In this paper, the acoustic-phonetic characteristics of steady apical trills (trill sounds produced by the periodic vibration of the apex of the tongue) are studied. Signal processing methods, namely zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect an effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are the glottal epochs, the strength of impulses at the glottal epochs, and the instantaneous fundamental frequency of glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of the vocal tract system during the production of trill sounds. A qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help in spotting trills in continuous speech, are discussed.
NASA Astrophysics Data System (ADS)
Breitzke, Monika; Bohlen, Thomas
2010-05-01
Modelling sound propagation in the ocean is an essential tool for assessing the potential risk of air-gun shots to marine mammals. Based on a 2.5-D finite-difference code, a full waveform modelling approach is presented, which determines both the sound exposure levels of single shots and the cumulative sound exposure levels of multiple shots fired along a seismic line. Band-limited point-source approximations of compact air-gun clusters deployed by R/V Polarstern in polar regions are used as sound sources. Marine mammals are simulated as static receivers. Applications to deep- and shallow-water models, including constant and depth-dependent sound velocity profiles of the Southern Ocean, show dipole-like directivities in the case of single shots and tubular cumulative sound exposure level fields beneath the seismic line in the case of multiple shots. Compared to a semi-infinite model, the incorporation of seafloor reflections enhances the seismically induced noise levels close to the sea surface. Refraction due to sound velocity gradients and sound channelling in near-surface ducts are evident, but affect only low to moderate levels. Hence, exposure zone radii derived for different hearing thresholds are almost independent of the sound velocity structure. With decreasing thresholds, radii increase according to a spherical 20 log10 r law in the case of single shots and according to a cylindrical 10 log10 r law in the case of multiple shots. Doubling the shot interval diminishes the cumulative sound exposure levels by 3 dB and halves the radii. The ocean bottom properties only slightly affect the radii in shallow waters if the normal-incidence reflection coefficient exceeds 0.2.
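The spreading-law arithmetic in the abstract above can be sketched directly: inverting L(r) = SL - N log10(r) gives the exposure radius for a given hearing threshold. The source level and threshold below are placeholder values, not numbers from the study.

```python
def exposure_radius(source_level_db, threshold_db, spreading_coeff):
    # Received level at range r: L(r) = SL - N*log10(r), with N = 20 for
    # spherical and N = 10 for cylindrical spreading; solving
    # L(r) = threshold for r gives the exposure radius.
    return 10 ** ((source_level_db - threshold_db) / spreading_coeff)

# Placeholder levels in dB (not values from the study):
r_single = exposure_radius(230.0, 180.0, 20.0)   # single shot, 20*log10(r) law
r_multi = exposure_radius(230.0, 180.0, 10.0)    # cumulative, 10*log10(r) law

# A 3 dB drop in cumulative SEL (doubled shot interval) roughly halves the
# radius under the cylindrical law, as the abstract states:
r_half = exposure_radius(230.0 - 3.0, 180.0, 10.0)
```

Under the 10 log10 r law, a -3 dB change scales the radius by 10^(-0.3) ≈ 0.50, which is why doubling the shot interval halves the exposure radii.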
Acoustic investigation of wall jet over a backward-facing step using a microphone phased array
NASA Astrophysics Data System (ADS)
Perschke, Raimund F.; Ramachandran, Rakesh C.; Raman, Ganesh
2015-02-01
The acoustic properties of a wall jet over a hard-walled backward-facing step of aspect ratios 6, 3, 2, and 1.5 are studied using a 24-channel microphone phased array at Mach numbers up to M = 0.6. The Reynolds number based on inflow velocity and step height ranges from Reh = 3.0 × 10⁴ to 7.2 × 10⁵. Flow without and with side walls is considered. The experimental setup is open in the wall-normal direction, and the expansion ratio is effectively 1. In the case of flow through a duct, symmetry of the flow in the spanwise direction is lost downstream of separation at all but the largest aspect ratio, as revealed by oil-paint flow visualization. Hydrodynamic scattering of turbulence from the trailing edge of the step contributes significantly to the radiated sound. Reflection of acoustic waves from the bottom plate results in a modulation of the power spectral densities. Convective mean-flow effects on the apparent source origin have been assessed by placing a loudspeaker underneath a perforated flat plate and evaluating the displacement of the beamforming peak with inflow Mach number. Two source mechanisms are found near the step: one is due to the interaction of the turbulent wall jet with the convex edge of the step, while free-stream turbulence sound peaks downstream of the step. The presence of the side walls increases free-stream sound. Results of the flow visualization are correlated with acoustic source maps. Trailing-edge sound and free-stream turbulence sound can be discriminated using source localization.
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays, with the sources approximated as monopole and radial dipole transducers, allows such control while avoiding modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce controller complexity and to separate the effects of the control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array, while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.
2012-01-01
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound locations in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, at different distances, and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and for alteration of acoustic cues singly or in combination to elucidate the neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and the distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are the neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin
2016-01-01
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static-water measurements were taken in a lake with minimal background noise. Flowing-water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, of both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with a 60 hp outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances, using both spherical and cylindrical sound attenuation functions, suggests that the spherical model more closely approximates the observed sound attenuation.
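The spherical-versus-cylindrical comparison above can be framed as fitting the spreading exponent N in L(r) = a - N log10(r) to level-versus-distance measurements. A minimal least-squares sketch, run here on synthetic data rather than the study's recordings:

```python
import numpy as np

def fit_spreading(ranges_m, levels_db):
    """Least-squares fit of L(r) = a - N*log10(r) to received levels at
    several ranges; returns (a, N). N near 20 suggests spherical spreading,
    N near 10 cylindrical (sketch, not the study's analysis)."""
    r = np.asarray(ranges_m, dtype=float)
    A = np.column_stack([np.ones_like(r), -np.log10(r)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(levels_db, dtype=float), rcond=None)
    return coef[0], coef[1]

# Synthetic measurements that follow a spherical (20*log10 r) law exactly:
ranges = np.array([10.0, 50.0, 100.0, 500.0])
levels = 170.0 - 20.0 * np.log10(ranges)
a_fit, n_fit = fit_spreading(ranges, levels)
```

With real, noisy data the fitted N falls between the two idealized models, and the closer value indicates which attenuation function better describes the site.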
NASA Technical Reports Server (NTRS)
Swift, G.; Mungur, P.
1979-01-01
General procedures for the prediction of component noise levels incident upon airframe surfaces during cruise are developed. Contributing noise sources are those associated with the propulsion system, the airframe and the laminar flow control (LFC) system. Transformation procedures from the best prediction base of each noise source to the transonic cruise condition are established. Two approaches to LFC/acoustic criteria are developed. The first is a semi-empirical extension of the X-21 LFC/acoustic criteria to include sensitivity to the spectrum and directionality of the sound field. In the second, the more fundamental problem of how sound excites boundary layer disturbances is analyzed by deriving and solving an inhomogeneous Orr-Sommerfeld equation in which the source terms are proportional to the production and dissipation of sound induced fluctuating vorticity. Numerical solutions are obtained and compared with corresponding measurements. Recommendations are made to improve and validate both the cruise noise prediction methods and the LFC/acoustic criteria.
Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo
2013-02-01
This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.
Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L
2015-01-01
A seven-element, bilinear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
Beranek, Leo L; Nishihara, Noriko
2014-01-01
The Eyring/Sabine equations assume that in a large irregular room a sound wave travels in straight lines from one surface to another, that the surfaces have an average sound absorption coefficient αav, and that the mean-free-path between reflections is 4 V/Stot where V is the volume of the room and Stot is the total area of all of its surfaces. No account is taken of diffusivity of the surfaces. The 4 V/Stot relation was originally based on experimental determinations made by Knudsen (Architectural Acoustics, 1932, pp. 132-141). This paper sets out to test the 4 V/Stot relation experimentally for a wide variety of unoccupied concert and chamber music halls with seating capacities from 200 to 5000, using the measured sound strengths Gmid and reverberation times RT60,mid. Computer simulations of the sound fields for nine of these rooms (of varying shapes) were also made to determine the mean-free-paths by that method. The study shows that 4 V/Stot is an acceptable relation for mean-free-paths in the Sabine/Eyring equations except for halls of unusual shape. Also demonstrated is the proper method for calibrating the dodecahedral sound source used for measuring the sound strength G, i.e., the reverberation chamber method.
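The 4 V/Stot relation tested above is simple to evaluate; a sketch with hypothetical hall dimensions (not one of the halls measured in the study):

```python
def mean_free_path(volume_m3, total_surface_m2):
    # Sabine/Eyring mean free path between reflections: 4*V / S_tot
    return 4.0 * volume_m3 / total_surface_m2

# Hypothetical shoebox hall, 30 m x 20 m x 15 m (dimensions assumed):
V = 30.0 * 20.0 * 15.0                    # volume: 9000 m^3
S = 2 * (30*20 + 30*15 + 20*15)           # total surface: 2700 m^2
mfp = mean_free_path(V, S)                # ~13.3 m between reflections
```

Dividing the speed of sound by the mean free path gives the average reflection rate that the Sabine/Eyring reverberation equations rest on.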
Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers: Comparative study.
Cambi, Jacopo; Livi, Ludovica; Livi, Walter
2017-05-01
Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regard to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. The study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions.
Sight over sound in the judgment of music performance.
Tsay, Chia-Jung
2013-09-03
Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. People reliably select the actual winners of live music competitions based on silent video recordings, but neither musical novices nor professional musicians were able to identify the winners based on sound recordings or recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content.
[Functional anatomy of the cochlear nerve and the central auditory system].
Simon, E; Perrot, X; Mertens, P
2009-04-01
The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve); they are not limited to simply transmitting information but achieve a veritable integration of the sound stimulus at different levels by analyzing its three fundamental elements: frequency (pitch), intensity, and the spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell whose characteristic frequency matches the stimulus). Spatial localization of the sound source is made possible by binaural hearing, by commissural pathways at each level of the auditory system, and by integration of the phase shift and the intensity difference between the signals coming from the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through attention to the signal.
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.
Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T
2013-02-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
A combined analytical and numerical analysis of the flow-acoustic coupling in a cavity-pipe system
NASA Astrophysics Data System (ADS)
Langthjem, Mikael A.; Nakano, Masami
2018-05-01
The generation of sound by flow through a closed, cylindrical cavity (expansion chamber) accommodated with a long tailpipe is investigated analytically and numerically. The sound generation is due to self-sustained flow oscillations in the cavity. These oscillations may, in turn, generate standing (resonant) acoustic waves in the tailpipe. The main interest of the paper is in the interaction between these two sound sources. An analytical, approximate solution of the acoustic part of the problem is obtained via the method of matched asymptotic expansions. The sound-generating flow is represented by a discrete vortex method, based on axisymmetric vortex rings. It is demonstrated through numerical examples that inclusion of acoustic feedback from the tailpipe is essential for a good representation of the sound characteristics.
NASA Astrophysics Data System (ADS)
Royston, Thomas J.; Yazicioglu, Yigit; Loth, Francis
2003-02-01
The response at the surface of an isotropic viscoelastic medium to buried fundamental acoustic sources is studied theoretically, computationally and experimentally. Finite and infinitesimal monopole and dipole sources within the low audible frequency range (40-400 Hz) are considered. Analytical and numerical integral solutions that account for compression, shear and surface wave response to the buried sources are formulated and compared with numerical finite element simulations and experimental studies on finite dimension phantom models. It is found that at low audible frequencies, compression and shear wave propagation from point sources can both be significant, with shear wave effects becoming less significant as frequency increases. Additionally, it is shown that simple closed-form analytical approximations based on an infinite medium model agree well with numerically obtained "exact" half-space solutions for the frequency range and material of interest in this study. The focus here is on developing a better understanding of how biological soft tissue affects the transmission of vibro-acoustic energy from biological acoustic sources below the skin surface, whose typical spectral content is in the low audible frequency range. Examples include sound radiated from pulmonary, gastro-intestinal and cardiovascular system functions, such as breath sounds, bowel sounds and vascular bruits, respectively.
Predictability effects in auditory scene analysis: a review
Bendixen, Alexandra
2014-01-01
Many sound sources emit signals in a predictable manner. The idea that predictability can be exploited to support the segregation of one source's signal emissions from the overlapping signals of other sources has been expressed for a long time. Yet experimental evidence for a strong role of predictability within auditory scene analysis (ASA) has been scarce. Recently, there has been an upsurge in experimental and theoretical work on this topic resulting from fundamental changes in our perspective on how the brain extracts predictability from series of sensory events. Based on effortless predictive processing in the auditory system, it becomes more plausible that predictability would be available as a cue for sound source decomposition. In the present contribution, empirical evidence for such a role of predictability in ASA will be reviewed. It will be shown that predictability affects ASA both when it is present in the sound source of interest (perceptual foreground) and when it is present in other sound sources that the listener wishes to ignore (perceptual background). First evidence pointing toward age-related impairments in the latter capacity will be addressed. Moreover, it will be illustrated how effects of predictability can be shown by means of objective listening tests as well as by subjective report procedures, with the latter approach typically exploiting the multi-stable nature of auditory perception. Critical aspects of study design will be delineated to ensure that predictability effects can be unambiguously interpreted. Possible mechanisms for a functional role of predictability within ASA will be discussed, and an analogy with the old-plus-new heuristic for grouping simultaneous acoustic signals will be suggested. PMID:24744695
On some nonlinear effects in ultrasonic fields
Tjotta
2000-03-01
Nonlinear effects associated with intense sound fields in fluids are considered theoretically. Special attention is directed to the study of higher-order effects that cannot be described within the standard propagation models of nonlinear acoustics (the KZK and Burgers equations). The analysis is based on the fundamental equations of motion for a thermoviscous fluid for which thermal equations of state exist. Model equations are derived and used to analyze nonlinear sources for the generation of flow and heat, and other changes in the ambient state of the fluid. Fluctuations in the coefficients of viscosity and thermal conductivity caused by the sound field are accounted for. Also considered are nonlinear effects induced in the fluid by flexural vibrations. The intensity and absorption of finite-amplitude sound waves are calculated and related to the sources for the generation of higher-order effects.
Psychoacoustical evaluation of natural and urban sounds in soundscapes.
Yang, Ming; Kang, Jian
2013-07-01
Among various sounds in the environment, natural sounds, such as water sounds and birdsongs, have proven to be highly preferred by humans, but the reasons for these preferences have not been thoroughly researched. This paper explores differences between various natural and urban environmental sounds from the viewpoint of objective measures, especially psychoacoustical parameters. The sound samples used in this study include the recordings of single sound source categories of water, wind, birdsongs, and urban sounds including street music, mechanical sounds, and traffic noise. The samples are analyzed with a number of existing psychoacoustical parameter algorithmic models. Based on hierarchical cluster and principal components analyses of the calculated results, a series of differences has been shown among different sound types in terms of key psychoacoustical parameters. While different sound categories cannot be identified using any single acoustical and psychoacoustical parameter, identification can be made with a group of parameters, as analyzed with artificial neural networks and discriminant functions in this paper. For artificial neural networks, correlations between network predictions and targets using the average and standard deviation data of psychoacoustical parameters as inputs are above 0.95 for the three natural sound categories and above 0.90 for the urban sound category. For sound identification/classification, key parameters are fluctuation strength, loudness, and sharpness.
Sensing of Particular Speakers for the Construction of Voice Interface Utilized in Noisy Environment
NASA Astrophysics Data System (ADS)
Sawada, Hideyuki; Ohkado, Minoru
Humans are able to exchange information smoothly using voice under different situations, such as a noisy environment in a crowd or the presence of plural speakers. We are able to detect the position of a sound source in 3D space, extract a particular sound from mixed sounds, and recognize who is talking. By realizing this mechanism with a computer, new applications can be presented for recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.
Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time
Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André
2015-01-01
The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). 
This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition. PMID:26388721
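The temporal-coherence principle above — channels whose responses are strongly positively correlated belong to the same stream — can be sketched in a few lines. This is a minimal illustration, not the FPGA pipeline; the `coherence_mask` helper, its threshold, and the test envelopes are assumptions.

```python
import numpy as np

def coherence_mask(envelopes, target_ch, threshold=0.5):
    """Assign cochlear channels to the foreground stream by temporal
    coherence: channels whose envelopes correlate strongly (positively)
    with a target channel are taken to share its sound source."""
    corr = np.corrcoef(envelopes)       # pairwise Pearson correlations
    return corr[target_ch] > threshold  # boolean mask over channels

# Two streams with anti-phase 4 Hz envelope modulation: channels 0-1
# carry one source, channels 2-3 the other.
t = np.linspace(0.0, 1.0, 1000)
env_a = 0.5 * (1.0 + np.sin(2.0 * np.pi * 4.0 * t))
env_b = 0.5 * (1.0 + np.sin(2.0 * np.pi * 4.0 * t + np.pi))
E = np.stack([env_a, env_a, env_b, env_b])
mask = coherence_mask(E, target_ch=0)   # → [True, True, False, False]
```

In the full algorithm this grouping drives a mask that gates the channel envelopes before reconstruction of the target stream.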
Optimum sensor placement for microphone arrays
NASA Astrophysics Data System (ADS)
Rabinkin, Daniel V.
Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. 
The shape and gain advantage of the capture region for these techniques are described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
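The first step of the two-step localization above — TDOA estimation from the cross-power spectrum phase — is commonly realized as GCC-PHAT. A minimal sketch follows; the function name and test signal are illustrative, and this is the generic phase transform rather than the modified Omologo-Svaizer expression itself.

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """Time delay of arrival of `sig` relative to `ref` (positive when
    `sig` lags), from the phase of the cross-power spectrum. The PHAT
    weighting discards magnitude, sharpening the correlation peak."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cps = S * np.conj(R)
    cps /= np.abs(cps) + 1e-12          # phase transform (whitening)
    cc = np.fft.irfft(cps, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)           # signal at the reference mic
delay = 20                              # samples of lag at the second mic
y = np.concatenate((np.zeros(delay), x))[:len(x)]
tdoa = gcc_phat_tdoa(y, x, fs)          # ≈ delay / fs seconds
```

TDOA estimates like this from several microphone pairs would then feed the least-mean-squares position search described in the second step.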
Monaural Sound Localization Based on Structure-Induced Acoustic Resonance
Kim, Keonwook; Kim, Youngwoong
2015-01-01
A physical structure such as a cylindrical pipe controls the propagated sound spectrum in a predictable way that can be used to localize the sound source. This paper designs a monaural sound localization system based on multiple pyramidal horns around a single microphone. The acoustic resonance within the horn provides a periodicity in the spectral domain known as the fundamental frequency which is inversely proportional to the radial horn length. Once the system accurately estimates the fundamental frequency, the horn length and corresponding angle can be derived by the relationship. The modified Cepstrum algorithm is employed to evaluate the fundamental frequency. In an anechoic chamber, localization experiments over azimuthal configuration show that up to 61% of the proper signal is recognized correctly with 30% misfire. With a speculated detection threshold, the system estimates direction 52% in positive-to-positive and 34% in negative-to-positive decision rate, on average. PMID:25668214
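The cepstrum step above can be illustrated with a plain real-cepstrum peak search. This is a sketch, not the authors' modified Cepstrum algorithm; the impulse-train test signal and the quefrency search band are assumptions.

```python
import numpy as np

def cepstral_peak_quefrency(x, fs, fmin=300.0, fmax=2000.0):
    """Dominant spectral periodicity of x via the real cepstrum (inverse
    FFT of the log magnitude spectrum). The returned quefrency q (in
    seconds) corresponds to a spectral ripple of 1/q Hz; for a horn
    resonance that ripple is inversely proportional to the horn length."""
    spectrum = np.abs(np.fft.rfft(x))
    cep = np.fft.irfft(np.log(spectrum + 1e-12))
    qmin = int(fs / fmax)               # search a plausible quefrency band
    qmax = int(fs / fmin)
    peak = qmin + np.argmax(cep[qmin:qmax])
    return peak / fs

# An impulse train with a 2 ms period has harmonics every 500 Hz, so the
# log spectrum ripples with a 500 Hz period and the cepstrum peaks at 2 ms.
fs = 48000
x = np.zeros(fs)
x[::96] = 1.0                           # period 96 samples = 2 ms
q = cepstral_peak_quefrency(x, fs)
ripple_hz = 1.0 / q                     # ≈ 500 Hz
```

Given the estimated fundamental frequency, the corresponding horn length (and hence the arrival angle of the matching horn) follows from the inverse-proportionality relation stated in the abstract.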
Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin C.
2016-01-06
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with a 60 hp outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.
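The spherical-versus-cylindrical comparison above rests on the standard geometric spreading losses: 20·log10(r) dB re 1 m for spherical spreading and 10·log10(r) dB for cylindrical spreading. A minimal sketch, with the 170 dB source level chosen only for illustration:

```python
import math

def received_level(source_level_db, range_m, model="spherical"):
    """Predicted received level (dB) under simple geometric spreading.

    Spherical spreading loses 20*log10(r) dB re 1 m (free field);
    cylindrical spreading loses 10*log10(r) dB (bounded shallow channel).
    """
    if model == "spherical":
        loss = 20.0 * math.log10(range_m)
    elif model == "cylindrical":
        loss = 10.0 * math.log10(range_m)
    else:
        raise ValueError(model)
    return source_level_db - loss

# A 170 dB re 1 uPa @ 1 m source heard at 100 m range:
rl_sph = received_level(170.0, 100.0, "spherical")    # 170 - 40 = 130 dB
rl_cyl = received_level(170.0, 100.0, "cylindrical")  # 170 - 20 = 150 dB
```

Fitting received levels at several ranges against these two curves is what lets the study conclude that the spherical model tracks the observed attenuation more closely.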
Underwater auditory localization by a swimming harbor seal (Phoca vitulina).
Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido
2006-09-01
The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound the seal localized the sound sources with a mean deviation of 2.8 degrees and in trials with the single sound with a mean deviation of 4.5 degrees. In a second experiment minimum audible angles of the stationary animal were found to be 9.8 degrees in front and 9.7 degrees in the back of the seal's head.
Marine mammal audibility of selected shallow-water survey sources.
MacGillivray, Alexander O; Racca, Roberto; Li, Zizheng
2014-01-01
Most attention about the acoustic effects of marine survey sound sources on marine mammals has focused on airgun arrays, with other common sources receiving less scrutiny. Sound levels above hearing threshold (sensation levels) were modeled for six marine mammal species and seven different survey sources in shallow water. The model indicated that odontocetes were most likely to hear sounds from mid-frequency sources (fishery, communication, and hydrographic systems), mysticetes from low-frequency sources (sub-bottom profiler and airguns), and pinnipeds from both mid- and low-frequency sources. High-frequency sources (side-scan and multibeam) generated the lowest estimated sensation levels for all marine mammal species groups.
Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.
Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael
2014-04-01
The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
Tracking of Pacific walruses in the Chukchi Sea using a single hydrophone.
Mouy, Xavier; Hannay, David; Zykov, Mikhail; Martin, Bruce
2012-02-01
The vocal repertoire of Pacific walruses includes underwater sound pulses referred to as knocks and bell-like calls. An extended acoustic monitoring program was performed in summer 2007 over a large region of the eastern Chukchi Sea using autonomous seabed-mounted acoustic recorders. Walrus knocks were identified in many of the recordings and most of these sounds included multiple bottom and surface reflected signals. This paper investigates the use of a localization technique based on relative multipath arrival times (RMATs) for potential behavior studies. First, knocks are detected using a semi-automated kurtosis-based algorithm. Then RMATs are matched to values predicted by a ray-tracing model. Walrus tracks with vertical and horizontal movements were obtained. The tracks included repeated dives between 4.0 m and 15.5 m depth and a deep dive to the sea bottom (53 m). Depths at which bell-like sounds are produced, average knock production rates, and source level estimates of the knocks were determined. Bell sounds were produced at all depths throughout the dives. Average knock production rates varied from 59 to 75 knocks/min. Average source level of the knocks was estimated at 177.6 ± 7.5 dB re 1 μPa peak @ 1 m. © 2012 Acoustical Society of America
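The relative multipath arrival times (RMATs) above come from path-length differences between the direct and boundary-reflected arrivals. For a flat sea surface and a constant sound speed, image-source geometry gives them directly; this isovelocity sketch stands in for the paper's ray-tracing model, and the sound speed and geometry are assumptions.

```python
import math

def surface_rmat(range_m, src_depth, rcv_depth, c=1450.0):
    """Relative multipath arrival time (seconds): surface-reflected path
    minus direct path, using the image source mirrored above a flat sea
    surface in an isovelocity water column."""
    direct = math.hypot(range_m, src_depth - rcv_depth)
    surface = math.hypot(range_m, src_depth + rcv_depth)  # image path
    return (surface - direct) / c

# A knock produced at 10 m depth, recorder at 50 m depth, 100 m range:
rmat = surface_rmat(100.0, 10.0, 50.0)   # ≈ 6.1 ms after the direct arrival
```

Inverting this relation is the matching step: candidate source positions are scored by how well their predicted RMATs agree with the measured ones.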
In-duct identification of a rotating sound source with high spatial resolution
NASA Astrophysics Data System (ADS)
Heo, Yong-Ho; Ih, Jeong-Guon; Bodén, Hans
2015-11-01
To understand and reduce the flow noise generation from in-duct fluid machines, it is necessary to identify the acoustic source characteristics precisely. In this work, a source identification technique, which can identify the strengths and positions of the major sound radiators in the source plane, is studied for an in-duct rotating source. A linear acoustic theory including the effects of evanescent modes and source rotation is formulated based on the modal summation method, which is the underlying theory for the inverse source reconstruction. A validation experiment is conducted on a duct system excited by a loudspeaker in static and rotating conditions, with two different speeds, in the absence of flow. Due to the source rotation, the measured pressure spectra reveal the Doppler effect, and the amount of frequency shift corresponds to the multiplication of the circumferential mode order and the rotation speed. Amplitudes of participating modes are estimated at the shifted frequencies in the stationary reference frame, and the modal amplitude set including the effect of source rotation is collected to investigate the source behavior in the rotating reference frame. By using the estimated modal amplitudes, the near-field pressure is re-calculated and compared with the measured pressure. The obtained maximum relative error is about -25 and -10 dB for rotation speeds at 300 and 600 rev/min, respectively. The spatial distribution of acoustic source parameters is restored from the estimated modal amplitude set. The result clearly shows that the position and magnitude of the main sound source can be identified with high spatial resolution in the rotating reference frame.
Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers
Cambi, Jacopo; Livi, Ludovica; Livi, Walter
2017-01-01
Objectives Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regard to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. Methods This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. Results The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Conclusion Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions. PMID:28690888
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl
Baxter, Caitlin S.; Takahashi, Terry T.
2013-01-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801
Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D
2014-07-01
Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to that of a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
Structure of supersonic jet flow and its radiated sound
NASA Technical Reports Server (NTRS)
Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.
1994-01-01
The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.
Underwater sound of rigid-hulled inflatable boats.
Erbe, Christine; Liong, Syafrin; Koessler, Matthew Walter; Duncan, Alec J; Gourlay, Tim
2016-06-01
Underwater sound of rigid-hulled inflatable boats was recorded 142 times in total, over 3 sites: 2 in southern British Columbia, Canada, and 1 off Western Australia. Underwater sound peaked between 70 and 400 Hz, exhibiting strong tones in this frequency range related to engine and propeller rotation. Sound propagation models were applied to compute monopole source levels, with the source assumed 1 m below the sea surface. Broadband source levels (10-48 000 Hz) increased from 134 to 171 dB re 1 μPa @ 1 m with speed from 3 to 16 m/s (10-56 km/h). Source power spectral density percentile levels and 1/3 octave band levels are given for use in predictive modeling of underwater sound of these boats as part of environmental impact assessments.
2007-01-01
Atmospheric deposition directly to Puget Sound was an important source of PAHs, polybrominated diphenyl ethers (PBDEs), and heavy metals. A semi-quantitative apportionment study permitted a first-order characterization of sources of these pollutants, including the influence of wet-season conditions such as temperature inversions on air quality.
Binaural Processing of Multiple Sound Sources
2016-08-18
This work addressed sound source localization and identification, including sound source localization when listeners move. The cochlear implant research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).
Acoustic signatures of sound source-tract coupling.
Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B
2011-04-01
Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated with the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Alexandrov, B.
2014-12-01
The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with a k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones is recording a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify the sources as barometric pressure and water-supply pumping effects and estimate their impacts.
We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
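The NMF core of the NMFk analysis above can be sketched with Lee-Seung multiplicative updates on a synthetic two-source mixture. The clustering over random restarts that gives NMFk its "k" is omitted here, and the source shapes and mixing matrix are invented for illustration.

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Factor a non-negative mixture matrix V (sensors x time) into
    W (sensors x r mixing weights) and H (r x time source transients)
    using Lee-Seung multiplicative updates for the Frobenius loss.
    NMFk would run this for several r values and restarts, then use
    k-means over the solutions to pick the number of unique sources."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Two non-negative "source transients" mixed at four observation points.
t = np.linspace(0.0, 1.0, 200)
s1 = np.exp(-5.0 * t)                         # e.g. a decaying pumping drawdown
s2 = 0.5 * (1.0 + np.sin(2.0 * np.pi * 3.0 * t))  # e.g. a barometric oscillation
S = np.stack([s1, s2])
A = np.array([[1.0, 0.2], [0.8, 0.5], [0.1, 1.0], [0.3, 0.9]])
V = A @ S                                     # mixed signals at the sensors
W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # small residual
```

With the mixture well reconstructed at the correct rank, the rows of H recover the unmixed transients that decoupled physics models could then analyze independently.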
Simulation of Jet Noise with OVERFLOW CFD Code and Kirchhoff Surface Integral
NASA Technical Reports Server (NTRS)
Kandula, M.; Caimi, R.; Voska, N. (Technical Monitor)
2002-01-01
An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
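The Kirchhoff surface integral used above can be checked numerically on the simplest possible case: a monopole enclosed by a spherical surface. Given the pressure and its normal derivative on the surface, the integral must reproduce the exact field at an exterior observer. The geometry, wavenumber, and grid resolution below are illustrative assumptions, not values from the paper.

```python
import numpy as np

k = 2.0                                            # wavenumber (1/m)
a, R = 1.0, 5.0                                    # surface radius, observer range
nth, nph = 100, 200

th = (np.arange(nth) + 0.5) * np.pi / nth          # midpoint rule in theta
ph = np.arange(nph) * 2 * np.pi / nph              # trapezoid rule in phi
TH, PH = np.meshgrid(th, ph, indexing="ij")
n_hat = np.stack([np.sin(TH) * np.cos(PH),
                  np.sin(TH) * np.sin(PH),
                  np.cos(TH)], axis=-1)            # outward unit normal
y = a * n_hat                                      # points on the Kirchhoff surface
dS = a**2 * np.sin(TH) * (np.pi / nth) * (2 * np.pi / nph)

# Monopole at the origin: pressure and radial (normal) derivative on the surface
p_s = np.exp(1j * k * a) / (4 * np.pi * a)
dp_s = (1j * k - 1 / a) * p_s

x = np.array([0.0, 0.0, R])                        # exterior observer
d = np.linalg.norm(x - y, axis=-1)
G = np.exp(1j * k * d) / (4 * np.pi * d)           # free-space Green's function
# dG/dn_y = (ik - 1/d) G (y - x).n_hat / d
dGdn = (1j * k - 1 / d) * G * np.einsum("ijk,ijk->ij", y - x, n_hat) / d

# Kirchhoff-Helmholtz: p(x) = surface integral of (p dG/dn - G dp/dn),
# with n pointing outward from the enclosed source region
p_kirch = np.sum((p_s * dGdn - G * dp_s) * dS)
p_exact = np.exp(1j * k * R) / (4 * np.pi * R)
rel_err = abs(p_kirch - p_exact) / abs(p_exact)
print(rel_err)
```

In the jet-noise application the surface data come from the CFD solution rather than an analytical monopole, but the quadrature structure is the same.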
Lung and Heart Sounds Analysis: State-of-the-Art and Future Trends.
Padilla-Ortiz, Ana L; Ibarra, David
2018-01-01
Lung sounds, which include all sounds produced during the mechanism of respiration, may be classified into normal breath sounds and adventitious sounds. Normal breath sounds occur when no respiratory problems exist, whereas adventitious lung sounds (wheeze, rhonchi, crackle, etc.) are usually associated with certain pulmonary pathologies. Heart and lung sounds heard through a stethoscope are the result of mechanical interactions that indicate the operation of the cardiac and respiratory systems, respectively. In this article, we review the research conducted during the last six years on lung and heart sounds, instrumentation and data sources (sensors and databases), technological advances, and perspectives in processing and data analysis. Our review suggests that chronic obstructive pulmonary disease (COPD) and asthma are the most common respiratory diseases reported on in the literature; related diseases that are less analyzed include chronic bronchitis, idiopathic pulmonary fibrosis, congestive heart failure, and parenchymal pathology. New findings regarding methodologies associated with advances in the electronic stethoscope have been presented for the auscultatory heart-sound signal process, including analysis and classification of the resulting sounds to support a diagnosis based on a quantifiable medical assessment. The availability of high-precision automatic interpretation of heart and lung sounds opens interesting possibilities for cardiovascular diagnosis as well as potential for intelligent diagnosis of heart and lung diseases.
Intensity-invariant coding in the auditory system.
Barbour, Dennis L
2011-11-01
The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.
2002-01-01
To locate noise sources in high-speed jets, the sound pressure fluctuations p', measured at far-field locations, were correlated with each of the radial velocity v, density rho, and rho-v(exp 2) fluctuations measured at various points in jet plumes. The experiments follow the cause-and-effect method of sound source identification, where
NASA Astrophysics Data System (ADS)
Rosenbaum, Joyce E.
2011-12-01
Commercial air traffic is anticipated to increase rapidly in the coming years. The impact of aviation noise on communities surrounding airports is, therefore, a growing concern. Accurate prediction of noise can help to mitigate the impact on communities and foster smoother integration of aerospace engineering advances. The problem of accurate sound level prediction requires careful inclusion of all mechanisms that affect propagation, in addition to correct source characterization. Terrain, ground type, meteorological effects, and source directivity can have a substantial influence on the noise level. Because they are difficult to model, these effects are often included only by rough approximation. This dissertation presents a model designed for sound propagation over uneven terrain, with mixed ground type and realistic meteorological conditions. The model is a hybrid of two numerical techniques: the parabolic equation (PE) and fast field program (FFP) methods, which allow for physics-based inclusion of propagation effects and ensure the low frequency content, a factor in community impact, is predicted accurately. Extension of the hybrid model to a pseudo-three-dimensional representation allows it to produce aviation noise contour maps in the standard form. In order for the model to correctly characterize aviation noise sources, a method of representing arbitrary source directivity patterns was developed for the unique form of the parabolic equation starting field. With this advancement, the model can represent broadband, directional moving sound sources, traveling along user-specified paths. This work was prepared for possible use in the research version of the sound propagation module in the Federal Aviation Administration's new standard predictive tool.
A finite difference solution for the propagation of sound in near sonic flows
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Lester, H. C.
1983-01-01
An explicit time/space finite difference procedure is used to model the propagation of sound in a quasi one-dimensional duct containing high Mach number subsonic flow. Nonlinear acoustic equations are derived by perturbing the time-dependent Euler equations about a steady, compressible mean flow. The governing difference relations are based on a fourth-order, two-step (predictor-corrector) MacCormack scheme. The solution algorithm functions by switching on a time-harmonic source and allowing the difference equations to iterate to a steady state. The principal effect of the nonlinearities was to shift acoustical energy to higher harmonics. With increased source strengths, wave steepening was observed. This phenomenon suggests that the acoustical response may approach a shock behavior at higher sound pressure levels as the throat Mach number approaches unity. On a peak level basis, good agreement between the nonlinear finite difference and linear finite element solutions was observed, even though a peak sound pressure level of about 150 dB occurred in the throat region. Nonlinear steady-state waveform solutions are shown to be in excellent agreement with a nonlinear asymptotic theory.
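The two-step (predictor-corrector) MacCormack idea can be sketched on the simplest hyperbolic model problem, 1D linear advection on a periodic domain; the nonlinear perturbed Euler equations of the paper are deliberately replaced here, and the grid size and CFL number are assumptions for the demo.

```python
import numpy as np

# u_t + c u_x = 0, periodic domain [0, 1), two-step MacCormack scheme
N, c = 200, 1.0
dx = 1.0 / N
dt = 0.5 * dx / c                   # CFL = 0.5
x = np.arange(N) * dx
u = np.sin(2 * np.pi * x)           # smooth initial wave
u0 = u.copy()

steps = round(1.0 / (c * dt))       # advect for exactly one period
for _ in range(steps):
    # Predictor: forward difference
    up = u - c * dt / dx * (np.roll(u, -1) - u)
    # Corrector: backward difference applied to the predicted field
    u = 0.5 * (u + up - c * dt / dx * (up - np.roll(up, 1)))

err = np.max(np.abs(u - u0))        # second-order phase error only
print(err)
```

For linear advection this predictor-corrector pair reduces to the Lax-Wendroff scheme, which is why the error after one full period is small and dominated by dispersive phase lag.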
Experimental Investigation of Propagation and Reflection Phenomena in Finite Amplitude Sound Beams.
NASA Astrophysics Data System (ADS)
Averkiou, Michalakis Andrea
Measurements of finite amplitude sound beams are compared with theoretical predictions based on the KZK equation. Attention is devoted to harmonic generation and shock formation related to a variety of propagation and reflection phenomena. Both focused and unfocused piston sources were used in the experiments. The nominal source parameters are piston radii of 6-25 mm, frequencies of 1-5 MHz, and focal lengths of 10-20 cm. The research may be divided into two parts: propagation and reflection of continuous-wave focused sound beams, and propagation of pulsed sound beams. In the first part, measurements of propagation curves and beam patterns of focused pistons in water, both in the free field and following reflection from curved targets, are presented. The measurements are compared with predictions from a computer model that solves the KZK equation in the frequency domain. A novel method for using focused beams to measure target curvature is developed. In the second part, measurements of pulsed sound beams from plane pistons in both water and glycerin are presented. Very short pulses (less than 2 cycles), tone bursts (5-30 cycles), and frequency modulated (FM) pulses (10-30 cycles) were measured. Acoustic saturation of pulse propagation in water is investigated. Self-demodulation of tone bursts and FM pulses was measured in glycerin, both in the near and far fields, on and off axis. All pulse measurements are compared with numerical results from a computer code that solves the KZK equation in the time domain. A quasilinear analytical solution for the entire axial field of a self-demodulating pulse is derived in the limit of strong absorption. Taken as a whole, the measurements provide a broad data base for sound beams of finite amplitude. Overall, outstanding agreement is obtained between theory and experiment.
Numerical Models for Sound Propagation in Long Spaces
NASA Astrophysics Data System (ADS)
Lai, Chenly Yuen Cheung
Both reverberation time and the steady-state sound field are key elements for assessing the acoustic condition in an enclosed space. They affect noise propagation, speech intelligibility, clarity index, and definition. Since the sound field in a long space is non-diffuse, classical room acoustics theory does not apply in this situation. The ray tracing technique and the image source method are two common models for determining both reverberation time and the steady-state sound field in long enclosures nowadays. Although both models can give an accurate estimate of reverberation times and steady-state sound fields directly or indirectly, they often involve time-consuming calculations. In order to simplify the acoustic consideration, a theoretical formulation has been developed for predicting both steady-state sound fields and reverberation times in street canyons. The prediction model is further developed to predict the steady-state sound field in a long enclosure. Apart from the straight long enclosure, there are other variations such as a cross junction, a long enclosure with a T-intersection, and a U-turn long enclosure. In the present study, theoretical and experimental investigations were conducted to develop formulae for predicting reverberation times and steady-state sound fields in a junction of a street canyon and in a long enclosure with a T-intersection. The theoretical models are validated by comparing the numerical predictions with published experimental results. The theoretical results are also compared with precise indoor measurements and large-scale outdoor experimental results. Most previous acoustical studies of long enclosures have focused on monopole sound sources. Besides non-directional noise sources, however, many noise sources in long enclosures are dipole-like, such as train noise and fan noise. In order to study the characteristics of directional noise sources, a review of available dipole sources was conducted.
A dipole was constructed, which was subsequently used for experimental studies. In addition, a theoretical model was developed for predicting dipole sound fields. The theoretical model can be used to study the effect of a dipole source on speech intelligibility in long enclosures.
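The distinction between the monopole and dipole-like sources discussed above comes down to directivity: a far-field dipole radiates with a |cos(theta)| amplitude pattern about its axis, the familiar figure-eight, whereas a monopole is omnidirectional. A minimal numerical illustration (values are purely illustrative):

```python
import numpy as np

# Far-field dipole directivity: amplitude ~ |cos(theta)|, theta measured
# from the dipole axis; a monopole would be constant over theta.
theta = np.linspace(0, 2 * np.pi, 361)          # 1-degree steps
p_dipole = np.abs(np.cos(theta))                # normalized amplitude

# Directivity in dB re the on-axis maximum (floor avoids log of zero)
d_db = 20 * np.log10(np.maximum(p_dipole, 1e-6))
print(p_dipole[0], p_dipole[90])                # on-axis vs broadside
```

On-axis the dipole radiates fully while broadside it is (ideally) silent, which is why source orientation matters for train and fan noise in long enclosures.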
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
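The model's key assumption, that some neurons encode the difference between the observed signal and an internal estimate, resembles residual-driven greedy identification from a dictionary. The sketch below uses matching pursuit as a stand-in (a standard algorithm, not the paper's thalamocortical circuit); the dictionary, scene, and sizes are assumptions for the demo.

```python
import numpy as np

def matching_pursuit(y, D, n_iter=8):
    """Greedy source identification: at each step the 'error signal'
    (residual) drives the selection of the next dictionary element."""
    a = np.zeros(D.shape[1])
    r = y.copy()                        # residual = observed - estimate
    for _ in range(n_iter):
        j = np.argmax(np.abs(D.T @ r))  # best-matching "auditory object"
        coef = D[:, j] @ r
        a[j] += coef
        r -= coef * D[:, j]
    return a, r

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 20))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
y = 2.0 * D[:, 3] + 1.0 * D[:, 11]      # scene = superposition of two sources

a, r = matching_pursuit(y, D)
top = set(np.argsort(np.abs(a))[-2:])   # the two strongest identified objects
print(top)
```

As in the circuit model, recognition proceeds by repeatedly explaining away part of the input and re-examining what remains unexplained.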
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
It has been a challenge in the past to accurately locate and quantify the pass-by noise radiated by running vehicles. A system composed of a microphone array is developed in the current work for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise, and wind noise of a vehicle running at different speeds are successfully identified by this method.
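The core of time-domain Doppler handling is the mapping between emission and reception times: each reception time t satisfies t = tau + r(tau)/c, and resampling the record at the emission times undoes the frequency shift. A minimal de-Dopplerization sketch for a source passing a fixed microphone at constant speed; all parameter values are illustrative assumptions.

```python
import numpy as np

c = 343.0                      # speed of sound (m/s)
v = 30.0                       # source speed (m/s)
f0 = 500.0                     # emitted tone (Hz)
d = 5.0                        # closest-approach distance (m)
fs = 20000.0                   # sampling rate (Hz)

tau = np.arange(-0.5, 0.5, 1 / fs)            # emission times
r = np.sqrt((v * tau) ** 2 + d ** 2)          # source-microphone distance
t = tau + r / c                               # reception times (monotone)
p_rx = np.sin(2 * np.pi * f0 * tau) / r       # received, Doppler-shifted signal

# Invert the time map on a uniform reception grid, then resample the
# amplitude-corrected record at the recovered emission times.
t_uni = np.arange(t[0], t[-1], 1 / fs)
tau_of_t = np.interp(t_uni, t, tau)           # emission time per sample
p_dedopp = np.interp(tau_of_t, tau, p_rx * r) # de-Dopplerized signal

# The de-Dopplerized record should be (close to) a pure 500 Hz tone:
zc = np.where(np.diff(np.signbit(p_dedopp)))[0]
f_est = 0.5 * (len(zc) - 1) / (tau_of_t[zc[-1]] - tau_of_t[zc[0]])
print(f_est)
```

The holography method in the paper does this jointly for many microphones and reconstructs the pressure on the vehicle surface, but the emission-time resampling step is the same in spirit.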
Humpback whale bioacoustics: From form to function
NASA Astrophysics Data System (ADS)
Mercado, Eduardo, III
This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. 
New quantitative techniques are also presented that allow for more objective characterizations of the long term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily used as communicative signals, might be primarily used as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.
Auditory performance in an open sound field
NASA Astrophysics Data System (ADS)
Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy
2003-04-01
Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as the type of sound, distance to the sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine listeners' abilities to detect, recognize, localize, and estimate distances to sound sources located 25 to 800 m from the listening position. Data were also collected on meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.
Evolutionary trends in directional hearing
Carr, Catherine E.; Christensen-Dalsgaard, Jakob
2016-01-01
Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850
Relation of sound intensity and accuracy of localization.
Farrimond, T
1989-08-01
Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.
Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array
NASA Astrophysics Data System (ADS)
Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann
2017-04-01
An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
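The first of the two approaches above, recovering monopole source strengths from array pressures with a Moore-Penrose pseudo-inverse, is easy to sketch on a known-source test like the paper's analytical case. The array geometry, wavenumber, and source strengths below are illustrative assumptions.

```python
import numpy as np

# Monopole propagation model: microphone pressures p = G q, where
# G[m, s] = exp(i k r_ms) / (4 pi r_ms) for mic m and candidate source s.
k = 10.0
rng = np.random.default_rng(0)
mics = rng.uniform(-1, 1, size=(32, 3)) + np.array([0.0, 0.0, 5.0])
srcs = np.array([[0.0, 0.0, 0.0],
                 [0.4, 0.0, 0.0],
                 [0.0, 0.3, 0.0]])

r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=-1)
G = np.exp(1j * k * r) / (4 * np.pi * r)

q_true = np.array([1.0, 0.5 + 0.2j, 0.0])   # third candidate is silent
p = G @ q_true                              # simulated array data

q_est = np.linalg.pinv(G) @ p               # least-squares source strengths
print(np.round(np.abs(q_est), 3))
```

With noise-free data and more microphones than candidate source points, the pseudo-inverse recovers the complex strengths exactly; the dual-LP deconvolution of the second approach instead yields uncorrelated auto-power estimates and, per the paper, is more robust to the probe and source counts.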
Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.
Gauthier, P-A; Lecomte, P; Berry, A
2017-04-01
Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
Horizontal sound localization in cochlear implant users with a contralateral hearing aid.
Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A
2016-06-01
Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
Efficient Geometric Sound Propagation Using Visibility Culling
NASA Astrophysics Data System (ADS)
Chandak, Anish
2011-07-01
Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. 
We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
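The image-source method that the thesis accelerates can be sketched at first order for a shoebox room: each wall reflection is replaced by a mirrored copy of the source, and every visible image contributes a delayed, attenuated impulse. Room dimensions, positions, and the reflection coefficient below are illustrative; higher-order images and the visibility tests that are the thesis's focus are omitted.

```python
import numpy as np

c = 343.0                               # speed of sound (m/s)
room = np.array([5.0, 4.0, 3.0])        # shoebox dimensions (m)
src = np.array([1.0, 1.0, 1.5])
rcv = np.array([3.5, 2.0, 1.5])
beta = 0.8                              # wall reflection coefficient

images = [src]                          # direct sound plus 6 first-order images
for ax in range(3):
    for wall in (0.0, room[ax]):
        im = src.copy()
        im[ax] = 2 * wall - im[ax]      # mirror the source across the wall
        images.append(im)

dists = np.array([np.linalg.norm(rcv - im) for im in images])
delays = dists / c                      # arrival times (s)
amps = np.array([1.0] + [beta] * 6) / (4 * np.pi * dists)

order = np.argsort(delays)              # direct path always arrives first
print(delays[order][0], len(images))
```

In a full implementation the image set grows exponentially with reflection order, which is exactly why the conservative visibility culling described above pays off.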
High-speed helicopter rotor noise - Shock waves as a potent source of sound
NASA Technical Reports Server (NTRS)
Farassat, F.; Lee, Yung-Jang; Tadghighi, H.; Holz, R.
1991-01-01
In this paper we discuss the problem of high speed rotor noise prediction. In particular, we propose that from the point of view of the acoustic analogy, shocks around rotating blades are sources of sound. We show that, although for a wing at uniform steady rectilinear motion with shocks the volume quadrupole and shock sources cancel in the far field to the order of 1/r, this cannot happen for rotating blades. In this case, some cancellation between volume quadrupoles and shock sources occurs, yet the remaining shock noise contribution is still potent. A formula for shock noise prediction is presented based on mapping the deformable shock surface to a time independent region. The resulting equation is similar to Formulation 1A of Langley. Shock noise prediction for a hovering model rotor for which experimental noise data exist is presented. The comparison of measured and predicted acoustic data shows good agreement.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
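The source-encoding idea behind WISE can be sketched on a linear toy problem: instead of one simulation per source, all "experiments" d_i = A_i x are combined with a fresh random Rademacher encoding each iteration, so one encoded residual gives an unbiased stochastic gradient of the full misfit. The random matrices below stand in for the wave-equation forward model; all sizes and the learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n, m = 8, 20, 30
A = [rng.normal(size=(m, n)) for _ in range(n_src)]   # per-source models
x_true = rng.normal(size=n)                           # "sound speed" unknowns
d = [Ai @ x_true for Ai in A]                         # per-source data

x = np.zeros(n)
lr = 2e-4
for it in range(3000):
    w = rng.choice([-1.0, 1.0], size=n_src)           # random encoding vector
    Aw = sum(wi * Ai for wi, Ai in zip(w, A))         # one encoded "simulation"
    dw = sum(wi * di for wi, di in zip(w, d))
    x -= lr * Aw.T @ (Aw @ x - dw)                    # stochastic gradient step

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err)
```

Because E[w_i w_j] = delta_ij, the expected encoded gradient equals the sum of the per-source gradients, so each iteration costs one forward model instead of n_src, which is the computational saving WISE exploits for waveform inversion.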
NASA Astrophysics Data System (ADS)
Rasmussen, Karsten B.; Juhl, Peter
2004-05-01
Boundary element method (BEM) calculations are used for the purpose of predicting the acoustic influence of the human head in two cases. In the first case the sound source is the mouth and in the second case the sound is plane waves arriving from different directions in the horizontal plane. In both cases the sound field is studied in relation to two positions above the right ear being representative of hearing aid microphone positions. Both cases are relevant for hearing aid development. The calculations are based upon a direct BEM implementation in Matlab. The meshing is based on the original geometrical data files describing the B&K Head and Torso Simulator 4128 combined with a 3D scan of the pinna.
Linear multivariate evaluation models for spatial perception of soundscape.
Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu
2015-11-01
Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. Spatial perception is a significant aspect of soundscape, yet previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments for subjective spatial perception (SSP), a study of the analysis among semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamic (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness the listeners perceived, the worse their spatial awareness, while the closer and more directional the sound-source image variations, dynamics, and numbers of sound sources in the soundscape are, the better the spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamics, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also appears to slightly increase listener preference. Finally, setting SSPs as functions of the semantic parameters and Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
Source splitting via the point source method
NASA Astrophysics Data System (ADS)
Potthast, Roland; Fazi, Filippo M.; Nelson, Philip A.
2010-04-01
We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119-40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731-42). The task is to separate the sound fields $u_j$, $j = 1, \ldots, n$, of $n \in \mathbb{N}$ sound sources supported in different bounded domains $G_1, \ldots, G_n$ in $\mathbb{R}^3$ from measurements of the field on some microphone array, i.e. from knowledge of the sum of the fields $u = u_1 + \cdots + u_n$ on some open subset $\Lambda$ of a plane. The main idea of the scheme is to calculate filter functions $g_1, \ldots, g_n$ and to construct $u_\ell$ for $\ell = 1, \ldots, n$ from $u|_\Lambda$ in the form $$u_\ell(x) = \int_\Lambda g_{\ell,x}(y)\, u(y)\, \mathrm{d}s(y), \qquad \ell = 1, \ldots, n.$$ We provide the complete mathematical theory for field splitting via the point source method; in particular, we describe uniqueness, solvability of the problem, and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real-data measurements carried out at the Institute of Sound and Vibration Research in Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
Sound source measurement by using a passive sound insulation and a statistical approach
NASA Astrophysics Data System (ADS)
Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.
2015-10-01
This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out in noisy environments while reducing background-noise effects. The proposed method integrates a traditional passive noise insulation system with a statistical approach applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive insulation system. The statistical approach improves, at low frequency, on the sound insulation provided by the passive system alone. The measurement technique has been validated by means of numerical simulations and measurements carried out in a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. The lowest sound pressure level that can be measured with the proposed method, and the measurement error related to its application, are discussed as well.
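The roughly 10 dB low-frequency improvement is consistent with the generic statistics of averaging: coherently averaging N repeated records leaves the signal intact while reducing uncorrelated background-noise power by a factor of N, a 10·log10(N) dB gain. A sketch of that principle only (the paper's actual statistical processing is not reproduced here); all signal parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000, 1000
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 50 * t)   # tone to be measured (illustrative)
noise_rms = 2.0                       # background noise stronger than the tone

def measure(n_avg):
    # synchronous average of n_avg repeated records; the coherent signal
    # adds linearly while uncorrelated noise power drops as 1/n_avg
    records = [signal + rng.normal(0.0, noise_rms, n) for _ in range(n_avg)]
    return np.mean(records, axis=0)

def snr_db(x):
    noise = x - signal
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

gain = snr_db(measure(10)) - snr_db(measure(1))
# gain should be close to 10*log10(10) = 10 dB
```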
Iyendo, Timothy Onosahwo
2017-11-01
Most prior hospital noise research deals with sound only in its noise facet and focuses merely on sound-level abatement, rather than treating sound as an informative or orientational element. This paper stimulates scientific research into the effect of sound interventions on physical and mental health care in the clinical environment. Data sources comprised relevant World Health Organization guidelines and the results of a literature search of ISI Web of Science, ProQuest Central, MEDLINE, PubMed, Scopus, JSTOR and Google Scholar. Noise induces stress and impedes the recovery process. Pleasant natural sound interventions, including singing birds, gentle wind and ocean waves, revealed benefits that contribute to perceived restoration of attention and stress recovery in patients and staff. Clinicians should consider pleasant natural sound as a low-risk, non-pharmacological and unobtrusive intervention to be implemented in routine care for speedier recovery of patients undergoing medical procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khangaonkar, Tarang; Sackmann, Brandon; Long, Wen
2012-08-14
Nutrient pollution from rivers, nonpoint source runoff, and nearly 100 wastewater discharges is a potential threat to the ecological health of Puget Sound, with evidence of hypoxia in some basins. However, the relative contributions of loads entering Puget Sound from natural and anthropogenic sources, and the effects of exchange flow from the Pacific Ocean, are not well understood. Development of a quantitative model of Puget Sound is thus presented to help improve our understanding of the annual biogeochemical cycles in this system, using the unstructured-grid Finite-Volume Coastal Ocean Model framework and the Integrated Compartment Model (CE-QUAL-ICM) water quality kinetics. Results based on 2006 data show that phytoplankton growth and die-off, succession between two species of algae, nutrient dynamics, and dissolved oxygen in Puget Sound are strongly tied to seasonal variation of temperature, solar radiation, and the annual exchange and flushing induced by upwelled Pacific Ocean waters. Concentrations in the mixed outflow surface layer, occupying approximately the upper 5-20 m of the water column, show strong effects of eutrophication from natural and anthropogenic sources and of spring and summer algae blooms, accompanied by depleted nutrients but high dissolved oxygen levels. The bottom layer reflects dissolved oxygen and nutrient concentrations of upwelled Pacific Ocean water, modulated by mixing with biologically active surface outflow in the Strait of Juan de Fuca prior to entering Puget Sound over the Admiralty Inlet. The effect of reflux mixing at the Admiralty Inlet sill, which leaves the bottom waters of Puget Sound with lower nutrient and higher dissolved oxygen levels than the incoming upwelled Pacific Ocean water, is reproduced. Finally, by late winter, with the reduction in algal activity, water-column constituents of interest were renewed and the system appeared to reset with cooler, higher-nutrient, and higher-dissolved-oxygen waters from the Pacific Ocean.
Egocentric and allocentric representations in auditory cortex
Brimijoin, W. Owen; Bizley, Jennifer K.
2017-01-01
A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
How the owl tracks its prey – II
Takahashi, Terry T.
2010-01-01
Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819
Surface Penetrating Radar Simulations for Europa
NASA Technical Reports Server (NTRS)
Markus, T.; Gogineni, S. P.; Green, J. L.; Fung, S. F.; Cooper, J. F.; Taylor, W. W. L.; Garcia, L.; Reinisch, B. W.; Song, P.; Benson, R. F.
2004-01-01
The space environment above the icy surface of Europa is a source of radio noise in this frequency range from natural sources in the Jovian magnetosphere. The ionospheric and magnetospheric plasma environment of Europa affects the propagation of transmitted and return signals between the spacecraft and the solid surface in a frequency-dependent manner. The ultimate resolution of the subsurface sounding measurements will be determined, in part, by the capability to mitigate these effects. We discuss an integrated multi-frequency approach to active radio sounding of the Europan ionospheric and local magnetospheric environments, based on operational experience from the Radio Plasma Imager (RPI) experiment on the IMAGE spacecraft in Earth orbit, in support of the subsurface measurement objectives.
Assessment of Hydroacoustic Propagation Using Autonomous Hydrophones in the Scotia Sea
2010-09-01
Award No. DE-AI52-08NA28654 Proposal No. BAA08-36 ABSTRACT The remote area of the Atlantic Ocean near the Antarctic Peninsula and the South...hydroacoustic blind spot. To investigate the sound propagation and interferences affected by these landmasses in the vicinity of the Antarctic polar...from large icebergs (near-surface sources) were utilized as natural sound sources. Surface sound sources, e.g., ice-related events, tend to suffer less
Active control of noise on the source side of a partition to increase its sound isolation
NASA Astrophysics Data System (ADS)
Tarabini, Marco; Roure, Alain; Pinhede, Cedric
2009-03-01
This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performances and stability were tested with different array configurations, loudspeaker directivities and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with the factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.
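The FXLMS algorithm mentioned above can be sketched in its basic single-channel form: the reference signal is filtered through an estimate of the secondary path before driving the LMS weight update. The plant responses P and S below are arbitrary illustrative FIR filters, not measurements from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plant: primary path P (noise source -> error mic) and
# secondary path S (control loudspeaker -> error mic), short FIR filters.
P = np.array([0.0, 0.8, 0.4, -0.2])
S = np.array([0.5, 0.3, 0.1])
S_hat = S.copy()              # assume a perfect secondary-path estimate

L = 16                        # adaptive filter length
w = np.zeros(L)
mu = 0.01
x_buf = np.zeros(L)           # reference history, newest first
y_buf = np.zeros(len(S))      # control-signal history
fx_buf = np.zeros(L)          # filtered-reference history

errors = []
for _ in range(5000):
    x = rng.normal()                       # reference (noise) sample
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = x
    d = P @ x_buf[:len(P)]                 # disturbance at the error mic
    y = w @ x_buf                          # control filter output
    y_buf = np.roll(y_buf, 1)
    y_buf[0] = y
    e = d + S @ y_buf                      # residual at the error mic
    fx = S_hat @ x_buf[:len(S_hat)]        # reference filtered by S_hat
    fx_buf = np.roll(fx_buf, 1)
    fx_buf[0] = fx
    w -= mu * e * fx_buf                   # FXLMS weight update
    errors.append(e ** 2)

# mean-square error should fall substantially after convergence
reduction_db = 10 * np.log10(np.mean(errors[:500]) / np.mean(errors[-500:]))
```

The decentralized multichannel system of the paper runs one such loop per loudspeaker/microphone pair, each ignoring cross-coupling in its update.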
NASA Astrophysics Data System (ADS)
Montazeri, Allahyar; Taylor, C. James
2017-10-01
This article addresses the coupling of acoustic secondary sources in a confined space within a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior is solved. The resulting model is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems in which the coupling of sources cannot be neglected. In particular, the article develops analytical results for the effect of coupling within an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the enclosure's acoustic characteristics. The results are supported by extensive numerical simulations showing how coupling of the loudspeakers through the acoustic modes of the enclosure changes the source strengths and hence the driving voltage signals applied to the secondary loudspeakers. The practical significance of this model is to provide better insight into the performance of sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones is placed within a fraction of a wavelength of the excitation signal to reduce or reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance, depending on the electromechanical properties of the loudspeakers.
NASA Astrophysics Data System (ADS)
Crone, T. J.; Tolstoy, M.; Carton, H. D.
2013-12-01
In the summer of 2012, two multi-channel seismic (MCS) experiments, Cascadia Open-Access Seismic Transects (COAST) and Ridge2Trench, were conducted in the offshore Cascadia region. An area of growing environmental concern with active source seismic experiments is the potential impact of the received sound on marine mammals, but data relating to this issue is limited. For these surveys sound level 'mitigation radii' are established for the protection of marine mammals, based on direct arrival modeling and previous calibration experiments. Propagation of sound from seismic arrays can be accurately modeled in deep-water environments, but in shallow and sloped environments the complexity of local geology and bathymetry can make it difficult to predict sound levels as a function of distance from the source array. One potential solution to this problem is to measure the received levels in real-time using the ship's streamer (Diebold et al., 2010), which would allow the dynamic determination of suitable mitigation radii. We analyzed R/V Langseth streamer data collected on the shelf and slope off the Washington coast during the COAST experiment to measure received levels in situ up to 8 km away from the ship. Our analysis shows that water depth and bathymetric features can affect received levels in shallow water environments. The establishment of dynamic mitigation radii based on local conditions may help maximize the safety of marine mammals while also maximizing the ability of scientists to conduct seismic research. With increasing scientific and societal focus on subduction zone environments, a better understanding of shallow water sound propagation is essential for allowing seismic exploration of these hazardous environments to continue. Diebold, J. M., M. Tolstoy, L. Doermann, S. Nooner, S. Webb, and T. J. Crone (2010) R/V Marcus G. Langseth Seismic Source: Modeling and Calibration. Geochemistry, Geophysics, Geosystems, 11, Q12012, doi:10.1029/2010GC003216.
Coleman, Philip; Jackson, Philip J B; Olik, Marek; Møller, Martin; Olsen, Martin; Abildgaard Pedersen, Jan
2014-04-01
Since the mid 1990s, acoustics research has been undertaken on the sound zone problem, in which loudspeakers deliver a region of high sound pressure while simultaneously creating an area where the sound is suppressed, so as to facilitate independent listening within the same acoustic enclosure. Published solutions to the sound zone problem derive from areas such as wave field synthesis and beamforming. However, the properties of such methods differ, and performance tends to be compared only against similar approaches. In this study, the suitability of energy focusing, energy cancelation, and synthesis approaches for sound zone reproduction is investigated. Anechoic simulations based on two zones surrounded by a circular array show each of the methods to have a characteristic performance, quantified in terms of acoustic contrast, array control effort, and target sound field planarity. Regularization is shown to have a significant effect on the array effort and the achieved acoustic contrast, particularly when mismatched conditions are considered between calculation of the source weights and their application to the system.
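Acoustic contrast control, one of the energy-cancelation methods referred to above, can be sketched for a circular array with free-field transfer functions: the source weights are the dominant generalized eigenvector of the bright-zone versus (regularized) dark-zone correlation matrices. Geometry, frequency, and regularization below are illustrative assumptions:

```python
import numpy as np

k = 2 * np.pi * 500 / 343.0          # wavenumber at 500 Hz (illustrative)

def green(src, rcv):
    # free-field Green's function between point sets (3-D kernel applied
    # to in-plane coordinates, as a sketch)
    r = np.linalg.norm(rcv[:, None, :] - src[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# circular array of 16 sources, radius 1.5 m
phi = np.linspace(0, 2 * np.pi, 16, endpoint=False)
src = 1.5 * np.c_[np.cos(phi), np.sin(phi)]

# bright and dark zones: two small clusters of field points
bright = np.c_[np.linspace(-0.1, 0.1, 5), np.full(5, 0.4)]
dark = np.c_[np.linspace(-0.1, 0.1, 5), np.full(5, -0.4)]

Gb, Gd = green(src, bright), green(src, dark)
reg = 1e-6                            # Tikhonov regularization (illustrative)
A = Gb.conj().T @ Gb
B = Gd.conj().T @ Gd + reg * np.eye(len(src))
vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
q = vecs[:, np.argmax(vals.real)]     # source weights maximizing contrast

contrast_db = 10 * np.log10(
    np.linalg.norm(Gb @ q) ** 2 / np.linalg.norm(Gd @ q) ** 2)
```

Raising `reg` trades achieved contrast against array effort, the sensitivity discussed in the abstract.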
Consistent modelling of wind turbine noise propagation from source to receiver.
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick
2017-11-01
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
NASA Technical Reports Server (NTRS)
Lehnert, H.; Blauert, Jens; Pompetzki, W.
1991-01-01
In everyday listening, the auditory event perceived by a listener is determined not only by the sound signal that a source emits but also by a variety of environmental parameters: the position, orientation, and directional characteristics of the sound source; the listener's position and orientation; the geometrical and acoustical properties of surfaces which affect the sound field; and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated such that the listener is shifted acoustically into a different acoustic environment without physically moving, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments: Binaural Room Simulation. Its purpose is to compute the binaural impulse response related to a virtual acoustic environment, taking into account all the parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each virtual source emits a signal that is correlated with, but not necessarily identical to, the signal emitted by the direct sound source. If source and receiver do not move, the acoustic environment becomes a linear time-invariant system; the Binaural Impulse Response from the source to the listener's eardrums then contains all auditory information relevant to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
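The final step described above, convolving the Binaural Impulse Response with dry signals, is a plain convolution per ear. A minimal sketch with a toy BRIR (all filter values are illustrative):

```python
import numpy as np

def auralize(dry, brir_l, brir_r):
    """Render a dry (anechoic) mono signal into a virtual acoustic
    environment by convolving it with the binaural impulse response;
    returns a two-channel headphone feed of shape (2, N)."""
    return np.stack([np.convolve(dry, brir_l), np.convolve(dry, brir_r)])

# toy BRIR: direct sound plus one attenuated, delayed reflection,
# with an interaural level difference favouring the left ear
brir_l = np.array([1.0, 0.0, 0.0, 0.3])
brir_r = np.array([0.6, 0.0, 0.0, 0.2])
out = auralize(np.array([1.0]), brir_l, brir_r)
```

Feeding a unit impulse as the dry signal returns the BRIR itself, which is a convenient sanity check.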
Optical-beam wavefront control based on the atmospheric backscatter signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banakh, V A; Razenkov, I A; Rostov, A P
2015-02-28
The feasibility of compensating for aberrations of the initial optical-beam wavefront by aperture sounding, based on the atmospheric backscatter signal from an additional laser source with a different wavelength, is studied experimentally. It is shown that an adaptive system based on this principle can compensate for distortions of the initial beam wavefront on a near-surface path in the atmosphere: the beam divergence decreases, while the detected mean backscatter power from the additional laser source increases.
The impact of the microphone position on the frequency analysis of snoring sounds.
Herzog, Michael; Kühnel, Thomas; Bremert, Thomas; Herzog, Beatrice; Hosemann, Werner; Kaftan, Holger
2009-08-01
Frequency analysis of snoring sounds has been reported as a diagnostic tool to differentiate between different sources of snoring. Several studies have presented diverging results of such analyses, and the outcome varies with the position of the microphones used. The present study investigated the influence of different microphone positions on the outcome of the frequency analysis of snoring sounds. Nocturnal snoring was recorded simultaneously at six positions (air-coupled: 30 cm middle, 100 cm middle, and 30 cm lateral to both sides of the patients' head; body contact: neck and parasternal) in five patients. The microphones used had a flat frequency response and a similar frequency range (10/40 Hz-18 kHz). Frequency analysis was performed by fast Fourier transformation, and frequency bands as well as peak intensities (Peaks 1-5) were detected. Air-coupled microphones captured a wider frequency range (60 Hz-10 kHz) than contact microphones: the contact microphone at the cervical position showed a cut-off above 300 Hz, and the contact microphone at the parasternal position a cut-off above 100 Hz. By way of example, the study demonstrates that frequencies above 1,000 Hz do appear in complex snoring patterns, and it is emphasised that high frequencies are important for the interpretation of snoring sounds with respect to identifying the source of snoring. Contact microphones might be used in screening devices, but for a natural analysis of snoring sounds the use of air-coupled microphones is indispensable.
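The FFT-based peak detection described above can be sketched as follows; the Hann window and simple local-maximum picking are generic choices, not the study's exact procedure:

```python
import numpy as np

def spectrum_peaks(x, fs, n_peaks=5):
    """Magnitude spectrum of a sound segment via FFT, returning the
    frequencies of the n_peaks strongest local maxima (cf. the
    'Peaks 1-5' of the study)."""
    win = np.hanning(len(x))
    spec = np.abs(np.fft.rfft(x * win))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    # indices of local maxima of the magnitude spectrum
    idx = [i for i in range(1, len(spec) - 1)
           if spec[i] > spec[i - 1] and spec[i] >= spec[i + 1]]
    idx.sort(key=lambda i: spec[i], reverse=True)
    return [freqs[i] for i in idx[:n_peaks]]
```

On a synthetic snore-like mixture of a 120 Hz fundamental and an 800 Hz component, the two strongest peaks land on those frequencies.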
Broad band sound from wind turbine generators
NASA Technical Reports Server (NTRS)
Hubbard, H. H.; Shepherd, K. P.; Grosveld, F. W.
1981-01-01
Brief descriptions are given of the various types of large wind turbines and their sound characteristics. Candidate sources of broadband sound are identified and are rank ordered for a large upwind configuration wind turbine generator for which data are available. The rotor is noted to be the main source of broadband sound which arises from inflow turbulence and from the interactions of the turbulent boundary layer on the blade with its trailing edge. Sound is radiated about equally in all directions but the refraction effects of the wind produce an elongated contour pattern in the downwind direction.
NASA Astrophysics Data System (ADS)
Cho, Wan-Ho; Ih, Jeong-Guon; Toi, Takeshi
2015-12-01
To render desired sound-field characteristics, the acoustic actuators in an array must be properly conditioned, and the contribution of each source depends strongly on its position. Actuators located at positions that are inefficient for control consume excessive input power or become overly sensitive to disturbing noise. Such actuators can be considered redundant and should be removed, as long as their elimination does not significantly degrade the overall control performance. The inverse approach based on the acoustical holography concept, which employs the transfer matrix between sources and field points as its core element, is known to be useful for rendering a desired sound field. By examining the information contained in the transfer matrix between actuators and field points, the linear independence of an actuator from the others in the array can be evaluated. To this end, the square of the right singular vector, which represents the radiation contribution of the source, can be used as an indicator. The position least efficient at fulfilling the desired sound field is identified as the one with the smallest indicator value among all actuator positions. The elimination proceeds one by one, or group by group, until the number of remaining actuators meets a preset target. Control examples for exterior and interior spaces are used for validation. The results reveal that the present method of removing the least independent actuators, for a given number of actuators and field condition, is quite effective in realizing the desired sound field under noisy input conditions and in minimizing the required input power.
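The elimination procedure can be sketched as a greedy loop over an SVD of the transfer matrix. The indicator below, the sum of squared singular values weighted by the squared right-singular-vector entries, is one plausible reading of the "square of the right singular vector" contribution measure (it equals the squared norm of source j's column of the transfer matrix):

```python
import numpy as np

def select_actuators(G, n_keep):
    """Greedy elimination of redundant actuators from a transfer matrix
    G (field points x sources). At each step the actuator with the
    smallest radiation-contribution indicator is dropped, until n_keep
    actuators remain. Returns the indices of the kept columns."""
    active = list(range(G.shape[1]))
    while len(active) > n_keep:
        _, s, Vh = np.linalg.svd(G[:, active], full_matrices=False)
        # indicator_j = sum_i sigma_i^2 |v_ij|^2 for each active source j
        indicator = (s[:, None] ** 2 * np.abs(Vh) ** 2).sum(axis=0)
        active.pop(int(np.argmin(indicator)))
    return active
```

An actuator whose column of G is nearly zero, i.e. one that barely radiates to the control points, is eliminated first.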
Effects of sound source directivity on auralizations
NASA Astrophysics Data System (ADS)
Sheets, Nathan W.; Wang, Lily M.
2002-05-01
Auralization, the process of rendering audible the sound field in a simulated space, is a useful tool in the design of acoustically sensitive spaces. An auralization depends on the calculation of an impulse response between a source and a receiver, each with certain directional behavior. Many auralizations created to date have used omnidirectional sources; the effects of source directivity on auralizations are a relatively unexplored area. To examine whether and how the directivity of a sound source affects the acoustical results obtained in a room, we used directivity data for three sources (violin, piano, and human voice) in the room acoustic modeling program Odeon. The results from using directional data are compared to those obtained using omnidirectional source behavior, through both objective measure calculations and subjective listening tests.
Development of a directivity-controlled piezoelectric transducer for sound reproduction
NASA Astrophysics Data System (ADS)
Bédard, Magella; Berry, Alain
2008-04-01
Present sound reproduction systems do not attempt to simulate the spatial radiation of musical instruments, or sound sources in general, even though the spatial directivity has a strong impact on the psychoacoustic experience. A transducer consisting of 4 piezoelectric elemental sources made from curved PVDF films is used to generate a target directivity pattern in the horizontal plane, in the frequency range of 5-20 kHz. The vibratory and acoustical response of an elemental source is addressed, both theoretically and experimentally. Two approaches to synthesize the input signals to apply to each elemental source are developed in order to create a prescribed, frequency-dependent acoustic directivity. The circumferential Fourier decomposition of the target directivity provides a compromise between the magnitude and the phase reconstruction, whereas the minimization of a quadratic error criterion provides a best magnitude reconstruction. This transducer can improve sound reproduction by introducing the spatial radiation aspect of the original source at high frequency.
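The circumferential Fourier decomposition of a target directivity amounts to a DFT over the angle. A sketch with a hypothetical cardioid target (exactly order 1, so low-order truncation is lossless; a real target would incur truncation error):

```python
import numpy as np

n = 64
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
# hypothetical target directivity in the horizontal plane: a cardioid
target = 0.5 * (1 + np.cos(theta))

# circumferential Fourier decomposition: circular-harmonic coefficients
coeffs = np.fft.fft(target) / n

# reconstruct using only the lowest orders |m| <= 1
m = np.fft.fftfreq(n, d=1.0 / n)          # integer harmonic orders
trunc = np.where(np.abs(m) <= 1, coeffs, 0)
recon = np.real(np.fft.ifft(trunc * n))
```

Keeping both magnitude and phase of the coefficients corresponds to the compromise reconstruction described in the abstract.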
Callback response of dugongs to conspecific chirp playbacks.
Ichikawa, Kotaro; Akamatsu, Tomonari; Shinke, Tomio; Adulyanukosol, Kanjana; Arai, Nobuaki
2011-06-01
Dugongs (Dugong dugon) produce bird-like calls such as chirps and trills. The vocal responses of dugongs to playbacks of several acoustic stimuli were investigated. Animals were exposed to four different playback stimuli: a recorded chirp from a wild dugong, a synthesized down-sweep sound, a synthesized constant-frequency sound, and silence. Wild dugongs vocalized more frequently after playback of broadcast chirps than after constant-frequency sounds or silence. The down-sweep sound also elicited more vocal responses than did silence; no significant difference was found between the broadcast chirps and the down-sweep sound. The ratio of wild dugong chirps to all calls and the dominant frequencies of the wild dugong calls were significantly higher during playbacks of broadcast chirps, down-sweep sounds, and constant-frequency sounds than during silence. The source level and duration of dugong chirps increased significantly as signaling distance increased; no significant correlation was found between signaling distance and the source level of trills. These results show that dugongs vocalize in response to playbacks of frequency-modulated signals and suggest that the source level of dugong chirps may be adjusted to compensate for transmission loss between source and receiver. This study provides the first behavioral observations revealing the function of dugong chirps.
Characteristics of sound radiation from turbulent premixed flames
NASA Astrophysics Data System (ADS)
Rajaram, Rajesh
Turbulent combustion processes are inherently unsteady and thus a source of acoustic radiation, which arises from the unsteady expansion of reacting gases. While prior studies have extensively characterized the total sound power radiated by turbulent flames, their spectral characteristics are not well understood. The objective of this research is to measure the flow and acoustic properties of an open turbulent premixed jet flame and explain the spectral trends of combustion noise. The flame dynamics were characterized using high-speed chemiluminescence images of the flame. A model based on the solution of the wave equation with unsteady heat release as the source was developed and used to relate the measured chemiluminescence fluctuations to the acoustic emission. Acoustic measurements were performed in an anechoic environment for several burner diameters, flow velocities, turbulence intensities, fuels, and equivalence ratios. The acoustic emissions are shown to be characterized by four parameters: peak frequency (F_peak), low-frequency slope (beta), high-frequency slope (alpha), and overall sound pressure level (OASPL). The peak frequency is characterized by a Strouhal number based on the mean velocity and a flame length. The transfer function between the acoustic spectrum and the spectrum of heat-release fluctuations has an f^2 dependence at low frequencies and converges to a constant value at high frequencies. Furthermore, the OASPL was found to be characterized by (F_peak m_f H)^2, which resembles the source term in the wave equation.
NASA Technical Reports Server (NTRS)
Embleton, Tony F. W.; Daigle, Gilles A.
1991-01-01
Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
Wan, Lin; Zhou, Ji-Xun; Rogers, Peter H
2010-08-01
A joint China-U.S. underwater acoustics experiment was conducted in the Yellow Sea with a very flat bottom and a strong and sharp thermocline. Broadband explosive sources were deployed both above and below the thermocline along two radial lines up to 57.2 km and a quarter circle with a radius of 34 km. Two inversion schemes are used to obtain the seabottom sound speed. One is based on extracting normal mode depth functions from the cross-spectral density matrix. The other is based on the best match between the calculated and measured modal arrival times for different frequencies. The inverted seabottom sound speed is used as a constraint condition to extract the seabottom sound attenuation by three methods. The first method involves measuring the attenuation coefficients of normal modes. In the second method, the seabottom sound attenuation is estimated by minimizing the difference between the theoretical and measured modal amplitude ratios. The third method is based on finding the best match between the measured and modeled transmission losses (TLs). The resultant seabottom attenuation, averaged over three independent methods, can be expressed as alpha=(0.33+/-0.02)f(1.86+/-0.04)(dB/m kHz) over a frequency range of 80-1000 Hz.
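The fitted attenuation law can be evaluated directly. A minimal sketch of the reported mean fit (the uncertainty terms are dropped, and the fit is only valid over the stated 80-1000 Hz band):

```python
def seabottom_attenuation_db_per_m(f_hz, coeff=0.33, exponent=1.86):
    """Mean power-law fit alpha = 0.33 * f**1.86 with f in kHz,
    returning seabottom attenuation in dB/m."""
    return coeff * (f_hz / 1000.0) ** exponent

# attenuation across the fitted band, e.g. at 80, 400, and 1000 Hz
band_losses = [seabottom_attenuation_db_per_m(f) for f in (80.0, 400.0, 1000.0)]
```

The exponent near 1.86 means attenuation grows almost quadratically with frequency, so the low end of the band penetrates the bottom far more readily than the high end.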
Wensveen, Paul J; von Benda-Beckmann, Alexander M; Ainslie, Michael A; Lam, Frans-Peter A; Kvadsheim, Petter H; Tyack, Peter L; Miller, Patrick J O
2015-05-01
The behaviour of a marine mammal near a noise source can modulate the sound exposure it receives. We demonstrate that two long-finned pilot whales both surfaced in synchrony with consecutive arrivals of multiple sonar pulses. We then assess the effect of surfacing and other behavioural response strategies on the received cumulative sound exposure levels and maximum sound pressure levels (SPLs) by modelling realistic spatiotemporal interactions of a pilot whale with an approaching source. Under the propagation conditions of our model, some response strategies observed in the wild were effective in reducing received levels (e.g. movement perpendicular to the source's line of approach), but others were not (e.g. switching from deep to shallow diving; synchronous surfacing after maximum SPLs). Our study exemplifies how simulations of source-whale interactions guided by detailed observational data can improve our understanding of the motivations behind behavioural responses observed in the wild (e.g., reducing sound exposure, prey movement). Copyright © 2015 Elsevier Ltd. All rights reserved.
Litovsky, Ruth Y.; Godar, Shelly P.
2010-01-01
The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369
Hermannsen, Line; Beedholm, Kristian
2017-01-01
Acoustic harassment devices (AHD) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating that of an AHD (Lofitech), but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted to porpoises and 13 exposures to seals. Porpoises were found to exhibit avoidance reactions out to ranges of 525 m from the sound source. Contrary to this, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for application of AHDs in multi-species habitats, as sound levels required to deter less sensitive species (seals) can lead to excessive and unwanted large deterrence ranges on more sensitive species (porpoises). PMID:28791155
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin
2016-01-01
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels of different sizes and other underwater sound sources in both static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where the sound of flowing water is included in background measurements. The size of vessels measured ranged from a small fishing boat with a 60 HP outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, many times greater than the sound created by an operating hydrokinetic (HK) turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed values.
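The spherical-versus-cylindrical comparison above reduces to the two standard spreading-loss laws, 20 log10(r) and 10 log10(r). A minimal sketch (the 1 m reference distance is a common convention assumed here; the study's actual fitting procedure is not described in the abstract):

```python
import math

def tl_spherical(r_m, r0_m=1.0):
    """Spherical spreading: 20*log10(r/r0) dB, point source in open water."""
    return 20.0 * math.log10(r_m / r0_m)

def tl_cylindrical(r_m, r0_m=1.0):
    """Cylindrical spreading: 10*log10(r/r0) dB, boundary-trapped sound."""
    return 10.0 * math.log10(r_m / r0_m)

def received_level(source_level_db, r_m, model=tl_spherical):
    """Received level = source level minus transmission loss."""
    return source_level_db - model(r_m)
```

At 100 m the two laws differ by 20 dB, which is why the choice of spreading model matters when back-propagating measured levels to an equivalent source level.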
Andrews, John T.; Barber, D.C.; Jennings, A.E.; Eberl, D.D.; Maclean, B.; Kirby, M.E.; Stoner, J.S.
2012-01-01
Core HU97048-007PC was recovered from the continental Labrador Sea slope at a water depth of 945 m, 250 km seaward from the mouth of Cumberland Sound, and 400 km north of Hudson Strait. Cumberland Sound is a structural trough partly floored by Cretaceous mudstones and Paleozoic carbonates. The record extends from ∼10 to 58 ka. On-board logging revealed a complex series of lithofacies, including buff-colored detrital carbonate-rich sediments [Heinrich (H)-events] frequently bracketed by black facies. We investigate the provenance of these facies using quantitative X-ray diffraction on drill-core samples from Paleozoic and Cretaceous bedrock from the SE Baffin Island Shelf, and on the < 2-mm sediment fraction in a transect of five cores from Cumberland Sound to the NW Labrador Sea. A sediment unmixing program was used to discriminate between sediment sources, which included dolomite-rich sediments from Baffin Bay, calcite-rich sediments from Hudson Strait and discrete sources from Cumberland Sound. Results indicated that the bulk of the sediment was derived from Cumberland Sound, but Baffin Bay contributed to sediments coeval with H-0 (Younger Dryas), whereas Hudson Strait was the source during H-events 1–4. Contributions from the Cretaceous outcrops within Cumberland Sound bracket H-events, thus both leading and lagging Hudson Strait-sourced H-events.
Is low frequency ocean sound increasing globally?
Miksis-Olds, Jennifer L; Nichols, Stephen M
2016-01-01
Low frequency sound has increased in the Northeast Pacific Ocean over the past 60 yr [Ross (1993) Acoust. Bull. 18, 5-8; (2005) IEEE J. Ocean. Eng. 30, 257-261; Andrew, Howe, Mercer, and Dzieciuch (2002) Acoust. Res. Lett. Online 3, 65-70; McDonald, Hildebrand, and Wiggins (2006) J. Acoust. Soc. Am. 120, 711-717; Chapman and Price (2011) J. Acoust. Soc. Am. 129, EL161-EL165] and in the Indian Ocean over the past decade, [Miksis-Olds, Bradley, and Niu (2013) J. Acoust. Soc. Am. 134, 3464-3475]. More recently, Andrew, Howe, and Mercer's [(2011) J. Acoust. Soc. Am. 129, 642-651] observations in the Northeast Pacific show a level or slightly decreasing trend in low frequency noise. It remains unclear what the low frequency trends are in other regions of the world. In this work, data from the Comprehensive Nuclear-Test Ban Treaty Organization International Monitoring System was used to examine the rate and magnitude of change in low frequency sound (5-115 Hz) over the past decade in the South Atlantic and Equatorial Pacific Oceans. The dominant source observed in the South Atlantic was seismic air gun signals, while shipping and biologic sources contributed more to the acoustic environment at the Equatorial Pacific location. Sound levels over the past 5-6 yr in the Equatorial Pacific have decreased. Decreases were also observed in the ambient sound floor in the South Atlantic Ocean. Based on these observations, it does not appear that low frequency sound levels are increasing globally.
Separation of acoustic waves in isentropic flow perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henke, Christian, E-mail: christian.henke@atlas-elektronik.com
2015-04-15
The present contribution investigates the mechanisms of sound generation and propagation in the case of highly-unsteady flows. Based on the linearisation of the isentropic Navier–Stokes equation around a new pathline-averaged base flow, it is demonstrated for the first time that flow perturbations of a non-uniform flow can be split into acoustic and vorticity modes, with the acoustic modes being independent of the vorticity modes. Therefore, we can propose this acoustic perturbation as a general definition of sound. As a consequence of the splitting result, we conclude that the present acoustic perturbation is propagated by the convective wave equation and fulfils Lighthill’s acoustic analogy. Moreover, we can define the deviations of the Navier–Stokes equation from the convective wave equation as “true” sound sources. In contrast to other authors, no assumptions on a slowly varying or irrotational flow are necessary. Using a symmetry argument for the conservation laws, an energy conservation result and a generalisation of the sound intensity are provided. - Highlights: • First splitting of non-uniform flows into acoustic and non-acoustic components. • This result leads to a generalisation of sound which is compatible with Lighthill’s acoustic analogy. • A closed equation for the generation and propagation of sound is given.
Riede, Tobias; Goller, Franz
2010-10-01
Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. All rights reserved.
The auditory P50 component to onset and offset of sound
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi
2008-01-01
Objective: The auditory Event-Related Potential (ERP) P50 components to sound onset and offset have been reported to be similar, but their magnetic homologue has been reported absent to sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of silent intervals of 0.5 s (gaps) appearing randomly in otherwise continuous white noise and compared to ERPs to randomly distributed click pairs with half second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates to the clicks and sound onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks, while to noise offset it was absent. Latency of P50 was similar to noise onset (56 msec) and to clicks (53 msec). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher to noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: P50 to sound offset is absent compared to the distinct P50 to sound onset and to clicks, at different intracranial sources. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene. Sound offset does not involve a new sound and hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent to offset. PMID:18055255
On the role of glottis-interior sources in the production of voiced sound.
Howe, M S; McGowan, R S
2012-02-01
The voice source is dominated by aeroacoustic sources downstream of the glottis. In this paper an investigation is made of the contribution to voiced speech of secondary sources within the glottis. The acoustic waveform is ultimately determined by the volume velocity of air at the glottis, which is controlled by vocal fold vibration, pressure forcing from the lungs, and unsteady backreactions from the sound and from the supraglottal air jet. The theory of aerodynamic sound is applied to study the influence on the fine details of the acoustic waveform of "potential flow" added-mass-type glottal sources, glottis friction, and vorticity either in the glottis-wall boundary layer or in the portion of the free jet shear layer within the glottis. These sources govern predominantly the high frequency content of the sound when the glottis is near closure. A detailed analysis performed for a canonical, cylindrical glottis of rectangular cross section indicates that glottis-interior boundary/shear layer vortex sources and the surface frictional source are of comparable importance; the influence of the potential flow source is about an order of magnitude smaller. © 2012 Acoustical Society of America
Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2
NASA Technical Reports Server (NTRS)
Zhang, Weiguo; Raveendra, Ravi
2014-01-01
Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1 and to compare the results with simulated leak sounds. A series of electronically generated structural ultrasonic noise sources were created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall at the source were unknown, but were estimated from the closest sensor measurement. The E-FEM method represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.
Andreeva, I G; Vartanian, I A
2012-01-01
The ability to evaluate the direction of amplitude change in sound stimuli was studied in adults and in 11-12- and 15-16-year-old teenagers. Sequences of fragments of a 1 kHz tone whose amplitude changed over time were used as models of approaching and receding sound sources. The 11-12-year-olds made significantly more errors in judging the direction of amplitude change than the other two groups, including in repeated experiments. The structure of the errors, i.e., the ratio of errors for stimuli increasing versus decreasing in amplitude, also differed between teenagers and adults. The possible effect of nonspecific activation of the cerebral cortex in teenagers on decision-making about complex sound stimuli, including the estimation of approach and withdrawal of a sound source, is discussed.
Interior and exterior sound field control using general two-dimensional first-order sources.
Poletti, M A; Abhayapala, T D
2011-01-01
Reproduction of a given sound field interior to a circular loudspeaker array without producing an undesirable exterior sound field is an unsolved problem over a broadband of frequencies. At low frequencies, by implementing the Kirchhoff-Helmholtz integral using a circular discrete array of line-source loudspeakers, a sound field can be recreated within the array and produce no exterior sound field, provided that the loudspeakers have azimuthal polar responses with variable first-order responses which are a combination of a two-dimensional (2D) monopole and a radially oriented 2D dipole. This paper examines the performance of circular discrete arrays of line-source loudspeakers which also include a tangential dipole, providing general variable-directivity responses in azimuth. It is shown that at low frequencies, the tangential dipoles are not required, but that near and above the Nyquist frequency, the tangential dipoles can both improve the interior accuracy and reduce the exterior sound field. The additional dipoles extend the useful range of the array by around an octave.
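The variable-directivity loudspeaker response described above is a weighted sum of a 2D monopole, a radially oriented dipole, and (optionally) a tangential dipole. A sketch of the azimuthal polar response, with weights and normalization chosen purely for illustration:

```python
import numpy as np

def first_order_response(theta, mono, radial, tangential):
    """General 2D first-order azimuthal response:
    monopole + radial dipole (cos term) + tangential dipole (sin term)."""
    return mono + radial * np.cos(theta) + tangential * np.sin(theta)

# equal monopole and radial-dipole weights give a cardioid
# with a null at theta = pi
theta = np.linspace(0.0, 2.0 * np.pi, 361)
cardioid = first_order_response(theta, 0.5, 0.5, 0.0)
```

Adding the tangential (sin) term steers the null off the radial axis, which is the extra degree of freedom the paper exploits near and above the Nyquist frequency.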
Sound Radiated by a Wave-Like Structure in a Compressible Jet
NASA Technical Reports Server (NTRS)
Golubev, V. V.; Prieto, A. F.; Mankbadi, R. R.; Dahl, M. D.; Hixon, R.
2003-01-01
This paper extends the analysis of acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. Compared to previous work, a modified approach to the sound source modeling is examined that employs a set of solutions to linearized Euler equations. The sound radiation is then calculated using an integral surface method.
Numerical Recovering of a Speed of Sound by the BC-Method in 3D
NASA Astrophysics Data System (ADS)
Pestov, Leonid; Bolgova, Victoria; Danilin, Alexandr
We develop the numerical algorithm for solving the inverse problem for the wave equation by the Boundary Control method. The problem, which we refer to as a forward one, is an initial boundary value problem for the wave equation with zero initial data in the bounded domain. The inverse problem is to find the speed of sound c(x) from measurements of waves induced by a set of boundary sources. The time of observation is assumed to be greater than twice the acoustic radius of the domain. The numerical algorithm for sound speed reconstruction is based on two steps. The first one is to find a (sufficiently large) number of controls {f_j} (the basic control is defined by the position of the source and some time delay), which generate the same number of known harmonic functions, i.e. Δ {u_j}(.,T) = 0 , where {u_j} is the wave generated by the control {f_j} . After that, a linear integral equation with respect to the speed of sound is obtained. A piecewise-constant model of the speed is used. The result of numerical testing on a 3-dimensional model is presented.
Blind estimation of reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.
2003-11-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
Online estimation of room reverberation time
NASA Astrophysics Data System (ADS)
Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.
2003-04-01
The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
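The decay model shared by the two abstracts above can be sketched directly: the diffusive tail is modelled as exponentially damped Gaussian white noise, y[n] = a^n w[n], and the decay rate a is found by maximum likelihood. This sketch uses a plain grid search over a; the continuous online updating and the order-statistics filter of the original work are omitted.

```python
import numpy as np

def estimate_rt60(tail, fs, grid=None):
    """ML estimate of RT60 (seconds) from a free-decay segment,
    modelling the tail as y[n] = a**n * w[n], w[n] ~ N(0, sigma**2)."""
    if grid is None:
        grid = np.linspace(0.995, 0.99995, 2000)
    n = np.arange(len(tail))
    best_a, best_ll = grid[0], -np.inf
    for a in grid:
        # sigma**2 is profiled out: its ML value is mean(y**2 / a**(2n))
        sigma2 = np.mean(tail**2 * np.exp(-2.0 * n * np.log(a)))
        ll = -0.5 * len(tail) * np.log(sigma2) - np.log(a) * n.sum()
        if ll > best_ll:
            best_a, best_ll = a, ll
    # the energy envelope decays as a**(2n); a 60 dB drop takes n60 samples
    n60 = -3.0 * np.log(10.0) / np.log(best_a)
    return n60 / fs

# synthetic check: a tail built with a known 0.5 s reverberation time
rng = np.random.default_rng(0)
fs, rt_true = 8000, 0.5
a_true = np.exp(-3.0 * np.log(10.0) / (fs * rt_true))
tail = a_true ** np.arange(3000) * rng.standard_normal(3000)
rt_est = estimate_rt60(tail, fs)
```

Because the estimator never needs the excitation signal or the room geometry, it is "blind" in the sense the abstracts use: it works on passively received microphone data alone.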
NASA Astrophysics Data System (ADS)
Bolduc, A.; Gauthier, P.-A.; Berry, A.
2017-12-01
While perceptual evaluation and sound quality testing with a jury are now recognized as essential parts of acoustical product development, they are rarely implemented with spatial sound field reproduction. Instead, monophonic, stereophonic or binaural presentations are used. This paper investigates the workability and interest of a method to use complete vibroacoustic engineering models for auralization based on 2.5D Wave Field Synthesis (WFS). This method is proposed so that spatial characteristics such as directivity patterns and direction-of-arrival are part of the reproduced sound field, while preserving the model's complete formulation that coherently combines frequency and spatial responses. Modifications to the standard 2.5D WFS operators are proposed for extended primary sources, affecting the reference line definition and compensating for out-of-plane elementary primary sources. Reported simulations and experiments of reproductions of two physically-accurate vibroacoustic models of thin plates show that the proposed method allows for an effective reproduction in the horizontal plane: spatial- and frequency-domain features are recreated. Application of the method to the sound rendering of a virtual transmission loss measurement setup shows the potential of the method for use in virtual acoustical prototyping for jury testing.
Kastelein, R A; Verboom, W C; Muijsers, M; Jennings, N V; van der Heul, S
2005-05-01
To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network is currently under development: Acoustic Communication network for Monitoring of underwater Environment in coastal areas (ACME). Marine mammals might be affected by ACME sounds since they use sounds of similar frequencies (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour porpoise. Therefore, as part of an environmental impact assessment program, two captive harbour porpoises were subjected to four sounds, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' positions and respiration rates during a test period with those during a baseline period. Each of the four sounds could be made a deterrent by increasing the amplitude of the sound. The porpoises reacted by swimming away from the sounds and by slightly, but significantly, increasing their respiration rate. From the sound pressure level distribution in the pen, and the distribution of the animals during test sessions, discomfort sound level thresholds were determined for each sound. In combination with information on sound propagation in the areas where the communication system may be deployed, the extent of the 'discomfort zone' can be estimated for several source levels (SLs). The discomfort zone is defined as the area around a sound source that harbour porpoises are expected to avoid. Based on these results, SLs can be selected that have an acceptable effect on harbour porpoises in particular areas. 
The discomfort zone of a communication sound depends on the selected sound, the selected SL, and the propagation characteristics of the area in which the sound system is operational. In shallow, winding coastal water courses, with sandbanks, etc., the type of habitat in which the ACME sounds will be produced, propagation loss cannot be accurately estimated by using a simple propagation model, but should be measured on site. The SL of the communication system should be adapted to each area (taking into account bounding conditions created by narrow channels, sound propagation variability due to environmental factors, and the importance of an area to the affected species). The discomfort zone should not prevent harbour porpoises from spending sufficient time in ecologically important areas (for instance feeding areas), or routes towards these areas.
Photoacoustic Effect Generated from an Expanding Spherical Source
NASA Astrophysics Data System (ADS)
Bai, Wenyu; Diebold, Gerald J.
2018-02-01
Although the photoacoustic effect is typically generated by amplitude-modulated continuous or pulsed radiation, the form of the wave equation for pressure that governs the generation of sound indicates that optical sources moving in an absorbing fluid can produce sound as well. Here, the characteristics of the acoustic wave produced by a radially symmetric Gaussian source expanding outwardly from the origin are found. The unique feature of the photoacoustic effect from the spherical source is a trailing compressive wave that arises from reflection of an inwardly propagating component of the wave. Similar to the one-dimensional geometry, an unbounded amplification effect is found for the Gaussian source expanding at the sound speed.
Rising tones and rustling noises: Metaphors in gestural depictions of sounds
Scurto, Hugo; Françoise, Jules; Bevilacqua, Frédéric; Houix, Olivier; Susini, Patrick
2017-01-01
Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested a different role for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g. rain drops, rustling leaves) pantomimed and embodied by the participants’ gestures. PMID:28750071
NASA Astrophysics Data System (ADS)
Kozuka, Teruyuki; Yasui, Kyuichi; Tuziuti, Toru; Towata, Atsuya; Lee, Judy; Iida, Yasuo
2009-07-01
Using a standing-wave field generated between a sound source and a reflector, it is possible to trap small objects at nodes of the sound pressure distribution in air. In this study, a sound field generated under a flat or concave reflector was studied by both experimental measurement and numerical calculation. The calculated result agrees well with the experimental data. The maximum force generated between a sound source of 25.0 mm diameter and a concave reflector is 0.8 mN in the experiment. A steel ball of 2.0 mm in diameter was levitated in the sound field in air.
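The trapping described here follows from the node structure of the standing wave: small objects collect at the pressure nodes, which are spaced half a wavelength apart. A minimal sketch of that spacing (the 28 kHz drive frequency is an assumed illustrative value, not taken from the paper):

```python
# Pressure-node spacing in a standing-wave acoustic levitator in air.
# Particles much smaller than the wavelength are trapped near the nodes.
c = 343.0          # speed of sound in air at ~20 deg C, m/s
f = 28e3           # assumed ultrasonic drive frequency, Hz (illustrative)
wavelength = c / f
node_spacing = wavelength / 2.0
print(f"wavelength = {wavelength * 1e3:.2f} mm, node spacing = {node_spacing * 1e3:.2f} mm")
```

At this assumed frequency the nodes sit roughly 6 mm apart, consistent with levitating millimeter-scale objects between a source and a reflector a few wavelengths away.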
Amplitude modulation detection by human listeners in sound fields.
Zahorik, Pavel; Kim, Duck O; Kuwada, Shigeyuki; Anderson, Paul W; Brandewie, Eugene; Srinivasan, Nirmal
2011-10-01
The temporal modulation transfer function (TMTF) approach allows techniques from linear systems analysis to be used to predict how the auditory system will respond to arbitrary patterns of amplitude modulation (AM). Although this approach forms the basis for a standard method of predicting speech intelligibility based on estimates of the acoustical modulation transfer function (MTF) between source and receiver, human sensitivity to AM as characterized by the TMTF has not been extensively studied under realistic listening conditions, such as in reverberant sound fields. Here, TMTFs (octave bands from 2–512 Hz) were obtained in three listening conditions simulated using virtual auditory space techniques: diotic, anechoic sound field, and reverberant room sound field. TMTFs were then related to acoustical MTFs estimated using two different methods in each of the listening conditions. Both diotic and anechoic data were found to be in good agreement with classic results, but AM thresholds in the reverberant room were lower than predictions based on acoustical MTFs. This result suggests that simple linear systems techniques may not be appropriate for predicting TMTFs from acoustical MTFs in reverberant sound fields, and may be suggestive of mechanisms that functionally enhance modulation during reverberant listening.
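The acoustical MTF that such predictions rest on can be estimated from a room impulse response via Schroeder's integral, m(F) = |∫ h²(t) e^(−j2πFt) dt| / ∫ h²(t) dt. A sketch on a synthetic exponentially decaying impulse response (the sampling rate and RT60 below are assumed values for illustration):

```python
import numpy as np

def mtf(h, fs, fm):
    """Modulation transfer factor of impulse response h at modulation
    frequency fm (Hz), via Schroeder's integral over the squared response."""
    e = h.astype(float) ** 2
    t = np.arange(e.size) / fs
    return np.abs(np.sum(e * np.exp(-2j * np.pi * fm * t))) / np.sum(e)

# Synthetic reverberant impulse response: noise with an exponential decay
# whose -60 dB time equals the assumed RT60.
fs, rt60 = 16000, 0.6
t = np.arange(0, 1.0, 1.0 / fs)
h = np.random.default_rng(0).standard_normal(t.size) * np.exp(-6.91 * t / rt60)

for fm in (2.0, 8.0, 32.0):
    print(f"m({fm:g} Hz) = {mtf(h, fs, fm):.3f}")  # reverberation low-passes AM
```

The factor falls with modulation frequency, which is the low-pass effect of reverberation on AM that the listening experiments probe.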
Theoretical analysis of sound transmission loss through graphene sheets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Natsuki, Toshiaki, E-mail: natsuki@shinshu-u.ac.jp; Institute of Carbon Science and Technology, Shinshu University, 4-17-1 Wakasato, Nagano 380-8553; Ni, Qing-Qing
2014-11-17
We examine the potential of using graphene sheets (GSs) as sound-insulating materials for nano-devices because of their small size and superior electronic and mechanical properties. In this study, a theoretical analysis is proposed to predict the sound transmission loss through multi-layered GSs, which are formed by stacks of GSs bound together by van der Waals (vdW) forces between individual layers. The result shows that resonant frequencies of the sound transmission loss occur in the multi-layered GSs and the values are very high. Based on the present analytical solution, we predict the acoustic insulation property for various numbers of layers under both a normal-incidence wave and an acoustic field from a random-incidence source. The scheme could be useful in vibration-absorption applications of nano-devices and materials.
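For context, the classical normal-incidence mass law (a baseline, not the paper's vdW-coupled multilayer model) shows how little transmission loss the sheets' inertia alone would provide, which is why the resonance behavior analyzed in the paper dominates. A sketch using an approximate monolayer areal density:

```python
import math

RHO_C = 413.0  # characteristic impedance of air at ~20 deg C, Pa*s/m

def mass_law_tl(f_hz, surface_density_kg_m2):
    """Classical normal-incidence mass law:
    TL = 10 log10(1 + (pi * f * m / (rho * c))^2), in dB."""
    x = math.pi * f_hz * surface_density_kg_m2 / RHO_C
    return 10.0 * math.log10(1.0 + x * x)

GS_AREAL_DENSITY = 7.6e-7  # kg/m^2, approximate value for monolayer graphene
for n_layers in (1, 4, 16):
    tl = mass_law_tl(1.0e6, n_layers * GS_AREAL_DENSITY)
    print(f"{n_layers:2d} layer(s): TL = {tl:.5f} dB at 1 MHz")
```

Even at 1 MHz the mass-law TL of a few layers is a small fraction of a decibel, so the high values reported in the paper must come from the membrane dynamics and interlayer vdW coupling rather than inertia.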
Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness
NASA Astrophysics Data System (ADS)
Feng, Albert
2002-05-01
Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
Mapping underwater sound noise and assessing its sources by using a self-organizing maps method.
Rako, Nikolina; Vilibić, Ivica; Mihanović, Hrvoje
2013-03-01
This study aims to provide an objective mapping of the underwater noise and its sources over an Adriatic coastal marine habitat by applying the self-organizing maps (SOM) method. Systematic sampling of sea ambient noise (SAN) was carried out at ten predefined acoustic stations between 2007 and 2009. Analyses of noise levels were performed for 1/3-octave band standard centered frequencies in terms of instantaneous sound pressure levels averaged over 300 s to calculate the equivalent continuous sound pressure levels. Data on vessels' presence, type, and distance from the monitoring stations were also collected at each acoustic station during the acoustic sampling. Altogether, 69 noise surveys were introduced to the SOM predefined 2 × 2 array. The overall results of the analysis distinguished two dominant underwater soundscapes, associating them mainly with the seasonal changes in the nautical tourism and fishing activities within the study area and with the wind and wave action. The analysis identified recreational vessels as the dominant anthropogenic source of underwater noise, particularly during the tourist season. The method proved to be an efficient tool for predicting SAN levels from the vessel distribution, and suggests the possibility of wider application in marine conservation.
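The clustering step can be sketched as follows. This is a deliberately simplified winner-take-all variant of a SOM (no neighborhood updates), run on synthetic band-level data standing in for the 69 surveys; all numbers are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the 69 noise surveys: 1/3-octave band levels (dB) with
# two seasonal regimes (loud tourist season vs. quieter off-season).
summer = rng.normal(loc=110.0, scale=3.0, size=(35, 10))
winter = rng.normal(loc=90.0, scale=3.0, size=(34, 10))
data = np.vstack([summer, winter])

# Minimal 2 x 2 map, trained with winner-take-all updates (a simplification
# of a true SOM, which also pulls the winner's neighbors toward each sample).
weights = rng.normal(loc=100.0, scale=5.0, size=(4, 10))
for epoch in range(50):
    lr = 0.5 * (1.0 - epoch / 50.0)  # decaying learning rate
    for i in rng.permutation(data.shape[0]):
        bmu = np.argmin(np.linalg.norm(weights - data[i], axis=1))
        weights[bmu] += lr * (data[i] - weights[bmu])

# Each survey maps to its best-matching unit; the two regimes separate.
bmus = [int(np.argmin(np.linalg.norm(weights - v, axis=1))) for v in data]
print(sorted(set(bmus[:35])), sorted(set(bmus[35:])))
```

With well-separated regimes the map units specialize, which is the mechanism by which the study's 2 × 2 SOM distinguishes its two dominant soundscapes.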
Investigation of spherical loudspeaker arrays for local active control of sound.
Peleg, Tomer; Rafaely, Boaz
2011-10-01
Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America
A moving medium formulation for prediction of propeller noise at incidence
NASA Astrophysics Data System (ADS)
Ghorbaniasl, Ghader; Lacor, Chris
2012-01-01
This paper presents a time domain formulation for the sound field radiated by moving bodies in a uniform steady flow with arbitrary orientation. The aim is to provide a formulation for predicting noise from a body so that the effects of crossflow on a propeller can be modeled in the time domain. An established theory of noise generation by a moving source is combined with the moving medium Green's function to derive the formulation. A formula with a Doppler factor is developed because it is more easily interpreted and more helpful in examining the physics of the system. Based on the technique presented, the source of asymmetry of the sound field can be explained in terms of the physics of a moving source. It is shown that the derived formulation can be interpreted as an extension of formulations 1 and 1A of Farassat, based on the Ffowcs Williams and Hawkings (FW-H) equation, to moving medium problems. Computational results for a stationary monopole and dipole point source in a moving medium, a rotating point force in crossflow, a model helicopter blade at incidence, and a propeller with subsonic tips at incidence verify the formulation.
Echolocation versus echo suppression in humans
Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz
2013-01-01
Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing to both the direct sound of the vocalization that precedes the echoes and owing to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105
Two dimensional sound field reproduction using higher order sources to exploit room reflections.
Betlehem, Terence; Poletti, Mark A
2014-04-01
In this paper, sound field reproduction is performed in a reverberant room using higher order sources (HOSs) and a calibrating microphone array. Previously, a sound field was reproduced with fixed-directivity sources and the reverberation compensated for using digital filters. However, by virtue of their directive properties, HOSs may be driven not only to avoid the creation of excess reverberation but also to use room reflections to contribute constructively to the desired sound field. The manner by which the loudspeakers steer the sound around the room is determined by measuring the acoustic transfer functions. The requirements on the number and order N of HOSs for accurate reproduction in a reverberant room are derived, showing a (2N + 1)-fold decrease in the number of loudspeakers in comparison to using monopole sources. HOSs are shown to be applicable to rooms with a rich variety of wall reflections, while in an anechoic room their advantages may be lost. Performance is investigated in a room using extensions of both the diffuse field model and a more rigorous image-source simulation method, which account for the properties of the HOSs. The robustness of the proposed method is validated by introducing measurement errors.
Aliabadi, Mohsen; Golmohammadi, Rostam; Mansoorizadeh, Muharram
2014-03-01
It is highly important to analyze the acoustic properties of workrooms in order to identify the best noise control measures from the standpoint of noise exposure limits. Because sound pressure depends on the environment, it is not a suitable parameter for determining the share of workroom acoustic characteristics in producing noise pollution. This paper aims to empirically analyze noise source characteristics and acoustic properties of noisy embroidery workrooms based on special parameters. In this regard, reverberation time, as the specific room-acoustic parameter, was measured in 30 workrooms based on ISO 3382-2. The sound power of the embroidery machines was also determined based on ISO 9614-3. Multiple linear regression was employed for predicting reverberation time from acoustic features of the workrooms using MATLAB software. The results showed that the measured reverberation times in most of the workrooms were approximately within the ranges recommended by ISO 11690-1. Agreement between reverberation time values calculated by the Sabine formula and measured values was relatively poor (R² = 0.39). This can be due to the inaccurate estimation of the acoustic influence of furniture and the formula's preconditions. Therefore, this value cannot be considered representative of the actual room acoustics. However, the prediction performance of the regression method, with root mean square error (RMSE) = 0.23 s and R² = 0.69, is relatively acceptable. Because the sound power of the embroidery machines was relatively high, these sources get the highest priority when it comes to applying noise controls. Finally, an objective approach for the determination of the share of workroom acoustic characteristics in producing noise could facilitate the identification of cost-effective noise controls.
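The Sabine estimate compared against the measurements above is RT60 = 0.161 V / A, where A = Σ Sᵢαᵢ is the total absorption area. A sketch with hypothetical room dimensions and absorption coefficients (illustrative values, not the workrooms studied):

```python
def sabine_rt(volume_m3, surface_absorptions):
    """Sabine reverberation time RT60 = 0.161 * V / A (seconds),
    with A the total absorption area sum(S_i * alpha_i) in m^2 sabins."""
    total_absorption = sum(area * alpha for area, alpha in surface_absorptions)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 15 m x 10 m x 4 m workroom.
V = 15 * 10 * 4
surfaces = [
    (150.0, 0.02),  # concrete floor
    (150.0, 0.15),  # suspended ceiling
    (200.0, 0.05),  # painted walls
]
print(f"Sabine RT60 = {sabine_rt(V, surfaces):.2f} s")
```

The formula assumes a diffuse field and evenly distributed absorption; machinery and furniture violate both assumptions, which is consistent with the poor agreement reported above.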
Calculating far-field radiated sound pressure levels from NASTRAN output
NASA Technical Reports Server (NTRS)
Lipman, R. R.
1986-01-01
FAFRAP is a computer program which calculates far field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.
Directive sources in acoustic discrete-time domain simulations based on directivity diagrams.
Escolano, José; López, José J; Pueo, Basilio
2007-06-01
Discrete-time domain methods provide a simple and flexible way to solve initial boundary value problems. With regard to the sources in such methods, only monopoles or dipoles can be considered. However, in many problems such as room acoustics, the radiation of realistic sources is directional-dependent and their directivity patterns have a clear influence on the total sound field. In this letter, a method to synthesize the directivity of sources is proposed, especially in cases where the knowledge is only based on discrete values of the directivity diagram. Some examples have been carried out in order to show the behavior and accuracy of the proposed method.
Smith, Rosanna C G; Price, Stephen R
2014-01-01
Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
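The geometric core of such acuity models is often Woodworth's frequency-independent ITD approximation, ITD = (a/c)(θ + sin θ). A sketch (head radius and sound speed are standard textbook values, not the paper's fitted parameters) showing why a fixed just-noticeable ITD implies coarser angular acuity at lateral angles:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head ITD approximation (seconds):
    ITD = (a / c) * (theta + sin(theta)), theta measured from the midline."""
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))

# ITD change per degree of azimuth shrinks toward the side, so a fixed
# just-noticeable difference in ITD maps to a larger angular step laterally.
for az in (0, 30, 60, 90):
    slope_us = (woodworth_itd(az + 1) - woodworth_itd(az)) * 1e6
    print(f"{az:2d} deg: {slope_us:.2f} us per degree")
```

The slope roughly halves between the midline and 90°, matching the qualitative pattern that localization is best at the midline and degrades laterally.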
NASA Technical Reports Server (NTRS)
Succi, G. P.
1983-01-01
The techniques of helicopter rotor noise prediction attempt to describe precisely the details of the noise field and remove the empiricisms and restrictions inherent in previous methods. These techniques require detailed inputs of the rotor geometry, operating conditions, and blade surface pressure distribution. The Farassat noise prediction technique was studied, and high speed helicopter noise prediction using more detailed representations of the thickness and loading noise sources was investigated. These predictions were based on the measured blade surface pressures on an AH-1G rotor and compared to the measured sound field. Although refinements in the representation of the thickness and loading noise sources improve the calculation, there are still discrepancies between the measured and predicted sound fields. Analysis of the blade surface pressure data indicates shocks on the blades, which are probably responsible for these discrepancies.
Broadband calibration of R/V Ewing seismic sources
NASA Astrophysics Data System (ADS)
Tolstoy, M.; Diebold, J. B.; Webb, S. C.; Bohnenstiehl, D. R.; Chapp, E.; Holmes, R. C.; Rawson, M.
2004-07-01
The effects of anthropogenic sound sources on marine mammals are of increasing interest and controversy [e.g., Malakoff, 2001]. To better understand and mitigate the possible impacts of specific sound sources, well-calibrated broadband measurements of acoustic received levels must be made in a variety of environments. In late spring 2003, an acoustic calibration study was conducted in the northern Gulf of Mexico to obtain broad-frequency-band measurements of seismic sources used by the R/V Maurice Ewing. Received levels in deep water were lower than anticipated based on modeling, and in shallow water they were higher. For the marine mammals of greatest concern (beaked whales), the 1-20 kHz frequency range is considered particularly significant [National Oceanic Atmospheric Administration and U. S. Navy, 2001; Frantzis et al., 2002]. 1/3-octave measurements show received levels at 1 kHz are ~20-33 dB (re: 1 μPa) lower than peak levels at 5-100 Hz, and decrease by an additional ~20-33 dB in the 10-20 kHz range.
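The deep-water versus shallow-water contrast is often summarized with idealized geometric spreading laws: spherical spreading (TL = 20 log₁₀ r) in open deep water versus cylindrical spreading (TL = 10 log₁₀ r) once sound is trapped in a shallow duct. A sketch with a hypothetical source level, not the Ewing array's measured value:

```python
import math

def received_level(source_level_db, range_m, mode="spherical"):
    """Received level (dB re 1 uPa) under idealized geometric spreading.
    spherical: TL = 20 log10(r); cylindrical (shallow duct): TL = 10 log10(r).
    Absorption and boundary losses are ignored."""
    k = 20.0 if mode == "spherical" else 10.0
    return source_level_db - k * math.log10(range_m)

SL = 230.0  # hypothetical broadband source level, dB re 1 uPa at 1 m
for r in (100, 1000, 10000):
    print(r, received_level(SL, r), received_level(SL, r, "cylindrical"))
```

The cylindrical law loses only half as many decibels per decade of range, which is one reason shallow-water received levels can exceed deep-water predictions at the same distance.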
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington
Uhrich, M.A.; McGrath, T.S.
1997-01-01
Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers, from sources other than the Puget Sound Environmental Atlas, include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.
Propagation of Finite Amplitude Sound in Multiple Waveguide Modes.
NASA Astrophysics Data System (ADS)
van Doren, Thomas Walter
1993-01-01
This dissertation describes a theoretical and experimental investigation of the propagation of finite amplitude sound in multiple waveguide modes. Quasilinear analytical solutions of the full second order nonlinear wave equation, the Westervelt equation, and the KZK parabolic wave equation are obtained for the fundamental and second harmonic sound fields in a rectangular rigid-wall waveguide. It is shown that the Westervelt equation is an acceptable approximation of the full nonlinear wave equation for describing guided sound waves of finite amplitude. A system of first order equations based on both a modal and harmonic expansion of the Westervelt equation is developed for waveguides with locally reactive wall impedances. Fully nonlinear numerical solutions of the system of coupled equations are presented for waveguides formed by two parallel planes which are either both rigid, or one rigid and one pressure release. These numerical solutions are compared to finite-difference solutions of the KZK equation, and it is shown that solutions of the KZK equation are valid only at frequencies which are high compared to the cutoff frequencies of the most important modes of propagation (i.e., for which sound propagates at small grazing angles). Numerical solutions of both the Westervelt and KZK equations are compared to experiments performed in an air-filled, rigid-wall, rectangular waveguide. Solutions of the Westervelt equation are in good agreement with experiment for low source frequencies, at which sound propagates at large grazing angles, whereas solutions of the KZK equation are not valid for these cases. At higher frequencies, at which sound propagates at small grazing angles, agreement between numerical solutions of the Westervelt and KZK equations and experiment is only fair, because of problems in specifying the experimental source condition with sufficient accuracy.
The propagation of sound in narrow street canyons
NASA Astrophysics Data System (ADS)
Iu, K. K.; Li, K. M.
2002-08-01
This paper addresses an important problem of predicting sound propagation in narrow street canyons with width less than 10 m, which are commonly found in a built-up urban district. Major noise sources are, for example, air conditioners installed on building facades and powered mechanical equipment for repair and construction work. Interference effects due to multiple reflections from building facades and ground surfaces are important contributions in these complex environments. Although the studies of sound transmission in urban areas can be traced back to as early as the 1960s, the resulting mathematical and numerical models are still unable to predict sound fields accurately in city streets. This is understandable because sound propagation in city streets involves many intriguing phenomena such as reflections and scattering at the building facades, diffusion effects due to recessions and protrusions of building surfaces, geometric spreading, and atmospheric absorption. This paper describes the development of a numerical model for the prediction of sound fields in city streets. To simplify the problem, a typical city street is represented by two parallel reflecting walls and a flat impedance ground. The numerical model is based on a simple ray theory that takes account of multiple reflections from the building facades. The sound fields due to the point source and its images are summed coherently such that mutual interference effects between contributing rays can be included in the analysis. Indoor experiments are conducted in an anechoic chamber. Experimental data are compared with theoretical predictions to establish the validity and usefulness of this simple model. Outdoor experimental measurements have also been conducted to further validate the model. copyright 2002 Acoustical Society of America.
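The coherent image-source summation described here can be sketched for two parallel, perfectly flat walls. This is a bare-bones 2-D illustration of the technique, not the paper's model: ground reflection and facade impedance are omitted, and the wall reflection coefficient is an assumed value:

```python
import numpy as np

def canyon_spl(f, x0, y0, xr, yr, width, n_max=40, beta=0.9, c=343.0):
    """Coherent image-source sum for a source between two parallel reflecting
    walls at x = 0 and x = width (2-D sketch). Images fall into two families:
    x = 2nW + x0 with reflection order |2n|, and x = 2nW - x0 with |2n - 1|.
    beta is an assumed real wall reflection coefficient applied per reflection."""
    k = 2 * np.pi * f / c
    p = 0j
    for n in range(-n_max, n_max + 1):
        for xs, order in ((2 * n * width + x0, abs(2 * n)),
                          (2 * n * width - x0, abs(2 * n - 1))):
            r = np.hypot(xs - xr, y0 - yr)
            p += beta ** order * np.exp(1j * k * r) / r  # coherent phase sum
    return 20 * np.log10(abs(p))

# Level along the canyon varies non-monotonically due to interference.
for yr in (5.0, 10.0, 20.0):
    print(yr, round(canyon_spl(500.0, 2.0, 0.0, 3.0, yr, 6.0), 1))
```

Setting beta = 0 recovers the free-field direct sound, which is a convenient sanity check on the summation.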
Vector Acoustics, Vector Sensors, and 3D Underwater Imaging
NASA Astrophysics Data System (ADS)
Lindwall, D.
2007-12-01
Vector acoustic data have two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than with hydrophone data. A vector acoustic sensor measures the particle motion due to passing sound waves and, in conjunction with a collocated hydrophone, gives the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom, and sides. Without resorting to the usual methods of seismic imaging, which in this case are only two dimensional and relied entirely on the use of a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.
Firtha, Gergely; Fiala, Péter
2017-08-01
The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions 2.5D Wave Field Synthesis driving functions are derived for arbitrary shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.
Grieco-Calub, Tina M.; Litovsky, Ruth Y.
2010-01-01
Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle. Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29° was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit. 
There was a significant correlation between spatial acuity and sound localization accuracy (R² = 0.68, p < 0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615
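The RMS error metric used to score the identification task can be sketched directly; the trial data below are invented for illustration, not taken from the study:

```python
import numpy as np

def rms_error_deg(target_deg, response_deg):
    """Root-mean-square localization error across trials, in degrees:
    sqrt(mean((response - target)^2))."""
    t = np.asarray(target_deg, dtype=float)
    r = np.asarray(response_deg, dtype=float)
    return float(np.sqrt(np.mean((r - t) ** 2)))

# Hypothetical loudspeaker-identification trials (degrees azimuth).
targets = [-45, -15, 0, 15, 45, -30, 30]
responses = [-30, -20, 5, 10, 60, -45, 20]
print(f"RMS error = {rms_error_deg(targets, responses):.1f} deg")
```

Because errors are squared before averaging, a few large lateral confusions dominate the score, which is why RMS errors of 19°–56° can coexist with good midline discrimination.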
Kang
2000-03-01
This paper systematically compares the sound fields in street canyons with diffusely and geometrically reflecting boundaries. For diffuse boundaries, a radiosity-based theoretical/computer model has been developed. For geometrical boundaries, the image source method has been used. Computations using the models show that there are considerable differences between the sound fields resulting from the two kinds of boundaries. By replacing diffuse boundaries with geometrical boundaries, the sound attenuation along the length becomes significantly less; the RT30 is considerably longer; and the extra attenuation caused by air or vegetation absorption is reduced. There are also some similarities between the sound fields under the two boundary conditions. For example, in both cases the sound attenuation along the length with a given amount of absorption is the highest if the absorbers are arranged on one boundary and the lowest if they are evenly distributed on all boundaries. Overall, the results suggest that, from the viewpoint of urban noise reduction, it is better to design the street boundaries as diffusely reflective rather than acoustically smooth.
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model, based on finite-difference time-domain solving of the linearized Euler equations, to quantitatively reproduce the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study in which weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available for the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, the model captures many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
NASA Technical Reports Server (NTRS)
Rentz, P. E.
1976-01-01
Experimental evaluations of the acoustical characteristics and source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated or hardwall configuration were performed. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of source direct field acoustic radiation were found to be limited by the test section reverberant characteristics to 3.0 feet for omni-directional and highly directional sources. Wind-on noise measurements in the test section, settling chamber and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.
Reduced order modeling of head related transfer functions for virtual acoustic displays
NASA Astrophysics Data System (ADS)
Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley
2003-04-01
The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from −90° to +90°, in 10° increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm, known as the Evidential EM (E2M) algorithm. Finally, both simulated and real experiments illustrate the advantage of using the EM algorithm in the case without uncertainty and the E2M algorithm in the case of uncertain measurements.
Near-field sound radiation of fan tones from an installed turbofan aero-engine.
McAlpine, Alan; Gaffney, James; Kingan, Michael J
2015-09-01
The development of a distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is reported. The key objective is to examine a canonical problem: how to predict the pressure field due to a distributed source located near an infinite, rigid cylinder. This canonical problem is a simple representation of an installed turbofan, where the distributed source is based on the pressure pattern generated by a spinning duct mode, and the rigid cylinder represents an aircraft fuselage. The radiation of fan tones can be modelled in terms of spinning modes. In this analysis, based on duct modes, theoretical expressions for the near-field acoustic pressures on the cylinder, or at the same locations without the cylinder, have been formulated. Simulations of the near-field acoustic pressures are compared against measurements obtained from a fan rig test. Also, the installation effect is quantified by calculating the difference in the sound pressure levels with and without the adjacent cylindrical fuselage. Results are shown for the blade passing frequency fan tone radiated at a supersonic fan operating condition.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. MUSIC requires computing the eigenvectors of the correlation matrix, which often incurs a high computational cost. For a moving source this becomes a crucial drawback, because the estimation must be repeated at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatio-temporal non-stationarity, the matrix must be estimated from only a few observed samples, which degrades estimation accuracy. In this paper, PAST (Projection Approximation Subspace Tracking) is applied to sequentially estimate the eigenvectors spanning the subspace. Because PAST does not require an eigen-decomposition, the computational cost is reduced. Several experimental results in actual room environments demonstrate the superior performance of the proposed method.
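For reference, the underlying MUSIC pseudospectrum for a stationary narrowband source on a uniform linear array can be sketched as below; the array geometry, signals, and source angle are hypothetical, and the paper's actual contribution (PAST-based sequential subspace tracking) is not reproduced here.

```python
import numpy as np

def music_spectrum(X, n_sources, mic_spacing, wavelength, angles_deg):
    """MUSIC pseudospectrum for a uniform linear array.
    X: (n_mics, n_snapshots) complex snapshot matrix."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample correlation matrix
    w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, : n_mics - n_sources]          # noise subspace eigenvectors
    k = 2 * np.pi / wavelength
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * k * mic_spacing * np.arange(n_mics) * np.sin(theta))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Simulated source at +20 degrees on an 8-mic, half-wavelength-spaced array
rng = np.random.default_rng(0)
n_mics, n_snap, wavelength = 8, 200, 2.0
d = wavelength / 2
theta0 = np.deg2rad(20.0)
a0 = np.exp(1j * 2 * np.pi / wavelength * d * np.arange(n_mics) * np.sin(theta0))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_mics, n_snap))
               + 1j * rng.standard_normal((n_mics, n_snap)))
X = np.outer(a0, s) + noise
angles = np.arange(-90, 91)
P = music_spectrum(X, n_sources=1, mic_spacing=d, wavelength=wavelength,
                   angles_deg=angles)
estimated = angles[np.argmax(P)]
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace; PAST replaces the explicit eigendecomposition above with a recursive subspace update.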
Auditory Localization: An Annotated Bibliography
1983-11-01
…transverse plane; natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources… important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical…
NASA Technical Reports Server (NTRS)
Callegari, A. J.
1979-01-01
A nonlinear theory for sound propagation in variable area ducts carrying a nearly sonic flow is presented. Linear acoustic theory is shown to be singular and the detailed nature of the singularity is used to develop the correct nonlinear theory. The theory is based on a quasi-one-dimensional model. It is derived by the method of matched asymptotic expansions. In a nearly choked flow, the theory indicates the following processes to be acting: a transonic trapping of upstream propagating sound causing an intensification of this sound in the throat region of the duct; generation of superharmonics and an acoustic streaming effect; and development of shocks in the acoustic quantities near the throat. Several specific problems are solved analytically and numerical parameter studies are carried out. Results indicate that appreciable acoustic power is shifted to higher harmonics as shocked conditions are approached. The effect of the throat Mach number on the attenuation of upstream propagating sound excited by a fixed source is also determined.
Demodulation processes in auditory perception
NASA Astrophysics Data System (ADS)
Feth, Lawrence L.
1994-08-01
The long range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation - demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task is then one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals and we have developed auditory signal processing models to help guide our experimental work.
Shao, Wei; Mechefske, Chris K
2005-04-01
This paper describes an analytical model of finite cylindrical ducts with infinite flanges. This model is used to investigate the sound radiation characteristics of the gradient coil system of a magnetic resonance imaging (MRI) scanner. The sound field in the duct satisfies both the boundary conditions at the wall and at the open ends. The vibrating cylindrical wall of the duct is assumed to be the only sound source. Different acoustic conditions for the wall (rigid and absorptive) are used in the simulations. The wave reflection phenomenon at the open ends of the finite duct is described by general radiation impedance. The analytical model is validated by the comparison with its counterpart in a commercial code based on the boundary element method (BEM). The analytical model shows significant advantages over the BEM model with better numerical efficiency and a direct relation between the design parameters and the sound field inside the duct.
Location, location, location: finding a suitable home among the noise
Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.
2012-01-01
While sound is a useful cue for guiding the onshore orientation of larvae because it travels long distances underwater, it also has the potential to convey valuable information about the quality and type of the habitat at the source. Here, we provide, to our knowledge, the first evidence that settlement-stage coastal crab species can interpret and show a strong settlement and metamorphosis response to habitat-related differences in natural underwater sound. Laboratory- and field-based experiments demonstrated that time to metamorphosis in the settlement-stage larvae of common coastal crab species varied in response to different underwater sound signatures produced by different habitat types. The megalopae of five species of both temperate and tropical crabs showed a significant decrease in time to metamorphosis, when exposed to sound from their optimal settlement habitat type compared with other habitat types. These results indicate that sounds emanating from specific underwater habitats may play a major role in determining spatial patterns of recruitment in coastal crab species. PMID:22673354
Detection of Sound Image Movement During Horizontal Head Rotation
Ohba, Kagesho; Iwaya, Yukio; Suzuki, Yôiti
2016-01-01
Movement detection for a virtual sound source was measured during the listener's horizontal head rotation. Listeners were instructed to rotate their heads at a given speed. A trial consisted of two intervals. During an interval, a virtual sound source was presented 60° to the right or left of the listener, who was instructed to rotate the head to face the sound image position. In one of the pair of intervals, the sound position was moved slightly in the middle of the rotation. Listeners were asked to judge the interval in a trial during which the sound stimuli moved. Results suggest that detection thresholds are higher when listeners rotate their heads. Moreover, this effect was found to be independent of the rotation velocity. PMID:27698993
NASA Technical Reports Server (NTRS)
Lucas, Michael J.; Marcolini, Michael A.
1997-01-01
The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program being developed for NASA-Langley Research Center which calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set of sound hemispheres provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format, and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.
Hindmarsh, Mark
2018-02-16
A model for the acoustic production of gravitational waves at a first-order phase transition is presented. The source of gravitational radiation is the sound waves generated by the explosive growth of bubbles of the stable phase. The model assumes that the sound waves are linear and that their power spectrum is determined by the characteristic form of the sound shell around the expanding bubble. The predicted power spectrum has two length scales, the average bubble separation and the sound shell width when the bubbles collide. The peak of the power spectrum is at wave numbers set by the sound shell width. For a higher wave number k, the power spectrum decreases to k^{-3}. At wave numbers below the inverse bubble separation, the power spectrum goes to k^{5}. For bubble wall speeds near the speed of sound where these two length scales are distinguished, there is an intermediate k^{1} power law. The detailed dependence of the power spectrum on the wall speed and the other parameters of the phase transition raises the possibility of their constraint or measurement at a future space-based gravitational wave observatory such as LISA.
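The quoted scalings can be summarized as an illustrative broken power law; the break positions follow the abstract (the inverse bubble separation and the inverse sound shell width), while matching the amplitudes at the breaks is an assumption made here purely for continuity and is not part of the model.

```python
import numpy as np

def sound_shell_spectrum_shape(k, k_sep, k_shell):
    """Illustrative broken power law following the scalings in the abstract:
    k^5 below the inverse bubble separation k_sep, an intermediate k^1
    region between k_sep and the inverse shell width k_shell, and k^-3
    above the peak. Normalization is arbitrary; amplitudes are matched at
    the break points for continuity."""
    k = np.asarray(k, dtype=float)
    low = (k / k_sep) ** 5
    mid = k / k_sep                                  # continuous at k = k_sep
    high = (k_shell / k_sep) * (k / k_shell) ** -3   # continuous at k = k_shell
    return np.where(k < k_sep, low, np.where(k < k_shell, mid, high))

# Example: breaks at k_sep = 1 and k_shell = 10 (arbitrary units)
k = np.logspace(-1, 2, 200)
spec = sound_shell_spectrum_shape(k, k_sep=1.0, k_shell=10.0)
```

The intermediate k^1 region only opens up when the two length scales are well separated, matching the abstract's remark about wall speeds near the speed of sound.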
Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles
2011-11-01
Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters: vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.
A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS
NASA Astrophysics Data System (ADS)
Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto
At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Developing a technique to localize sound sources amidst loud noise will therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for imperceptible sound in loud noise environments. Two speakers simultaneously played a noise of a generator and a voice decreased by 20 dB (= 1/100 of power) from the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed, and the sound of the voice was extracted and played back as an audible sound by array signal processing.
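The core of such array processing, estimating direction from the inter-microphone time delay, can be sketched for a single microphone pair; the geometry, sample rate, and signals below are hypothetical, and the paper's 15-microphone processing (which also estimates distance and extracts the voice) is more elaborate.

```python
import numpy as np

def estimate_delay_samples(x, y):
    """Time delay of y relative to x via the cross-correlation peak (samples)."""
    corr = np.correlate(y, x, mode="full")
    return int(np.argmax(corr)) - (len(x) - 1)

# Hypothetical two-microphone setup (illustrative values only)
fs = 48_000.0      # sample rate, Hz
c = 343.0          # speed of sound in air, m/s
spacing = 0.15     # microphone spacing, m
rng = np.random.default_rng(1)
src = rng.standard_normal(4096)
true_delay = 8                      # samples, from an off-axis source
x = src
y = np.roll(src, true_delay)        # circular shift stands in for a delayed arrival
delay = estimate_delay_samples(x, y)
# Broadside arrival angle from the delay: sin(theta) = c * tau / spacing
tau = delay / fs
angle_deg = np.degrees(np.arcsin(np.clip(c * tau / spacing, -1, 1)))
```

Averaging such delay estimates over many pairs (or steering the full array) is what lets a weak voice be localized below the noise floor.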
A Hybrid RANS/LES Approach for Predicting Jet Noise
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2006-01-01
Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.
An underwater ranging system based on photoacoustic effect occurring on target surface
NASA Astrophysics Data System (ADS)
Ni, Kai; Hu, Kai; Li, Xinghui; Wang, Lidai; Zhou, Qian; Wang, Xiaohao
2016-11-01
In this paper, an underwater ranging system based on the photoacoustic effect occurring on a target surface is proposed. A laser pulse generated by a blue-green laser is directly incident on the target surface, where the photoacoustic effect occurs and a sound source is formed. The resulting sound wave, also called the photoacoustic signal, is received by an ultrasonic receiver after passing through the water. From the time delay between transmitting the laser and receiving the photoacoustic signal, and the sound velocity in water, the distance between the target and the ultrasonic receiver can be calculated. Unlike underwater range finding by laser alone, this approach avoids backscattering of the laser beam and is therefore easier to implement. An experimental system based on this principle has been constructed to verify the feasibility of the technology. The experimental results showed that a ranging accuracy of 1 mm can be effectively achieved when the target is close to the ultrasonic receiver.
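The ranging principle reduces to multiplying the measured delay by the sound speed; a minimal sketch, where the 1480 m/s fresh-water sound speed and the 1 ms delay are assumed illustrative values, not figures from the paper:

```python
def photoacoustic_range(t_laser_s, t_receive_s, sound_speed_mps=1480.0):
    """Distance from target to ultrasonic receiver. Assumes the laser's
    travel time is negligible compared with the acoustic travel time,
    so the measured delay is dominated by the sound path."""
    return sound_speed_mps * (t_receive_s - t_laser_s)

# Hypothetical 1 ms delay between laser emission and acoustic reception
distance_m = photoacoustic_range(0.0, 1.0e-3)
```

Because light travels roughly 200,000 times faster than sound in water, neglecting the optical leg introduces sub-micrometre error at these ranges.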
Mapping the sound field of an erupting submarine volcano using an acoustic glider.
Matsumoto, Haru; Haxel, Joseph H; Dziak, Robert P; Bohnenstiehl, Delwayne R; Embley, Robert W
2011-03-01
An underwater glider with an acoustic data logger flew toward a recently discovered erupting submarine volcano in the northern Lau basin. With the volcano providing a wide-band sound source, recordings from the two-day survey produced a two-dimensional sound level map spanning 1 km (depth) × 40 km (distance). The observed sound field shows depth- and range-dependence, with the first-order spatial pattern being consistent with the predictions of a range-dependent propagation model. The results allow constraining the acoustic source level of the volcanic activity and suggest that the glider provides an effective platform for monitoring natural and anthropogenic ocean sounds. © 2011 Acoustical Society of America
Understanding the intentional acoustic behavior of humpback whales: a production-based approach.
Cazau, Dorian; Adam, Olivier; Laitman, Jeffrey T; Reidenberg, Joy S
2013-09-01
Following a production-based approach, this paper deals with the acoustic behavior of humpback whales. This approach investigates various physical factors, which are either internal (e.g., physiological mechanisms) or external (e.g., environmental constraints) to the respiratory tractus of the whale, for their implications in sound production. This paper aims to describe a functional scenario of this tractus for the generation of vocal sounds. To do so, a division of this tractus into three different configurations is proposed, based on the air recirculation process which determines air sources and laryngeal valves. Then, assuming a vocal function (in sound generation or modification) for several specific anatomical components, an acoustic characterization of each of these configurations is proposed to link different spectral features, namely, fundamental frequencies and formant structures, to specific vocal production mechanisms. A discussion around the question of whether the whale is able to fully exploit the acoustic potential of its respiratory tractus is eventually provided.
Sensor system for heart sound biomonitor
NASA Astrophysics Data System (ADS)
Maple, Jarrad L.; Hall, Leonard T.; Agzarian, John; Abbott, Derek
1999-09-01
Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually, rather than through a conventional stethoscope. A system whereby a digital stethoscope interfaces directly to a PC is described, along with the signal processing algorithms adopted. The sensor is based on a noise cancellation microphone, with a 450 Hz bandwidth, and is sampled at 2250 samples/sec with 12-bit resolution. Further to this, we discuss for comparison a piezo-based sensor with a 1 kHz bandwidth. A major problem is that the recording of the heart sound into these devices is subject to unwanted background noise which can override the heart sound and result in a poor visual representation. This noise originates from various sources such as skin contact with the stethoscope diaphragm, lung sounds, and other surrounding sounds such as speech. We then demonstrate a solution using 'wavelet denoising'. The wavelet transform is used because of the similarity between the shape of wavelets and the time-domain shape of a heartbeat sound. Thus coding of the waveform into the wavelet domain is achieved with relatively few wavelet coefficients, in contrast to the many Fourier components that would result from conventional decomposition. We show that the background noise can be dramatically reduced by a thresholding operation in the wavelet domain. The principle is that the background noise codes into many small broadband wavelet coefficients that can be removed without significant degradation of the signal of interest.
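Wavelet-domain soft thresholding can be sketched with a one-level Haar transform in plain numpy; the paper's actual wavelet basis, decomposition depth, and threshold rule are not specified here, and the signal below is a piecewise-constant stand-in rather than a recorded heart sound.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, threshold):
    """Soft-threshold the detail coefficients, keep the approximation."""
    a, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return haar_idwt(a, d)

# Piecewise-constant stand-in signal plus broadband background noise
rng = np.random.default_rng(2)
signal = np.repeat(np.array([0.0, 1.0, 0.0, -1.0]), 32)
noisy = signal + 0.1 * rng.standard_normal(signal.size)
denoised = denoise(noisy, threshold=0.2)
```

Noise spreads into many small detail coefficients that thresholding removes, while the signal's energy concentrates in a few large coefficients that survive, which is exactly the principle the abstract describes.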
Perception of Water-Based Masking Sounds-Long-Term Experiment in an Open-Plan Office.
Hongisto, Valtteri; Varjo, Johanna; Oliva, David; Haapakangas, Annu; Benway, Evan
2017-01-01
A certain level of masking sound is necessary to control the disturbance caused by speech sounds in open-plan offices. The sound is usually provided with evenly distributed loudspeakers. Pseudo-random noise is often used as a source of artificial sound masking (PRMS). A recent laboratory experiment suggested that water-based masking sound (WBMS) could be more favorable than PRMS. The purpose of our study was to determine how the employees perceived different WBMSs compared to PRMS. The experiment was conducted in an open-plan office of 77 employees who had been accustomed to work under PRMS (44 dB LAeq). The experiment consisted of five masking conditions: the original PRMS, four different WBMSs and return to the original PRMS. The exposure time of each condition was 3 weeks. The noise level was nearly equal between the conditions (43–45 dB LAeq) but the spectra and the nature of the sounds were very different. A questionnaire was completed at the end of each condition. Acoustic satisfaction was worse during the WBMSs than during the PRMS. The disturbance caused by three out of four WBMSs was larger than that of PRMS. Several attributes describing the sound quality itself were in favor of PRMS. Colleagues' speech sounds disturbed more during WBMSs. None of the WBMSs produced better subjective ratings than PRMS. Although the first WBMS was equal with the PRMS for several variables, the overall results cannot be seen to support the use of WBMSs in office workplaces. Because the experiment suffered from some methodological weaknesses, conclusions about the adequacy of WBMSs cannot yet be drawn.
NASA Astrophysics Data System (ADS)
Sridhara, Basavapatna Sitaramaiah
In an internal combustion engine, the engine is the noise source and the exhaust pipe is the main transmitter of noise. Mufflers are often used to reduce engine noise level in the exhaust pipe. To optimize a muffler design, a series of experiments could be conducted using various mufflers installed in the exhaust pipe. For each configuration, the radiated sound pressure could be measured. However, this is not a very efficient method. A second approach would be to develop a scheme involving only a few measurements which can predict the radiated sound pressure at a specified distance from the open end of the exhaust pipe. In this work, the engine exhaust system was modelled as a lumped source-muffler-termination system. An expression for the predicted sound pressure level was derived in terms of the source and termination impedances, and the muffler geometry. The pressure source and monopole radiation models were used for the source and the open end of the exhaust pipe. The four pole parameters were used to relate the acoustic properties at two different cross sections of the muffler and the pipe. The developed formulation was verified through a series of experiments. Two loudspeakers and a reciprocating type vacuum pump were used as sound sources during the tests. The source impedance was measured using the direct, two-load and four-load methods. A simple expansion chamber and a side-branch resonator were used as mufflers. Sound pressure level measurements for the prediction scheme were made for several source-muffler and source-straight pipe combinations. The predicted and measured sound pressure levels were compared for all cases considered. In all cases, correlation of the experimental results and those predicted by the developed expressions was good. Predicted and measured values of the insertion loss of the mufflers were compared. The agreement between the two was good. Also, an error analysis of the four-load method was done.
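For the simple expansion chamber, the classic plane-wave four-pole analysis yields a closed-form transmission loss; the sketch below is the textbook result for an area ratio m and chamber length L, not the dissertation's full source-muffler-termination formulation, and the numbers are illustrative.

```python
import numpy as np

def expansion_chamber_tl(freq_hz, length_m, area_ratio, c=343.0):
    """Transmission loss (dB) of a simple expansion chamber from plane-wave
    four-pole analysis: TL = 10 log10(1 + 0.25 (m - 1/m)^2 sin^2(kL)),
    where m is the chamber-to-pipe area ratio and k the wavenumber."""
    k = 2 * np.pi * np.asarray(freq_hz, dtype=float) / c
    m = area_ratio
    return 10 * np.log10(1 + 0.25 * (m - 1 / m) ** 2 * np.sin(k * length_m) ** 2)

# Quarter-wave peak for a 0.5 m chamber with area ratio 4 (~6.5 dB)
tl_peak = float(expansion_chamber_tl(171.5, length_m=0.5, area_ratio=4.0))
```

The TL passes through zero whenever kL is a multiple of π (half-wavelength resonances of the chamber), which is why such mufflers show a characteristic lobed attenuation curve.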
Hart, Carl R; Reznicek, Nathan J; Wilson, D Keith; Pettit, Chris L; Nykaza, Edward T
2016-05-01
Many outdoor sound propagation models exist, ranging from highly complex physics-based simulations to simplified engineering calculations, and more recently, highly flexible statistical learning methods. Several engineering and statistical learning models are evaluated by using a particular physics-based model, namely, a Crank-Nicholson parabolic equation (CNPE), as a benchmark. Narrowband transmission loss values predicted with the CNPE, based upon a simulated data set of meteorological, boundary, and source conditions, act as simulated observations. In the simulated data set sound propagation conditions span from downward refracting to upward refracting, for acoustically hard and soft boundaries, and low frequencies. Engineering models used in the comparisons include the ISO 9613-2 method, Harmonoise, and Nord2000 propagation models. Statistical learning methods used in the comparisons include bagged decision tree regression, random forest regression, boosting regression, and artificial neural network models. Computed skill scores are relative to sound propagation in a homogeneous atmosphere over a rigid ground. Overall skill scores for the engineering noise models are 0.6%, -7.1%, and 83.8% for the ISO 9613-2, Harmonoise, and Nord2000 models, respectively. Overall skill scores for the statistical learning models are 99.5%, 99.5%, 99.6%, and 99.6% for bagged decision tree, random forest, boosting, and artificial neural network regression models, respectively.
Articulatory speech synthesis and speech production modelling
NASA Astrophysics Data System (ADS)
Huang, Jun
This dissertation addresses the problem of speech synthesis and speech production modelling based on the fundamental principles of human speech production. Unlike the conventional source-filter model, which assumes the independence of the excitation and the acoustic filter, we treat the entire vocal apparatus as one system consisting of a fluid dynamic aspect and a mechanical part. We model the vocal tract by a three-dimensional moving geometry. We also model the sound propagation inside the vocal apparatus as a three-dimensional nonplane-wave propagation inside a viscous fluid described by Navier-Stokes equations. In our work, we first propose a combined minimum energy and minimum jerk criterion to estimate the dynamic vocal tract movements during speech production. Both theoretical error bound analysis and experimental results show that this method can achieve very close match at the target points and avoid the abrupt change in articulatory trajectory at the same time. Second, a mechanical vocal fold model is used to compute the excitation signal of the vocal tract. The advantage of this model is that it is closely coupled with the vocal tract system based on fundamental aerodynamics. As a result, we can obtain an excitation signal with much more detail than the conventional parametric vocal fold excitation model. Furthermore, strong evidence of source-tract interaction is observed. Finally, we propose a computational model of the fricative and stop types of sounds based on the physical principles of speech production. The advantage of this model is that it uses an exogenous process to model the additional nonsteady and nonlinear effects due to the flow mode, which are ignored by the conventional source-filter speech production model. A recursive algorithm is used to estimate the model parameters. Experimental results show that this model is able to synthesize good quality fricative and stop types of sounds.
Based on our dissertation work, we carefully argue that the articulatory speech production model has the potential to flexibly synthesize natural-quality speech sounds and to provide a compact computational model for speech production that can be beneficial to a wide range of areas in speech signal processing.
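The minimum-jerk part of the trajectory criterion above has a well-known closed form for a single point-to-point gesture. The sketch below uses the standard minimum-jerk polynomial, not the dissertation's combined minimum-energy/minimum-jerk criterion, and the gesture endpoints and duration are made up for illustration.

```python
import numpy as np

def minimum_jerk(x0, x1, duration, t):
    """Classic minimum-jerk trajectory between two articulatory targets:
    x(t) = x0 + (x1 - x0) * (10 tau^3 - 15 tau^4 + 6 tau^5), tau = t/T.
    Velocity and acceleration vanish at both endpoints, so the trajectory
    meets each target smoothly, without abrupt changes."""
    tau = np.clip(t / duration, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (x1 - x0) * s

t = np.linspace(0.0, 0.2, 201)          # a hypothetical 200 ms gesture
x = minimum_jerk(0.0, 1.0, 0.2, t)      # normalized articulator position
```

The trajectory starts at the first target, ends exactly at the second, and passes through the halfway point at mid-gesture.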
Amplitude and Wavelength Measurement of Sound Waves in Free Space using a Sound Wave Phase Meter
NASA Astrophysics Data System (ADS)
Ham, Sounggil; Lee, Kiwon
2018-05-01
We developed a sound wave phase meter (SWPM) and measured the amplitude and wavelength of sound waves in free space. The SWPM consists of two parallel metal plates, where the front plate was operated as a diaphragm. An aluminum perforated plate was additionally installed in front of the diaphragm, and the same signal as that applied to the sound source was applied to the perforated plate. The SWPM measures both the sound wave signal due to the diaphragm vibration and the induction signal due to the electric field of the aluminum perforated plate. Therefore, the two measurement signals interfere with each other due to the phase difference according to the distance between the sound source and the SWPM, and the amplitude of the composite signal that is output as a result is periodically changed. We obtained the wavelength of the sound wave from this periodic amplitude change measured in the free space and compared it with the theoretically calculated values.
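As a rough numerical illustration of the measurement principle (hypothetical signal amplitudes and a 1 kHz tone in air; not the authors' apparatus parameters), the composite of two equal-frequency signals whose relative phase grows linearly with distance has an amplitude that repeats once per wavelength, so the spacing of the amplitude maxima recovers the wavelength:

```python
import numpy as np

def composite_amplitude(d, wavelength, a_sound=1.0, a_induction=0.8):
    """Amplitude of the sum of the acoustic (diaphragm) signal and the
    induction signal, whose relative phase is 2*pi*d/lambda at distance d."""
    phase = 2 * np.pi * d / wavelength
    return np.sqrt(a_sound**2 + a_induction**2
                   + 2 * a_sound * a_induction * np.cos(phase))

def wavelength_from_amplitude_period(distances, amplitudes):
    """Estimate the wavelength as the mean spacing between successive
    maxima of the periodic composite-amplitude curve."""
    peaks = [i for i in range(1, len(amplitudes) - 1)
             if amplitudes[i] > amplitudes[i - 1]
             and amplitudes[i] >= amplitudes[i + 1]]
    return float(np.mean(np.diff(distances[peaks])))

lam = 343.0 / 1000.0                 # 1 kHz tone in air: ~0.343 m
d = np.linspace(0.0, 2.0, 4001)      # meter positions along the free field
a = composite_amplitude(d, lam)
est = wavelength_from_amplitude_period(d, a)
```

The estimate `est` matches the true wavelength, which is the essence of reading the wavelength off the periodic amplitude change.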
Tinnitus retraining therapy: a different view on tinnitus.
Jastreboff, Pawel J; Jastreboff, Margaret M
2006-01-01
Tinnitus retraining therapy (TRT) is a method for treating tinnitus and decreased sound tolerance, based on the neurophysiological model of tinnitus. This model postulates involvement of the limbic and autonomic nervous systems in all cases of clinically significant tinnitus and points out the importance of both conscious and subconscious connections, which are governed by principles of conditioned reflexes. The treatments for tinnitus and misophonia are based on the concept of extinction of these reflexes, labeled as habituation. TRT aims at inducing changes in the mechanisms responsible for transferring signal (i.e., tinnitus, or external sound in the case of misophonia) from the auditory system to the limbic and autonomic nervous systems, and through this, remove signal-induced reactions without attempting to directly attenuate the tinnitus source or tinnitus/misophonia-evoked reactions. As such, TRT is effective for any type of tinnitus regardless of its etiology. TRT consists of: (1) counseling based on the neurophysiological model of tinnitus, and (2) sound therapy (with or without instrumentation). The main role of counseling is to reclassify tinnitus into the category of neutral stimuli. The role of sound therapy is to decrease the strength of the tinnitus signal. It is crucial to assess and treat tinnitus, decreased sound tolerance, and hearing loss simultaneously. Results from various groups have shown that TRT can be an effective method of treatment. Copyright (c) 2006 S. Karger AG, Basel.
New insights into insect's silent flight. Part II: sound source and noise control
NASA Astrophysics Data System (ADS)
Xue, Qian; Geng, Biao; Zheng, Xudong; Liu, Geng; Dong, Haibo
2016-11-01
The flapping flight of aerial animals achieves excellent aerodynamic performance while generating little noise. In this study, the unsteady flow and acoustic characteristics of the flapping wing are numerically investigated for three-dimensional (3D) models of the cicada Tibicen linnei in free forward flight. A single cicada wing is modelled as a membrane with prescribed motion reconstructed by Wan et al. (2015). The flow field and acoustic field around the flapping wing are solved with an immersed-boundary-method-based incompressible flow solver and a linearized-perturbed-compressible-equations-based acoustic solver. The 3D simulation allows examination of both the directivity and the frequency composition of the produced sound in full space. The mechanism of sound generation by the flapping wing is analyzed through correlations between acoustic signals and flow features. Along with a flexible wing model, a rigid wing model is also simulated; the results from the two cases are compared to investigate the effects of wing flexibility on sound generation. This study is supported by NSF CBET-1313217 and AFOSR FA9550-12-1-0071.
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low-frequency, pure tone finite sources is always less than that predicted by point source theory, and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight-inch loudspeaker and a 30-inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data are adjusted to account for incomplete spatial averaging in the experiment. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
Localizing nearby sound sources in a classroom: Binaural room impulse responses
NASA Astrophysics Data System (ADS)
Shinn-Cunningham, Barbara G.; Kopco, Norbert; Martin, Tara J.
2005-05-01
Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
INTEGRATED HUMAN EXPOSURE SOURCE-TO-DOSE MODELING
The NERL human exposure research program is designed to provide a sound, scientifically-based approach to understanding how people are actually exposed to pollutants and the factors and pathways influencing exposure and dose. This research project serves to integrate and incorpo...
Effective Contracting: Trends and Lessons-Learned
2011-01-25
present a briefing on their current state of medical contracting; covering trends and lessons learned from over the past 24 months. Their perspective...basing • Future opportunities in the MHS with a focus on strategic sourcing. • SECDEF guidance. 2011 MHS Conference. HCAA Mission: To provide sound...
Environmental Assessment of Installation Development at McConnell Air Force Base, Kansas
2007-05-01
characteristics of the noise source, distance between source and receptor, receptor sensitivity, weather, and time of day. Sound is measured with...bulk fuel storage and transfer, fuel dispensing, service stations, solvent degreasing, surface coating, and chemical usage/fugitive emissions. The...and weathered Permian bedrock. The deeper aquifer is within calcareous shales of the Wellington Formation. Groundwater flow follows the local
Computed narrow-band azimuthal time-reversing array retrofocusing in shallow water.
Dungan, M R; Dowling, D R
2001-10-01
The process of acoustic time reversal sends sound waves back to their point of origin in reciprocal acoustic environments even when the acoustic environment is unknown. The properties of the time-reversed field commonly depend on the frequency of the original signal, the characteristics of the acoustic environment, and the configuration of the time-reversing transducer array (TRA). In particular, vertical TRAs are predicted to produce horizontally confined foci in environments containing random volume refraction. This article validates and extends this prediction to shallow water environments via monochromatic Monte Carlo propagation simulations (based on parabolic equation computations using RAM). The computational results determine the azimuthal extent of a TRA's retrofocus in shallow-water sound channels either having random bottom roughness or containing random internal-wave-induced sound speed fluctuations. In both cases, randomness in the environment may reduce the predicted azimuthal angular width of the vertical TRA retrofocus to as little as several degrees (compared to 360 degrees for uniform environments) for source-array ranges from 5 to 20 km at frequencies from 500 Hz to 2 kHz. For both types of randomness, power law scalings are found to collapse the calculated azimuthal retrofocus widths for shallow sources over a variety of acoustic frequencies, source-array ranges, water column depths, and random fluctuation amplitudes and correlation scales. Comparisons are made between retrofocusing on shallow and deep sources, and in strongly and mildly absorbing environments.
Psychophysics and Neuronal Bases of Sound Localization in Humans
Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.
2013-01-01
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization and some of the competing models of the representation of auditory space in humans. PMID:23886698
Urban sound energy reduction by means of sound barriers
NASA Astrophysics Data System (ADS)
Iordache, Vlad; Ionita, Mihai Vlad
2018-02-01
In urban environments, various heating, ventilation, and air conditioning appliances designed to maintain indoor comfort become vectors of acoustic pollution because of the sound energy they produce. Acoustic barriers are the recommended method for reducing sound energy in the urban environment. The current sizing method for these barriers is cumbersome and impractical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies, and for several barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.
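For reference, a common closed-form alternative to chart-based barrier sizing is Maekawa's empirical relation, IL ≈ 10·log10(3 + 20N) dB with Fresnel number N = 2δ/λ, where δ is the extra path length over the barrier edge. The sketch below uses an illustrative source/barrier/receiver geometry, not the article's method or data:

```python
import math

def fresnel_number(source, barrier_top, receiver, freq, c=343.0):
    """Fresnel number N = 2*delta/lambda, where delta is the detour over
    the barrier edge compared with the direct source-receiver path."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    delta = (dist(source, barrier_top) + dist(barrier_top, receiver)
             - dist(source, receiver))
    lam = c / freq
    return 2.0 * delta / lam

def maekawa_insertion_loss(n):
    """Maekawa's empirical barrier attenuation in dB, IL = 10 log10(3 + 20 N),
    valid in and just outside the shadow zone."""
    return 10.0 * math.log10(3.0 + 20.0 * n)

# Hypothetical 3D geometry: source 1 m high, 3 m barrier at 5 m,
# receiver 1.5 m high at 15 m (coordinates in meters)
S, T, R = (0.0, 0.0, 1.0), (5.0, 0.0, 3.0), (15.0, 0.0, 1.5)
ils = [maekawa_insertion_loss(fresnel_number(S, T, R, f))
       for f in (125.0, 500.0, 2000.0)]
```

For a fixed geometry the Fresnel number grows with frequency, so the predicted attenuation increases from low to high frequency, which is why barrier charts are tabulated per frequency band.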
Spectral analysis methods for vehicle interior vibro-acoustics identification
NASA Astrophysics Data System (ADS)
Hosseini Fouladi, Mohammad; Nor, Mohd. Jailani Mohd.; Ariffin, Ahmad Kamal
2009-02-01
Noise has various effects on human comfort, performance, and health. Sounds are analysed by the human brain based on their frequencies and amplitudes. In a dynamic system, the transmission of sound and vibration depends on the frequency and direction of the input motion and on the characteristics of the output. Automotive manufacturers therefore invest considerable effort and money to improve and enhance the vibro-acoustic performance of their products. The enhancement effort can be very difficult and time-consuming if one relies only on a 'trial and error' method without prior knowledge about the sources themselves. Complex noise inside a vehicle cabin originates from various sources and travels through many pathways. The first stage of sound quality refinement is to find the source. It is vital for automotive engineers to identify the dominant noise sources, such as engine noise, exhaust noise, and noise due to vibration transmission inside the vehicle. The purpose of this paper is to find the vibro-acoustical sources of noise in a passenger vehicle compartment. The spectral analysis method is much faster than 'trial and error' methods, in which parts must be separated to measure transfer functions. Also, with the spectral analysis method, signals can be recorded in real operational conditions, which leads to more consistent results. A multi-channel analyser is utilised to measure and record the vibro-acoustical signals. Computational algorithms are also employed to identify the contribution of various sources to the measured interior signal. These achievements can be utilised to detect, control, and optimise the interior noise performance of road transport vehicles.
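A toy illustration of the spectral idea: ordinary coherence between a source reference signal and the interior microphone flags the frequencies at which that source dominates. The "engine" tone, "road" noise, and mixing weights below are invented for the sketch and are not the authors' multi-channel procedure:

```python
import numpy as np

def welch_coherence(x, y, fs, nperseg=512):
    """Magnitude-squared coherence via Welch-averaged cross-spectra.
    Values near 1 mean y is linearly related to x at that frequency."""
    win = np.hanning(nperseg)
    step = nperseg // 2
    sxx = np.zeros(nperseg // 2 + 1)
    syy = np.zeros(nperseg // 2 + 1)
    sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[start:start + nperseg])
        Y = np.fft.rfft(win * y[start:start + nperseg])
        sxx += np.abs(X) ** 2
        syy += np.abs(Y) ** 2
        sxy += np.conj(X) * Y
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.abs(sxy) ** 2 / (sxx * syy)

rng = np.random.default_rng(0)
fs = 2048
t = np.arange(fs * 8) / fs

# Hypothetical source references: an "engine" tone and broadband "road" noise
engine = np.sin(2 * np.pi * 120.0 * t)
road = rng.standard_normal(t.size)

# Interior microphone signal: engine-dominated, plus road and sensor noise
mic = engine + 0.3 * road + 0.1 * rng.standard_normal(t.size)

f, coh_engine = welch_coherence(engine, mic, fs)
_, coh_road = welch_coherence(road, mic, fs)

# Coherence near 1 at 120 Hz flags the engine as dominant at that frequency
idx = int(np.argmin(np.abs(f - 120.0)))
```

Because the references are recorded simultaneously with the interior signal under real operating conditions, no physical disassembly is needed to rank the source contributions.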
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air.
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins ( Tursiops truncatus ) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being "targeted." They did not respond when hearing another group member's cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals.
Exploring positive hospital ward soundscape interventions.
Mackrill, J; Jennings, P; Cain, R
2014-11-01
Sound is often considered a negative aspect of an environment that needs mitigating, particularly in hospitals. It is worthwhile, however, to consider how subjective responses to hospital sounds can be made more positive. The authors identified natural sound, steady-state sound, and written sound source information as having the potential to do this. Listening evaluations were conducted with 24 participants who rated their emotional (Relaxation) and cognitive (Interest and Understanding) responses to a variety of hospital ward soundscape clips across these three interventions. A repeated measures ANOVA revealed that the 'Relaxation' response was significantly affected (η² = 0.05, p = 0.001) by the interventions, with natural sound producing a 10.1% more positive response. Most interestingly, written sound source information produced a 4.7% positive change in response. The authors conclude that exploring different ways to improve the sounds of a hospital offers subjective benefits that move beyond sound level reduction. This is an area for future work to focus upon in an effort to achieve more positively experienced hospital soundscapes and environments. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea
Lee, Norman; Elias, Damian O.; Mason, Andrew C.
2009-01-01
Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794
Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea
NASA Astrophysics Data System (ADS)
Oshinsky, Michael Lee
A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ: the two ears are contained in one air sac and are connected by a cuticular bridge that has a flexible spring-like structure at its center. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents and present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons.
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic walking behavior in Ormia ochracea. I also quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.
Shock waves and the Ffowcs Williams-Hawkings equation
NASA Technical Reports Server (NTRS)
Isom, Morris P.; Yu, Yung H.
1991-01-01
The expansion of the double divergence of the generalized Lighthill stress tensor, which is the basis of the concept of the role played by shock and contact discontinuities as sources of dipole and monopole sound, is presently applied to the simplest transonic flows: (1) a fixed wing in steady motion, for which there is no sound field, and (2) a hovering helicopter blade that produces a sound field. Attention is given to the contribution of the shock to sound from the viewpoint of energy conservation; the shock emerges as the source of only the quantity of entropy.
NASA Astrophysics Data System (ADS)
Shinn-Cunningham, Barbara
2003-04-01
One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
NASA Technical Reports Server (NTRS)
Fuller, C. R.; Hansen, C. H.; Snyder, S. D.
1991-01-01
Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method, a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force tends not to modify the total panel response amplitude significantly, but rather to restructure the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.
NASA Astrophysics Data System (ADS)
Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.
2017-12-01
The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on combining the vortex-shedding-resolved flow available from an Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment of broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, the Acoustic Perturbation Equations (APE-4) are solved in the time domain for the sound propagation. Results of the method's application to two aerofoil benchmark cases, with both sharp and blunt trailing edges, are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and incorporated. Encouraging results have been obtained for the benchmark test cases using the new technique, which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.
Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David
2012-10-01
The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first, experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second, numerical step, the experimental data are time-reversed and used as input for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopole sources are first considered, either monochromatic or with narrow-band or wide-band frequency content. The source position is estimated with an error smaller than the wavelength. An application to a dipole sound source shows that this type of source is also very satisfactorily characterized.
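The two-step idea (record at an array, then numerically back-propagate the time-reversed signals) can be illustrated in free space without a mean flow, a strong simplification of the linearized-Euler step above. The array geometry, source position, and waveform below are invented for the sketch:

```python
import numpy as np

c = 343.0                              # speed of sound, m/s
fs = 51200                             # sampling rate, Hz
t = np.arange(int(0.02 * fs)) / fs     # 20 ms record

# Hypothetical 16-microphone linear array at y = 0 and a source at (0.1, 1.0) m
mics = [(x, 0.0) for x in np.linspace(-0.5, 0.5, 16)]
src = (0.1, 1.0)

def dist(p, q):
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

def burst(tt):
    """A short 2 kHz tone burst centred 2 ms after emission."""
    return np.sin(2 * np.pi * 2000.0 * tt) * np.exp(-((tt - 0.002) / 0.0005) ** 2)

# Step 1: "record" the burst at each microphone with its propagation delay
records = [burst(t - dist(m, src) / c) for m in mics]

# Step 2: time-reverse the records and back-propagate them to candidate points;
# the reversed waves add coherently only at the true source position
def retrofocus_energy(p):
    total = np.zeros_like(t)
    for m, r in zip(mics, records):
        delay = dist(m, p) / c
        # delay the time-reversed record by the mic-to-point travel time
        total += np.interp(t, t + delay, r[::-1], left=0.0, right=0.0)
    return float(np.sum(total ** 2))

xs = np.linspace(-0.4, 0.4, 17)
energies = [retrofocus_energy((x, 1.0)) for x in xs]
x_est = float(xs[int(np.argmax(energies))])
```

Scanning candidate points and keeping the retrofocus-energy maximum recovers the source position; in the real method the back-propagation is done by the flow-aware linearized Euler solver rather than by free-space delays.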
Different categories of living and non-living sound-sources activate distinct cortical networks
Engel, Lauren R.; Frum, Chris; Puce, Aina; Walker, Nathan A.; Lewis, James W.
2009-01-01
With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places—categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and subcortical regions previously implicated in observation-execution matching, consistent with “embodied” and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. 
Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception. PMID:19465134
The directivity of the sound radiation from panels and openings.
Davy, John L
2009-06-01
This paper presents a method for calculating the directivity of the radiation of sound from a panel or opening, whose vibration is forced by the incidence of sound from the other side. The directivity of the radiation depends on the angular distribution of the incident sound energy in the room or duct in whose wall or end the panel or opening occurs. The angular distribution of the incident sound energy is predicted using a model which depends on the sound absorption coefficient of the room or duct surfaces. If the sound source is situated in the room or duct, the sound absorption coefficient model is used in conjunction with a model for the directivity of the sound source. For angles of radiation approaching 90 degrees to the normal to the panel or opening, the effect of the diffraction by the panel or opening, or by the finite baffle in which the panel or opening is mounted, is included. A simple empirical model is developed to predict the diffraction of sound into the shadow zone when the angle of radiation is greater than 90 degrees to the normal to the panel or opening. The method is compared with published experimental results.
Caldwell, Michael S.; Bee, Mark A.
2014-01-01
The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182
Sex differences present in auditory looming perception, absent in auditory recession
NASA Astrophysics Data System (ADS)
Neuhoff, John G.; Seifritz, Erich
2005-04-01
When predicting the arrival time of an approaching sound source, listeners typically exhibit an anticipatory bias that affords a margin of safety in dealing with looming objects. The looming bias has been demonstrated behaviorally in the laboratory and in the field (Neuhoff 1998, 2001), neurally in fMRI studies (Seifritz et al., 2002), and comparatively in non-human primates (Ghazanfar, Neuhoff, and Logothetis, 2002). In the current work, male and female listeners were presented with three-dimensional looming sound sources and asked to press a button when the source was at the point of closest approach. Females exhibited a significantly greater anticipatory bias than males. Next, listeners were presented with sounds that either approached or receded and then stopped at three different terminal distances. Consistent with the time-to-arrival judgments, female terminal distance judgments for looming sources were significantly closer than male judgments. However, there was no difference between male and female terminal distance judgments for receding sounds. Taken together with the converging behavioral, neural, and comparative evidence, the current results illustrate the environmental salience of looming sounds and suggest that the anticipatory bias for auditory looming may have been shaped by evolution to provide a selective advantage in dealing with looming objects.
NASA Astrophysics Data System (ADS)
Mironov, M. A.
2011-11-01
A method of allowing for the spatial sound field structure in designing the sound-absorbing structures for turbojet aircraft engine ducts is proposed. The acoustic impedance of a duct should be chosen so as to prevent the reflection of the primary sound field, which is generated by the sound source in the absence of the duct, from the duct walls.
Quantifying the influence of flow asymmetries on glottal sound sources in speech
NASA Astrophysics Data System (ADS)
Erath, Byron; Plesniak, Michael
2008-11-01
Human speech is made possible by the air flow interaction with the vocal folds. During phonation, asymmetries in the glottal flow field may arise from flow phenomena (e.g. the Coanda effect) as well as from pathological vocal fold motion (e.g. unilateral paralysis). In this study, the effects of flow asymmetries on glottal sound sources were investigated. Dynamically-programmable 7.5 times life-size vocal fold models with 2 degrees-of-freedom (linear and rotational) were constructed to provide a first-order approximation of vocal fold motion. Important parameters (Reynolds, Strouhal, and Euler numbers) were scaled to physiological values. Normal and abnormal vocal fold motions were synthesized, and the velocity field and instantaneous transglottal pressure drop were measured. Variability in the glottal jet trajectory necessitated sorting of the data according to the resulting flow configuration. The dipole sound source is related to the transglottal pressure drop via acoustic analogies. Variations in the transglottal pressure drop (and subsequently the dipole sound source) arising from flow asymmetries are discussed.
Psychophysical evidence for auditory motion parallax.
Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz
2018-04-17
Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
Auditory event perception: the source-perception loop for posture in human gait.
Pastore, Richard E; Flint, Jesse D; Gaston, Jeremy R; Solomon, Matthew J
2008-01-01
There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright-stooped) of the walker who generated acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies employed by listeners in making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.
Nonlinear theory of shocked sound propagation in a nearly choked duct flow
NASA Technical Reports Server (NTRS)
Myers, M. K.; Callegari, A. J.
1982-01-01
The development of shocks in the sound field propagating through a nearly choked duct flow is analyzed by extending a quasi-one dimensional theory. The theory is applied to the case in which sound is introduced into the flow by an acoustic source located in the vicinity of a near-sonic throat. Analytical solutions for the field are obtained which illustrate the essential features of the nonlinear interaction between sound and flow. Numerical results are presented covering ranges of variation of source strength, throat Mach number, and frequency. It is found that the development of shocks leads to appreciable attenuation of acoustic power transmitted upstream through the near-sonic flow. It is possible, for example, that the power loss in the fundamental harmonic can be as much as 90% of that introduced at the source.
Perception of Water-Based Masking Sounds—Long-Term Experiment in an Open-Plan Office
Hongisto, Valtteri; Varjo, Johanna; Oliva, David; Haapakangas, Annu; Benway, Evan
2017-01-01
A certain level of masking sound is necessary to control the disturbance caused by speech sounds in open-plan offices. The sound is usually provided with evenly distributed loudspeakers. Pseudo-random noise is often used as a source of artificial sound masking (PRMS). A recent laboratory experiment suggested that water-based masking sound (WBMS) could be more favorable than PRMS. The purpose of our study was to determine how the employees perceived different WBMSs compared to PRMS. The experiment was conducted in an open-plan office of 77 employees who had been accustomed to work under PRMS (44 dB LAeq). The experiment consisted of five masking conditions: the original PRMS, four different WBMSs and return to the original PRMS. The exposure time of each condition was 3 weeks. The noise level was nearly equal between the conditions (43–45 dB LAeq) but the spectra and the nature of the sounds were very different. A questionnaire was completed at the end of each condition. Acoustic satisfaction was worse during the WBMSs than during the PRMS. The disturbance caused by three out of four WBMSs was larger than that of PRMS. Several attributes describing the sound quality itself were in favor of PRMS. Colleagues' speech sounds disturbed more during WBMSs. None of the WBMSs produced better subjective ratings than PRMS. Although the first WBMS was equal to the PRMS for several variables, the overall results cannot be seen to support the use of WBMSs in office workplaces. Because the experiment suffered from some methodological weaknesses, conclusions about the adequacy of WBMSs cannot yet be drawn. PMID:28769834
2010-01-01
Comparative Effectiveness Research, or other efforts to determine best practices and to develop guidelines based on meta-analysis and evidence-based medicine. An...authoritative reviews or other evidence-based medicine sources, but they have been made unambiguous and computable – a process which sounds...best practice recommendation created through an evidence-based medicine (EBM) development process. The lifecycle envisions four stages of refinement
Noise abatement in a pine plantation
R. E. Leonard; L. P. Herrington
1971-01-01
Observations on sound propagation were made in two red pine plantations. Measurements were taken of attenuation of prerecorded frequencies at various distances from the sound source. Sound absorption was strongly dependent on frequencies. Peak absorption was at 500 Hz.
Hearing in three dimensions: Sound localization
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Kistler, Doris J.
1990-01-01
The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of auditory experience. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
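The salience of interaural time differences discussed above can be illustrated with Woodworth's classic rigid-sphere approximation, ITD ≈ (a/c)(sin θ + θ). This is a textbook simplification (frequency-independent, spherical head), not a result from this study; the default head radius is a conventional assumed value.

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference in seconds for a source at
    the given azimuth, using Woodworth's rigid-sphere formula:
    ITD = (a / c) * (sin(theta) + theta)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)
```

For a source at 90° azimuth this gives roughly 0.65 ms, the usual order of magnitude quoted for human listeners.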
Jiang, Tinglei; Long, Zhenyu; Ran, Xin; Zhao, Xue; Xu, Fei; Qiu, Fuyuan; Kanwal, Jagmeet S.
2016-01-01
Bats vocalize extensively within different social contexts. The type and extent of information conveyed via their vocalizations and their perceptual significance, however, remains controversial and difficult to assess. Greater tube-nosed bats, Murina leucogaster, emit calls consisting of long rectangular broadband noise burst (rBNBl) syllables during aggression between males. To experimentally test the behavioral impact of these sounds on feeding, we deployed an approach and place-preference paradigm. Two food trays were placed on opposite sides and within different acoustic microenvironments, created by sound playback, within a specially constructed tent. Specifically, we tested whether the presence of rBNBl sounds at a food source effectively deters the approach of male bats in comparison to echolocation sounds and white noise. In each case, contrary to our expectation, males preferred to feed at a location where rBNBl sounds were present. We propose that the species-specific rBNBl provides contextual information, not present within non-communicative sounds, to facilitate approach towards a food source. PMID:27815241
What the Toadfish Ear Tells the Toadfish Brain About Sound.
Edds-Walton, Peggy L
2016-01-01
Of the three, paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.
The Coast Artillery Journal. Volume 65, Number 4, October 1926
1926-10-01
sound. a. Sound location of airplanes by binaural observation in all antiaircraft regiments. b. Sound ranging on report of enemy guns, together with...Direction finding by binaural observation. [Subparagraphs 30 a and 30 c (1).] This applies to continuous sounds such as propellor noises. b. Point...impacts. 32. The so-called binaural sense is our means of sensing the direction of a sound source. When we hear a sound we judge the approximate
Object localization using a biosonar beam: how opening your mouth improves localization.
Arditi, G; Weiss, A J; Yovel, Y
2015-08-01
Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.
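The frequency- and aperture-dependent directionality that the abstract exploits can be sketched with the standard baffled circular-piston model, whose far-field pattern is D(θ) = |2 J₁(ka sin θ)/(ka sin θ)|: increasing the frequency or the emitter aperture (mouth gape) narrows the beam. This is a common idealization used purely for illustration, not necessarily the authors' beam model; the aperture radius and frequencies below are hypothetical values.

```python
import math

def bessel_j1(x, n=2000):
    """J1 via its integral representation, (1/pi) * integral over [0, pi]
    of cos(tau - x*sin(tau)) d(tau), using a simple midpoint rule."""
    s = 0.0
    for k in range(n):
        tau = (k + 0.5) * math.pi / n
        s += math.cos(tau - x * math.sin(tau))
    return s / n

def piston_directivity(theta_deg, freq_hz, aperture_radius_m, c=343.0):
    """Normalized far-field pressure |2 J1(ka sin(theta)) / (ka sin(theta))|
    of a baffled circular piston of radius a at wavenumber k."""
    k = 2.0 * math.pi * freq_hz / c
    x = k * aperture_radius_m * math.sin(math.radians(theta_deg))
    if abs(x) < 1e-9:
        return 1.0  # on-axis limit
    return abs(2.0 * bessel_j1(x) / x)
```

Since the first null sits at ka sin θ ≈ 3.83, doubling the frequency or the gape roughly halves the beamwidth, which is the trade-off between localization accuracy and coverage the abstract describes.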
Hemispherical breathing mode speaker using a dielectric elastomer actuator.
Hosoya, Naoki; Baba, Shun; Maeda, Shingo
2015-10-01
Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple loudspeakers on a chassis, is typically used. However, such a configuration is not a point sound source if the size of the loudspeaker is large relative to the target sound field. This study investigates a small lightweight loudspeaker using a dielectric elastomer actuator vibrating in the breathing mode (the pulsating mode, such as the expansion and contraction of a balloon). Acoustic testing with regard to repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicates that dielectric elastomer loudspeakers may be feasible.
The role of reverberation-related binaural cues in the externalization of speech.
Catic, Jasmina; Santurette, Sébastien; Dau, Torsten
2015-08-01
The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
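The manipulation described, varying how much of the response carries binaural reverberation information, can be caricatured as scaling the late (reverberant) part of each ear's impulse response before convolving it with the speech signal. This is a minimal sketch under assumed parameters: the split index and gain are illustrative, and the paper's actual BRIR modification procedure is more involved.

```python
def convolve(x, h):
    """Direct-form FIR convolution (fine for short illustrative signals)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def render_with_modified_brir(signal, brir_l, brir_r, split_idx, late_gain):
    """Keep the direct-sound part of each BRIR intact, scale the late
    (reverberant) part by late_gain, then convolve per ear. Both
    split_idx and late_gain are hypothetical illustration parameters."""
    def modify(h):
        return h[:split_idx] + [late_gain * v for v in h[split_idx:]]
    return convolve(signal, modify(brir_l)), convolve(signal, modify(brir_r))
```

Setting late_gain to zero per ear removes the reverberant tail entirely, the extreme case against which intermediate amounts of binaural reflection information can be compared.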
Meaningless artificial sound and its application in urban soundscape research
NASA Astrophysics Data System (ADS)
de Coensel, Bert; Botteldooren, Dick
2004-05-01
Urban areas are increasingly being overwhelmed with uninteresting (traffic) noise. Designing a more matching soundscape for urban parks, quiet backyards, shopping areas, etc., clearly deserves more attention. Urban planners, being architects rather than musical composers, like to have a set of "objective" indicators of the urban soundscape at their disposal. In deriving such indicators, one can assume that the soundscape is appreciated as a conglomerate of sound events, recognized as originating from individual sources by people evaluating it. A more recent line of research assumes that the soundscape as a whole evokes particular emotions. In this research project we follow the latter, more holistic view. Given this choice, the challenge is to create a test setup where subjects are not tempted to react to a sound in a cognitive way, analyzing it into its individual components. Meaningless sound is therefore preferred. After selection of appealing sounds for a given context by subjects, objective indicators can then be extracted. To generate long, complex, but meaningless sound fragments not containing repetition, based on a limited number of parameters, swarm technology is used. This technique has previously been used for creating artificial music and has proved to be very useful.
NASA Astrophysics Data System (ADS)
Li, Xuebao; Cui, Xiang; Lu, Tiebing; Ma, Wenzuo; Bian, Xingming; Wang, Donglai; Hiziroglu, Huseyin
2016-03-01
The corona-generated audible noise (AN) has become one of the decisive factors in the design of high voltage direct current (HVDC) transmission lines. The AN from transmission lines can be attributed to sound pressure pulses generated by the multiple corona sources formed on the conductors. In this paper, the time-domain characteristics of the sound pressure pulses generated by DC corona discharges formed over the surfaces of stranded conductors are investigated systematically in a laboratory setting using a corona cage structure. The amplitudes of the sound pressure pulses and their time intervals are extracted by observing a direct correlation between corona current pulses and corona-generated sound pressure pulses. Based on the statistical characteristics, a stochastic model is presented for simulating the sound pressure pulses due to DC corona discharges occurring on conductors. The proposed stochastic model is validated by comparing the calculated and measured A-weighted sound pressure level (SPL). The proposed model is then used to analyze the influence of the pulse amplitudes and pulse rate on the SPL. Furthermore, a mathematical relationship is found between the SPL and conductor diameter, electric field, and radial distance.
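The stochastic pulse-train idea can be sketched as Poisson-distributed pulse arrivals with random amplitudes, from which an overall sound pressure level follows. The exponential pulse shape and the rate and amplitude statistics below are hypothetical placeholders: the paper derives these from measured corona pulse statistics and applies A-weighting, which is omitted here.

```python
import math
import random

def simulate_pulse_train(rate_hz, amp_mean_pa, duration_s=1.0, fs=40000, seed=1):
    """Poisson arrivals of exponentially decaying pressure pulses
    (hypothetical shape standing in for measured corona pulses)."""
    random.seed(seed)
    n = int(duration_s * fs)
    p = [0.0] * n
    t = random.expovariate(rate_hz)
    while t < duration_s:
        amp = random.gauss(amp_mean_pa, 0.3 * amp_mean_pa)
        i0 = int(t * fs)
        for j in range(i0, min(i0 + 200, n)):  # ~5 ms decaying tail
            p[j] += amp * math.exp(-(j - i0) / (0.001 * fs))
        t += random.expovariate(rate_hz)
    return p

def spl_db(p, p_ref=20e-6):
    """Unweighted RMS sound pressure level in dB re 20 uPa."""
    rms = math.sqrt(sum(x * x for x in p) / len(p))
    return 20.0 * math.log10(rms / p_ref)
```

Raising the pulse rate or the mean pulse amplitude raises the resulting SPL, which is exactly the dependence the paper's model is used to analyze.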
Effect of diffusive and nondiffusive surfaces combinations on sound diffusion
NASA Astrophysics Data System (ADS)
Shafieian, Masoume; Kashani, Farokh Hodjat
2010-05-01
One of the goals of room acoustics, especially in small to medium rooms, is sound diffusion at low frequencies, which has been the subject of much research. Sound diffusion is a very important consideration in acoustics because it minimizes the coherent reflections that cause problems. It also tends to make an enclosed space sound larger than it is. Diffusion is an excellent alternative or complement to sound absorption in acoustic treatment because it removes little energy, which means it can be used to reduce reflections effectively while still leaving an ambient or live-sounding space. The distribution of diffusive and nondiffusive surfaces on room walls affects sound diffusion in the room, but the amount, combination, and location of these surfaces remain open questions. This paper investigates the effects of these issues on the room's acoustic frequency response in different parts of the room with different source-receiver locations. A room acoustic model based on the wave method, which is accurate and convenient for low frequencies in such rooms, is implemented. Different distributions of acoustic surfaces on the room walls are introduced to the model, and room frequency response results are calculated. For comparison, some measurement results are presented. Finally, some suggestions are made for a smoother frequency response in small and medium rooms.
An open access database for the evaluation of heart sound algorithms.
Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D
2016-12-01
In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.
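As one example of the commonly used heart sound segmentation methods the abstract mentions, a normalized Shannon-energy envelope is a frequent first step for locating S1/S2 sounds in a PCG. The frame and hop sizes here are illustrative, and actual Challenge entries used considerably richer pipelines.

```python
import math

def shannon_energy_envelope(x, frame=40, hop=20):
    """Frame-wise average Shannon energy of a normalized signal; peaks in
    this envelope roughly mark heart sound events (a sketch, not the
    database's reference segmentation)."""
    x_max = max(abs(v) for v in x) or 1.0
    xn = [v / x_max for v in x]  # normalize to [-1, 1]
    env = []
    for start in range(0, len(xn) - frame + 1, hop):
        e = 0.0
        for v in xn[start:start + frame]:
            s = v * v
            e += -s * math.log(s) if s > 0.0 else 0.0
        env.append(e / frame)
    return env
```

Shannon energy (-s·log s) emphasizes medium-intensity samples over both background noise and rare large spikes, which is why it is popular as an envelope for PCG segmentation.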
Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C
2006-03-20
In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being “targeted.” They did not respond when hearing another group member’s cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals. PMID:29445350
The influence of crowd density on the sound environment of commercial pedestrian streets.
Meng, Qi; Kang, Jian
2015-04-01
Commercial pedestrian streets are very common in China and Europe, with many situated in historic or cultural centres. The environments of these streets are important, including their sound environments. The objective of this study is to explore the relationships between the crowd density and the sound environments of commercial pedestrian streets. On-site measurements were performed at the case study site in Harbin, China, and a questionnaire was administered. The sound pressure measurements showed that the crowd density has an insignificant effect on sound pressure below 0.05 persons/m2, whereas when the crowd density is greater than 0.05 persons/m2, the sound pressure increases with crowd density. The sound sources were analysed, showing that several typical sound sources, such as traffic noise, can be masked by the sounds resulting from dense crowds. The acoustic analysis showed that crowd densities outside the range of 0.10 to 0.25 persons/m2 exhibited lower acoustic comfort evaluation scores. In terms of audiovisual characteristics, the subjective loudness increases with greater crowd density, while the acoustic comfort decreases. The results for an indoor underground shopping street are also presented for comparison.
Soundscapes and the sense of hearing of fishes.
Fay, Richard
2009-03-01
Underwater soundscapes have probably played an important role in the adaptation of ears and auditory systems of fishes throughout evolutionary time, and for all species. These sounds probably contain important information about the environment and about most objects and events that confront the receiving fish so that appropriate behavior is possible. For example, the sounds from reefs appear to be used by at least some fishes for their orientation and migration. These sorts of environmental sounds should be considered much like "acoustic daylight" that continuously bathes all environments and contains information that all organisms can potentially use to form a sort of image of the environment. At present, however, we are generally ignorant of the nature of ambient sound fields impinging on fishes, and the adaptive value of processing these fields to resolve the multiple sources of sound. Our field has focused almost exclusively on the adaptive value of processing species-specific communication sounds, and has not considered the informational value of ambient "noise." Since all fishes can detect and process acoustic particle motion, including the directional characteristics of this motion, underwater sound fields are potentially more complex and information-rich than terrestrial acoustic environments. The capacities of one fish species (goldfish) to receive and make use of such sound source information have been demonstrated (sound source segregation and auditory scene analysis), and it is suggested that all vertebrate species have this capacity. A call is made to better understand underwater soundscapes, and the associated behaviors they determine in fishes. © 2009 ISZS, Blackwell Publishing and IOZ/CAS.
Possibilities of psychoacoustics to determine sound quality
NASA Astrophysics Data System (ADS)
Genuit, Klaus
For some years, acoustic engineers have increasingly become aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with regard to designing sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation of subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratory listening tests under free-field conditions with a single sound source and simple signals. Therefore, the results achieved cannot be transferred to complex sound situations with several spatially distributed sound sources without difficulty. Owing to the directionality and selectivity of human hearing, individual sound events can be singled out among many. As early as the late 1970s, a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals using physical and psychoacoustic procedures.
Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domains so that the signal components responsible for noise annoyance can be found. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine the sound quality, acoustic comfort, or annoyance of sound events.
Interactive Sound Propagation using Precomputation and Statistical Approximations
NASA Astrophysics Data System (ADS)
Antani, Lakulish
Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. The two approaches lie at opposite ends of the spectrum of interactive sound propagation techniques: the first emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second emphasizes efficiency by taking only the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.
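The precompute/runtime split described in the abstract can be sketched in miniature: offline, sum the geometric series of inter-patch reflections into a single transfer operator; at run time, one matrix-vector product yields all reflection orders at once. Everything below (patch count, form factors, reflectance) is an illustrative stand-in, not the thesis's actual radiance solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches = 64  # surface patches in a hypothetical scene

# Offline: precompute a transfer operator T that maps direct acoustic
# radiance at each patch to the radiance after all inter-patch
# reflections. A real system solves a radiance transport equation;
# here F is a made-up row-stochastic "form factor" matrix.
reflectance = 0.6
F = rng.random((n_patches, n_patches))
F /= F.sum(axis=1, keepdims=True)
T = np.linalg.inv(np.eye(n_patches) - reflectance * F)  # Neumann series sum

# Runtime: project the (moving) source onto the patches, then one
# matrix-vector product gives the multi-bounce radiance field.
direct = rng.random(n_patches)   # direct radiance from the current source
total = T @ direct               # all reflection orders in one product

assert total.shape == (n_patches,)
assert np.all(total >= direct - 1e-12)  # reflections only add energy
```

The inverse converges because the spectral radius of `reflectance * F` is below one; the runtime cost is independent of how many reflection orders the matrix encodes.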
Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H
2016-08-01
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to assess binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
The use of an active controlled enclosure to attenuate sound radiation from a heavy radiator
NASA Astrophysics Data System (ADS)
Sun, Yao; Yang, Tiejun; Zhu, Minggang; Pan, Jie
2017-03-01
Active structural acoustic control usually experiences difficulty in the control of heavy sources or sources where direct application of control forces is not practical. To overcome this difficulty, an active controlled enclosure, which forms a cavity with both flexible and open boundaries, is employed. This configuration permits indirect implementation of active control in which the control inputs can be applied to subsidiary structures other than the sources. To determine the control effectiveness of the configuration, the vibro-acoustic behavior of the system, which consists of a top plate with an opening, a sound cavity and a source panel, is investigated in this paper. A complete mathematical model of the system is formulated using modified Fourier series formulations, and the governing equations are solved using the Rayleigh-Ritz method. The coupling mechanisms of a partly opened cavity and a plate are analysed in terms of modal responses and directivity patterns. Furthermore, to attenuate the sound power radiated from both the top panel and the opening, two strategies are studied: minimizing the total radiated power and cancelling the volume velocity. Moreover, three control configurations are compared: a point force on the control panel (structural control), a sound source in the cavity (acoustic control), and hybrid structural-acoustic control. In addition, the effects of the boundary conditions of the control panel on the sound radiation and control performance are discussed.
Material sound source localization through headphones
NASA Astrophysics Data System (ADS)
Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada
2012-09-01
In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of future implementation of acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. The Delta sound (click) is generated using the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through headphones, using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when the reverberation effect is considered, the localization accuracy does not significantly increase.
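The HRTF-convolution step used in experiments like this one can be sketched by convolving a mono stimulus with a left/right pair of head-related impulse responses. The HRIRs below are synthetic toys (a pure delay and attenuation standing in for a source on the listener's left), not the measured non-individual HRTFs of the study.

```python
import numpy as np

fs = 44100
click = np.zeros(2205)               # 50 ms buffer at 44.1 kHz
click[0] = 1.0                       # the "Delta" sound: a unit impulse

# Toy stand-ins for measured HRIRs: the right ear receives a delayed,
# attenuated copy of the left-ear signal.
hrir_left = np.zeros(128); hrir_left[0] = 1.0
hrir_right = np.zeros(128); hrir_right[30] = 0.5   # ~0.68 ms interaural delay

left = np.convolve(click, hrir_left)[: len(click)]
right = np.convolve(click, hrir_right)[: len(click)]

# The binaural localization cues emerge from the convolution itself:
itd_samples = np.argmax(np.abs(right)) - np.argmax(np.abs(left))
ild_db = 20 * np.log10(np.abs(right).max() / np.abs(left).max())
print(f"ITD = {itd_samples / fs * 1e3:.2f} ms, ILD = {ild_db:.1f} dB")
```

Replacing the toy HRIRs with measured ones (and adding a room impulse response for the reverberant condition) yields the headphone stimuli used in such localization tests.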
Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization
2018-01-01
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. 
Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it. PMID:29440556
Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo
2008-06-01
Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.
Sound field simulation and acoustic animation in urban squares
NASA Astrophysics Data System (ADS)
Kang, Jian; Meng, Yan
2005-04-01
Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for their soundscape design and improvement, acoustic animation tools would be of great importance for designers as well as for the public participation process, given that below a certain sound level, the soundscape evaluation depends mainly on the type of sounds rather than the loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulations. It then presents an acoustic animation tool currently being developed. In urban squares there are multiple dynamic sound sources, so the computation time becomes a main concern. Nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.]
200 kHz Commercial Sonar Systems Generate Lower Frequency Side Lobes Audible to Some Marine Mammals
Deng, Z. Daniel; Southall, Brandon L.; Carlson, Thomas J.; Xu, Jinshan; Martinez, Jayson J.; Weiland, Mark A.; Ingraham, John M.
2014-01-01
The spectral properties of pulses transmitted by three commercially available 200 kHz echo sounders were measured to assess the possibility that marine mammals might hear sound energy below the center (carrier) frequency that may be generated by transmitting short rectangular pulses. All three sounders were found to generate sound at frequencies below the center frequency and within the hearing range of some marine mammals, e.g. killer whales, false killer whales, beluga whales, Atlantic bottlenose dolphins, harbor porpoises, and others. The frequencies of these sub-harmonic sounds ranged from 90 to 130 kHz. These sounds were likely detectable by the animals over distances up to several hundred meters but were well below potentially harmful levels. The sounds generated by the sounders could potentially affect the behavior of marine mammals within fairly close proximity to the sources and therefore the exclusion of echo sounders from environmental impact analysis based solely on the center frequency output in relation to the range of marine mammal hearing should be reconsidered. PMID:24736608
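The mechanism behind these sub-carrier side lobes is easy to reproduce numerically: abruptly gating a 200 kHz carrier with a short rectangular window spreads energy into sinc-shaped side lobes well below the carrier. The pulse length and sampling rate below are illustrative, not those of the tested sounders.

```python
import numpy as np

fs = 1_000_000            # 1 MHz sampling rate for the simulation
fc = 200_000              # 200 kHz carrier, as in the echo sounders
pulse_len = 100e-6        # a short rectangular pulse (illustrative length)

t = np.arange(0, pulse_len, 1 / fs)
pulse = np.sin(2 * np.pi * fc * t)        # rectangular gating: abrupt on/off

spectrum = np.abs(np.fft.rfft(pulse, n=8192))
freqs = np.fft.rfftfreq(8192, 1 / fs)
spectrum_db = 20 * np.log10(spectrum / spectrum.max() + 1e-12)

# Side-lobe energy in the 90-130 kHz band, far below the carrier:
band = (freqs >= 90_000) & (freqs <= 130_000)
print(f"peak level in 90-130 kHz band: {spectrum_db[band].max():.1f} dB re carrier")
```

Lengthening the pulse or shaping its envelope (e.g. a raised-cosine ramp) narrows the main lobe and suppresses these side lobes, which is why the paper's finding hinges on short rectangular transmissions.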
The Scaling of Broadband Shock-Associated Noise with Increasing Temperature
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2013-01-01
A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. To isolate the relevant physics, the scaling of the BBSAN peak intensity level at the sideline observer location is examined. The equivalent source within the framework of an acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source, combined with accurate calculations of the propagation of sound through the jet shear layer using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for saturation of BBSAN with increasing stagnation temperature. The sources and vector Green's function have arguments involving the steady Reynolds-Averaged Navier-Stokes solution of the jet. It is proposed that saturation of BBSAN with increasing jet temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.
Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts
ERIC Educational Resources Information Center
Delalande, Francois; Cornara, Silvia
2010-01-01
One of the forms of first musical conduct is the exploration of sound sources. When young children produce sounds with any object, these sounds may surprise them and so they make the sounds again--not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…
Monitoring the Ocean Using High Frequency Ambient Sound
2008-10-01
even identify specific groups within the resident killer whale type (Puget Sound Southern Resident pods J, K and L) because these groups have... particular, the different populations of killer whales in the NE Pacific Ocean. This has been accomplished by detecting transient sounds with short... high sea state (the sound of spray), general shipping - close and distant, clanking and whale calls and clicking. These sound sources form the basis
2014-01-01
This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772
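A toy version of such phase-based spatial filtering can be built in a few lines, under simplifying assumptions: a single FFT frame, one frontal target, one broadband side interferer, and a crude made-up gain rule rather than the study's algorithm. The geometry (1 cm spacing) matches the abstract; everything else is illustrative.

```python
import numpy as np

fs = 16000
c = 343.0                 # speed of sound (m/s)
d = 0.01                  # 1 cm microphone spacing, as in the study

n = 4096
k_target = 112
target = np.sin(2 * np.pi * k_target * np.arange(n) / n)  # on-bin frontal tone
rng = np.random.default_rng(1)
noise = 0.2 * rng.standard_normal(n)                      # side interferer

# A side source reaches mic 2 later by d / c seconds; the frontal
# target arrives at both microphones simultaneously.
delay = d / c
f = np.fft.rfftfreq(n, 1 / fs)
noise_delayed = np.fft.irfft(np.fft.rfft(noise) * np.exp(-2j * np.pi * f * delay), n)
mic1 = target + noise
mic2 = target + noise_delayed

# Keep bins whose inter-microphone phase difference is small (frontal),
# attenuate the rest.
X1, X2 = np.fft.rfft(mic1), np.fft.rfft(mic2)
dphi = np.angle(X1 * np.conj(X2))
side_phase = 2 * np.pi * f * delay          # phase a side arrival would show
gain = np.where(np.abs(dphi) < 0.5 * side_phase, 1.0, 0.1)
out = np.fft.irfft(gain * X1, n)

err_in = np.sum((mic1 - target) ** 2)       # interferer energy before
err_out = np.sum((out - target) ** 2)       # residual after filtering
assert err_out < 0.2 * err_in
```

A real implementation works frame-by-frame on overlapping windows and must cope with reverberation smearing the phase cue, which is why the study's result in T60 = 540 ms rooms is notable.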
Meteorological effects on long-range outdoor sound propagation
NASA Technical Reports Server (NTRS)
Klug, Helmut
1990-01-01
Measurements of sound propagation over distances up to 1000 m were carried out with an impulse sound source offering reproducible, short time signals. Temperature and wind speed at several heights were monitored simultaneously; the meteorological data are used to determine the sound speed gradients according to the Monin-Obukhov similarity theory. The sound speed profile is compared to a corresponding prediction, gained through the measured travel time difference between direct and ground reflected pulse (which depends on the sound speed gradient). Positive sound speed gradients cause bending of the sound rays towards the ground yielding enhanced sound pressure levels. The measured meteorological effects on sound propagation are discussed and illustrated by ray tracing methods.
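The downward bending under a positive gradient can be quantified with a standard textbook result: under a locally linear sound-speed profile, ray paths are circular arcs of radius R = c0 / |dc/dz|. The numbers below are illustrative examples, not the paper's measured profiles.

```python
import numpy as np

# Downward refraction under a positive sound speed gradient
# (downwind propagation or a temperature inversion). Example values.
c0 = 340.0          # sound speed at the ground (m/s)
dcdz = 0.1          # sound speed gradient (1/s): c(z) = c0 + dcdz * z

# Under a linear profile, rays follow circular arcs of radius
# R = c0 / |dc/dz|; a positive gradient curves them back to the ground.
R = c0 / abs(dcdz)                       # 3400 m radius of curvature

# Apex height of a ray launched horizontally that returns to the
# ground after a horizontal distance x (circle geometry):
x = 1000.0                               # propagation distance from the paper
h = R - np.sqrt(R**2 - (x / 2) ** 2)     # mid-path height of the arc
print(f"ray curvature radius: {R:.0f} m, arc apex height: {h:.1f} m")
```

Even a modest 0.1 s⁻¹ gradient returns grazing rays to the ground within the 1000 m range of the measurements, which is consistent with the enhanced levels reported for downward-refracting conditions.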
López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón
2014-01-15
Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed on mixture signals recorded by monitoring systems. These mixed signals hinder the individual analysis that is useful for taking actions to reduce and control environmental noise. This paper aims to separate individual noise sources from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis for improving results obtained in the monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals with a microphone array in semi-controlled environments. The developed method demonstrates considerable performance improvements in the identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.
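The core idea of blind source separation can be shown on a toy instantaneous-mixture problem: whiten the microphone signals, then search the one remaining rotation for maximally non-Gaussian outputs. This is a simplified stand-in for the paper's wavelet-domain method (no deconvolution, no wavelets), with synthetic signals playing the role of urban sources.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8000
t = np.arange(n)

# Two synthetic "urban" sources: a siren-like FM tone and a square wave.
s1 = np.sin(2 * np.pi * 0.01 * t + 3 * np.sin(2 * np.pi * 0.0005 * t))
s2 = np.sign(np.sin(2 * np.pi * 0.013 * t))
S = np.vstack([s1, s2])

A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = A @ S                                # two-microphone mixtures

# Whiten the mixtures so only an unknown rotation remains.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc

def nongauss(y):                         # |excess kurtosis| as contrast
    return abs(np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2)

def rot(th):
    return np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

# Grid-search the rotation angle maximizing total non-Gaussianity.
best = max(np.linspace(0, np.pi / 2, 180),
           key=lambda th: sum(nongauss(y) for y in rot(th) @ Z))
Y = rot(best) @ Z                        # estimates, up to scale/sign/order
```

Each recovered row of `Y` should correlate strongly with one true source; per-source levels can then be evaluated on the estimates, mirroring the paper's goal.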
Leventhall, Geoff
2007-01-01
Definitions of infrasound and low-frequency noise are discussed and the fuzzy boundary between them described. Infrasound, in its popular definition as sound below a frequency of 20 Hz, is clearly audible, the hearing threshold having been measured down to 1.5 Hz. The popular concept that sound below 20 Hz is inaudible is not correct. Sources of infrasound are in the range from very low-frequency atmospheric fluctuations up into the lower audio frequencies. These sources include natural occurrences, industrial installations, low-speed machinery, etc. Investigations of complaints of low-frequency noise often fail to measure any significant noise. This has led some complainants to conjecture that their perception arises from non-acoustic sources, such as electromagnetic radiation. Over the past 40 years, infrasound and low-frequency noise have attracted a great deal of adverse publicity on their effects on health, based mainly on media exaggerations and misunderstandings. A result of this has been that the public takes a one-dimensional view of infrasound, concerned only by its presence, whilst ignoring its low levels.
Aerodynamic sound of flow past an airfoil
NASA Technical Reports Server (NTRS)
Wang, Meng
1995-01-01
The long term objective of this project is to develop a computational method for predicting the noise of turbulence-airfoil interactions, particularly at the trailing edge. We seek to obtain the energy-containing features of the turbulent boundary layers and the near-wake using Navier-Stokes Simulation (LES or DNS), and then to calculate the far-field acoustic characteristics by means of acoustic analogy theories, using the simulation data as acoustic source functions. Two distinct types of noise can be emitted from airfoil trailing edges. The first, a tonal or narrowband sound caused by vortex shedding, is normally associated with blunt trailing edges, high angles of attack, or laminar flow airfoils. The second source is of broadband nature arising from the aeroacoustic scattering of turbulent eddies by the trailing edge. Due to its importance to airframe noise, rotor and propeller noise, etc., trailing edge noise has been the subject of extensive theoretical (e.g. Crighton & Leppington 1971; Howe 1978) as well as experimental investigations (e.g. Brooks & Hodgson 1981; Blake & Gershfeld 1988). A number of challenges exist concerning acoustic analogy based noise computations. These include the elimination of spurious sound caused by vortices crossing permeable computational boundaries in the wake, the treatment of noncompact source regions, and the accurate description of wave reflection by the solid surface and scattering near the edge. In addition, accurate turbulence statistics in the flow field are required for the evaluation of acoustic source functions. Major efforts to date have been focused on the first two challenges. To this end, a paradigm problem of laminar vortex shedding, generated by a two dimensional, uniform stream past a NACA0012 airfoil, is used to address the relevant numerical issues. 
Under the low Mach number approximation, the near-field flow quantities are obtained by solving the incompressible Navier-Stokes equations numerically at a chord Reynolds number of 10^4. The far-field noise is computed using Curle's extension to the Lighthill analogy (Curle 1955). An effective method for separating the physical noise source from spurious boundary contributions is developed. This allows an accurate evaluation of the Reynolds stress volume quadrupoles, in addition to the more readily computable surface dipoles due to the unsteady lift and drag. The effect of a noncompact source distribution on the far-field sound is assessed using an efficient integration scheme for the Curle integral, with full account of retarded-time variations. The numerical results confirm in quantitative terms that the far-field sound is dominated by the surface pressure dipoles at low Mach number. The techniques developed are applicable to a wide range of flows, including jets and mixing layers, where the Reynolds stress quadrupoles play a prominent or even dominant role in the overall sound generation.
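The compact-dipole limit of Curle's result, which the report's dipole-dominance conclusion reduces to, fits in one line: the far-field pressure follows the time derivative of the unsteady force on the body, evaluated at the retarded time t - r/c. The force history and numbers below are illustrative, not the simulation's data; the full computation evaluates noncompact retarded-time integrals instead.

```python
import numpy as np

c = 343.0                              # speed of sound (m/s)
fs = 10000.0
t = np.arange(0, 0.1, 1 / fs)

f_shed = 100.0                         # vortex-shedding frequency (Hz)
lift = 0.5 * np.sin(2 * np.pi * f_shed * t)   # unsteady lift force (N)
dFdt = np.gradient(lift, 1 / fs)       # force derivative (N/s)

r = 10.0                               # observer distance (m)
cos_theta = 1.0                        # observer on the dipole axis
# Compact surface dipole: p(t) = cos(theta) / (4 pi c r) * dF/dt(t - r/c)
p = cos_theta / (4 * np.pi * c * r) * np.interp(t - r / c, t, dFdt)
print(f"peak far-field pressure: {np.abs(p).max():.4f} Pa")
```

The retarded-time shift is what becomes nontrivial for noncompact sources: different parts of the surface radiate with different delays, so the integral can no longer be collapsed into a single force derivative.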
Lo, Kam W
2017-03-01
When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line at constant altitude and constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the flight parameters can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described and their performances are evaluated using both simulated data and real data.
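The IF observable the paper exploits can be modeled directly: for straight, level, constant-speed flight, the received frequency of a constant-frequency harmonic sweeps through f0 near the closest point of approach, with the retardation effect delaying reception by the propagation time. All parameter values below are illustrative.

```python
import numpy as np

c = 340.0        # speed of sound in air (m/s)
v = 85.0         # source speed (m/s)
h = 300.0        # altitude above the sensor (m)
f0 = 120.0       # emitted harmonic frequency (Hz)

t = np.linspace(-20, 20, 2001)      # emission time, overhead at t = 0
x = v * t                           # along-track source position (m)
r = np.sqrt(x**2 + h**2)            # slant range at emission (m)
rdot = v * x / r                    # range rate at emission (m/s)

f_obs = f0 / (1 + rdot / c)         # Doppler-shifted received frequency
t_rx = t + r / c                    # reception time (retardation effect)

# Approaching source heard high, receding source heard low; the shape
# of the sweep through f0 encodes v, h, and f0 jointly.
assert f_obs[0] > f0 > f_obs[-1]
```

Fitting this (t_rx, f_obs) curve to measured IF data, jointly with DOA measurements, is the essence of the estimation algorithms the paper evaluates.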
Development of a noncompact source theory with applications to helicopter rotors
NASA Technical Reports Server (NTRS)
Farassat, F.; Brown, T. J.
1976-01-01
A new formulation for determining the acoustic field of moving bodies, based on the acoustic analogy, is derived. The acoustic pressure is given as the sum of two integrals, one of which has a derivative with respect to time. The integrands are functions of the normal velocity and surface pressure of the body. A computer program based on this formulation was used to calculate acoustic pressure signatures for several helicopter rotors from experimental surface pressure data. Results are compared with those from compact source calculations. It is shown that noncompactness of steady sources on the rotor can account for the high harmonics of the pressure signature. Thickness noise is shown to be a significant source of sound, especially for blunt airfoils in regions where noncompact source theory should be applied.
Effects of high combustion chamber pressure on rocket noise environment
NASA Technical Reports Server (NTRS)
Pao, S. P.
1972-01-01
The acoustical environment for a high combustion chamber pressure engine was examined in detail, using both conventional and advanced theoretical analysis. The influence of elevated chamber pressure on the rocket noise environment was established, based on increases in exit velocity and flame temperature, and changes in basic engine dimensions. Compared to large rocket engines, the overall sound power level is found to be 1.5 dB higher if the thrust is the same. The peak Strouhal number shifted about one octave lower, to a value near 0.01. Data on apparent sound source location and directivity patterns are also presented.
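The quantities the abstract compares can be sketched with the classical rule of thumb that a fixed fraction of the jet's mechanical power is radiated as sound, and that the spectral peak follows a Strouhal scaling. Thrust, exhaust velocity, nozzle diameter, and the 0.5% acoustic efficiency below are illustrative assumptions, not the paper's engine data.

```python
import math

thrust = 6.7e6          # thrust (N), illustrative large-engine value
ve = 2600.0             # effective exhaust velocity (m/s)
eta = 0.005             # assumed acoustic efficiency (0.5%)

mech_power = 0.5 * thrust * ve          # jet mechanical power (W)
ac_power = eta * mech_power             # radiated acoustic power (W)
Lw = 10 * math.log10(ac_power / 1e-12)  # sound power level re 1 pW

# Peak frequency from a Strouhal number near 0.01, as found in the paper:
St, D = 0.01, 3.0                        # Strouhal number, nozzle diameter (m)
f_peak = St * ve / D                     # peak frequency (Hz)
print(f"Lw = {Lw:.1f} dB re 1 pW, spectral peak near {f_peak:.1f} Hz")
```

At fixed thrust, raising chamber pressure raises ve, so the mechanical power (0.5·F·ve) grows, which is one way to see how a higher sound power level can result; the Strouhal relation shows why the peak sits at very low frequency for large nozzles.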
Airborne sound insulation evaluation and flanking path prediction of coupled room
NASA Astrophysics Data System (ADS)
Tassia, R. D.; Asmoro, W. A.; Arifianto, D.
2016-11-01
One of the parameters used to assess acoustic comfort is the insulation value of the partitions in a classroom. The insulation value can be expressed as the sound transmission loss, which is converted into a single-number rating, the weighted sound reduction index (Rw, DnT,w), with additional spectrum adaptation terms for low frequencies (C, Ctr). In this study, measurements were performed at two positions at each point using a BSWA microphone and a dodecahedron loudspeaker as the sound source. The results of the field measurements indicate an acoustic insulation value (DnT,w + C) of 19.6 dB. The partition wall therefore does not meet the standard, which requires DnT,w + C > 51 dB, and needs to be redesigned to improve the acoustic insulation of the classroom. The redesign considered gypsum board, plasterboard, cement board, and PVC as replacement materials. All of the materials were simulated in accordance with the established standards. The best insulation is provided by cement board, with an insulation value of 69 dB for a thickness of 12.5 mm on each side and 50 mm of absorber material. Many factors increase the acoustic insulation value, such as the panel thickness, the addition of absorber material, the density, and the Poisson's ratio of the material. The flanking paths can be estimated from the noise reduction values at each measurement point in the classroom. Based on the data obtained, there is no significant change in noise reduction from point to point, so flanking paths do not affect the sound transmission in the classroom.
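A back-of-the-envelope mass-law sketch shows why panel thickness and density drive the insulation values discussed above. The densities are illustrative assumptions, not the materials' measured values, and the formula is the usual field-incidence approximation, valid well below the critical frequency.

```python
import math

def mass_law_tl(surface_density, freq):
    """Approximate transmission loss (dB) of a single panel.

    surface_density in kg/m^2, freq in Hz.
    Field-incidence mass law: TL = 20 * log10(m * f) - 47 dB.
    """
    return 20 * math.log10(surface_density * freq) - 47

# 12.5 mm panels; densities are illustrative (cement board ~1300 kg/m^3,
# gypsum board ~800 kg/m^3):
cement = 0.0125 * 1300      # 16.25 kg/m^2
gypsum = 0.0125 * 800       # 10.0 kg/m^2

for f in (125, 500, 2000):
    print(f, round(mass_law_tl(cement, f), 1), round(mass_law_tl(gypsum, f), 1))
```

Doubling the surface density buys about 6 dB, which is why the denser cement board outperforms gypsum at equal thickness; double-leaf construction with an absorber cavity, as in the redesign, adds further gains beyond the mass law.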
Large Field of View PIV Measurements of Air Entrainment by SLS SMAT Water Sound Suppression System
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Pothos, Stamatios; Bissell, Dan
2015-11-01
Water-based sound suppression systems have been used to reduce the acoustic impact of space vehicle launches. Water flows at a high rate during launch in order to suppress Engine Generated Acoustics and other potentially damaging sources of noise. For the Space Shuttle, peak flow rates exceeded 900,000 gallons per minute. Such large water flow rates have the potential to induce substantial entrainment of the surrounding air, affecting the launch conditions and generating airflow around the launch vehicle. Validation testing is necessary to quantify this impact for future space launch systems. In this study, PIV measurements were performed to map the flow field above the SMAT sub-scale launch stand, and the air entrainment effects generated by a water-based sound suppression system were studied. Mean and fluctuating fluid velocities were mapped up to 1 m above the test stand deck and compared to simulation results. Measurements were performed in collaboration with NASA MSFC.
NASA Technical Reports Server (NTRS)
Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)
2000-01-01
In this report, both a frequency-domain method for creating high-level harmonic excitation and a time-domain inverse method for creating large pulses in a duct are developed. To create controllable, high-level sound, an axial array of six JBL-2485 compression drivers was used. The downstream pressure is treated as the input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level of over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre-cut-on loading effect).
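The compensation step, filtering the input voltages so that the source and duct dynamics are undone, can be illustrated with a generic regularized frequency-domain inverse filter. This is a sketch of the general idea, not the authors' method; the regularization constant `eps` is my addition to keep the inverse bounded near zeros of the response:

```python
import numpy as np

def compensated_input(desired, h, eps=1e-3):
    """Regularized inverse filter: X(f) = D(f) * conj(H(f)) / (|H(f)|^2 + eps),
    where H is the measured source/duct impulse response h."""
    n = len(desired)
    D = np.fft.rfft(desired)
    H = np.fft.rfft(h, n=n)
    X = D * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(X, n=n)

# Toy check: if the source/duct path is a pure 5-sample delay, driving it with
# the compensated input reproduces the desired pulse almost exactly.
n = 64
h = np.zeros(n); h[5] = 1.0
desired = np.zeros(n); desired[32] = 1.0
x = compensated_input(desired, h)
arrived = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n=n)
```

The same structure extends to an array of drivers by stacking one response per driver and solving the resulting least-squares problem per frequency bin.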
Perceptual assessment of quality of urban soundscapes with combined noise sources and water sounds.
Jeon, Jin Yong; Lee, Pyoung Jik; You, Jin; Kang, Jian
2010-03-01
In this study, urban soundscapes containing combined noise sources were evaluated through field surveys and laboratory experiments. The effect of water sounds on masking urban noises was then examined in order to enhance the soundscape perception. Field surveys in 16 urban spaces were conducted through soundwalking to evaluate the annoyance of combined noise sources. Synthesis curves were derived for the relationships between noise levels and the percentage of highly annoyed (%HA) and the percentage of annoyed (%A) for the combined noise sources. Qualitative analysis was also made using semantic scales for evaluating the quality of the soundscape, and it was shown that the perception of acoustic comfort and loudness was strongly related to the annoyance. A laboratory auditory experiment was then conducted in order to quantify the total annoyance caused by road traffic noise and four types of construction noise. It was shown that the annoyance ratings were related to the types of construction noise in combination with road traffic noise and the level of the road traffic noise. Finally, water sounds were determined to be the best sounds to use for enhancing the urban soundscape. The level of the water sounds should be similar to or not less than 3 dB below the level of the urban noises.
Sprague, Mark W; Luczkovich, Joseph J
2016-01-01
This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
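A miniature 1-D analogue of the staggered pressure/velocity grids can illustrate the FDTD scheme; this sketch is not the authors' 3-D Cartesian or cylindrical code, and the grid size, water-like medium, and Gaussian source are arbitrary choices:

```python
import numpy as np

def fdtd_1d(nx=200, nt=150, c=1500.0, rho=1000.0, dx=0.5):
    """Toy 1-D staggered-grid FDTD with separate pressure (p) and particle
    velocity (v) grids; water-like sound speed and density, rigid ends."""
    dt = 0.5 * dx / c                        # satisfies the CFL stability limit
    p = np.zeros(nx)
    v = np.zeros(nx + 1)                     # staggered half a cell from p
    for n in range(nt):
        p[nx // 2] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source
        v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])   # dv/dt = -(1/rho) dp/dx
        p -= rho * c * c * dt / dx * (v[1:] - v[:-1])   # dp/dt = -rho c^2 dv/dx
    return p

snapshot = fdtd_1d()   # two pulses travelling away from the centre
```

Varying `c` and `rho` per cell is what lets such models represent depth-dependent water and sediment properties, as the abstract describes.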
Aircraft laser sensing of sound velocity in water - Brillouin scattering
NASA Technical Reports Server (NTRS)
Hickman, G. D.; Harding, John M.; Carnes, Michael; Pressman, AL; Kattawar, George W.; Fry, Edward S.
1991-01-01
A real-time data source for sound speed in the upper 100 m has been proposed for exploratory development. This data source is planned to be generated via a ship- or aircraft-mounted pulsed optical laser using the spontaneous Brillouin scattering technique. The system should be capable (from a single 10 ns, 500 mJ pulse) of yielding range-resolved sound speed profiles in water to depths of 75-100 m with an accuracy of 1 m/s. The 100 m profiles will provide the capability of rapidly monitoring the upper-ocean vertical structure. They will also provide an extensive subsurface data source for existing real-time, operational ocean nowcast/forecast systems.
McClaine, Elizabeth M.; Yin, Tom C. T.
2010-01-01
The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848
Tollin, Daniel J; McClaine, Elizabeth M; Yin, Tom C T
2010-01-01
The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a "phantom" sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved.
Density Fluctuation in Asymmetric Nozzle Plumes and Correlation with Far Field Noise
NASA Technical Reports Server (NTRS)
Panda, J.; Zaman, K. B. M. Q.
2001-01-01
A comparative experimental study of air density fluctuations in the unheated plumes of circular, 4-tabbed circular, chevron circular, and 10-lobed rectangular nozzles was performed at a fixed Mach number of 0.95 using a recently developed Rayleigh-scattering-based technique. Subsequently, the flow density fluctuations were cross-correlated with the far-field sound pressure fluctuations to determine the sources of acoustic emission. The nearly identical noise spectra from the baseline circular and the chevron nozzles are found to be in agreement with the similarity in spreading, turbulence fluctuations, and flow-sound correlations measured in the plumes. The lobed nozzle produced the least low-frequency noise, in agreement with the weakest overall density fluctuations and flow-sound correlation. The tabbed nozzle took an intermediate position in the hierarchy of noise generation, intensity of turbulent fluctuation, and flow-sound correlation. Some of the features of the 4-tabbed nozzle are found to be explainable in terms of the splitting of the jet into a large central core and 4 side jetlets.
Investigation of hydraulic transmission noise sources
NASA Astrophysics Data System (ADS)
Klop, Richard J.
Advanced hydrostatic transmissions and hydraulic hybrids show potential in new market segments such as commercial vehicles and passenger cars. Such new applications regard low noise generation as a high priority, thus demanding new, quiet hydrostatic transmission designs. In this thesis, the aim is to investigate the noise sources of hydrostatic transmissions in order to discover strategies for designing compact and quiet solutions. A model has been developed to capture the interaction of a pump and motor working in a hydrostatic transmission and to predict the overall noise sources. This model allows a designer to compare noise sources for various configurations and to design compact and inherently quiet solutions. The model describes the dynamics of the system by coupling lumped-parameter pump and motor models with a one-dimensional unsteady compressible transmission line model. The model has been verified with dynamic pressure measurements in the line over a wide operating range for several system structures. Simulation studies were performed illustrating the sensitivities of several design variables and the potential of the model for designing transmissions with minimal noise sources. A semi-anechoic chamber suitable for sound intensity measurements, from which sound power can be derived, has been designed and constructed. Measurements proved the potential to reduce audible noise by predicting and reducing both noise sources. Sound power measurements were conducted on a series hybrid transmission test bench to validate the model and compare predicted noise sources with measured sound power.
Yan, W Y; Li, L; Yang, Y G; Lin, X L; Wu, J Z
2016-08-01
We designed a computer-based respiratory sound analysis system to identify normal pediatric lung sounds, and aimed to verify its validity. First, we downloaded standard lung sounds from a network database (website: http://www.easyauscultation.com/lung-sounds-reference-guide) and recorded 3 samples of abnormal lung sounds (rhonchi, wheeze, and crackles) from three patients of the Department of Pediatrics, the First Affiliated Hospital of Xiamen University. We regarded these lung sounds as "reference lung sounds". The "test lung sounds" were recorded from 29 children from the Kindergarten of Xiamen University. Lung sounds were recorded with a portable electronic stethoscope, and valid lung sounds were selected by manual identification. We used Mel-frequency cepstral coefficients (MFCC) to extract lung sound features and dynamic time warping (DTW) for signal classification. We had 39 standard lung sounds and recorded 58 test lung sounds. The system performed 58 lung sound recognitions, with 52 correct identifications and 6 errors, giving an accuracy of 89.7%. Based on MFCC and DTW, our computer-based respiratory sound analysis system can effectively identify healthy lung sounds in children (accuracy of 89.7%), demonstrating the reliability of the lung sound analysis system.
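The MFCC-plus-DTW pipeline can be sketched generically: each recording becomes a per-frame feature matrix (assumed here to come from any MFCC implementation), and a test recording gets the label of the reference with the smallest DTW distance. This is an illustrative reconstruction, not the authors' code; the toy 1-D feature tracks stand in for real MFCC matrices:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (rows = frames, columns = coefficients), with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(test_seq, references):
    """Return the label of the reference with the smallest DTW distance."""
    return min(references, key=lambda label: dtw_distance(test_seq, references[label]))

# Stand-in 1-D feature tracks (real use would pass MFCC matrices):
normal = np.linspace(0, 1, 20).reshape(-1, 1)
wheeze = np.sin(np.linspace(0, 6, 20)).reshape(-1, 1)
refs = {"normal": normal, "wheeze": wheeze}
test_seq = np.linspace(0, 1, 30).reshape(-1, 1)   # a time-stretched 'normal'
```

DTW's warping path is what lets a 30-frame test recording match a 20-frame reference despite differing breath durations.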
Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano
2013-01-01
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.
NASA Astrophysics Data System (ADS)
Spence, Heather Ruth
Sound is the primary sensory modality for dolphins, yet policies mitigating anthropogenic sound exposure are limited for wild populations, and even fewer noise policies or guidelines have been developed for governing dolphin welfare under human care. Concerns have been raised that dolphins under human care live in facilities that are either too noisy or too acoustically sterile. However, these claims have not been evaluated to characterize facility soundscapes and, further, how they compare to wild soundscapes. The soundscape of a wild dolphin habitat off the coast of Quintana Roo, Mexico was characterized based on Passive Acoustic Monitoring (PAM) recordings over one year. Snapping shrimp were persistent and broadband, following a diel pattern. Fish sound production was pulsed and prominent at low frequencies (100-1000 Hz), and abiotic surface wave action contributed to noise at higher frequencies (15-28 kHz). Boat motors were the main anthropogenic sound source. While sporadic, boat motors were responsible for large spikes in the noise, sometimes exceeding the ambient noise (in the absence of a boat) by 20 dB root-mean-squared sound pressure level, and potentially more at closer distances. Boat motor sounds can potentially mask cues and communication sounds of dolphins. The soundscapes of four acoustically distinct outdoor dolphin facilities in Quintana Roo, Mexico were also characterized based on PAM, and the findings were compared with one another and with the measurements from the wild dolphin habitat. Recordings were made for at least 24 hours to encompass the range of daily activities. The four facilities differed in the non-dolphin species present (biological sounds), bathymetry complexity, and method of water circulation. It was hypothesized that the greater the biological and physical differences of a pool from the ocean habitat, the greater the acoustic differences would be from the natural environment.
Spectral analysis and audio playback revealed that the site most biologically and physically distinct from the ocean habitat also differed greatly from the other sites acoustically, with the most common and high amplitude sound being pump noise versus biological sounds at the other sites. Overall the dolphin facilities were neither clearly noisier nor more sterile than the wild site, but rather differed in particular characteristics. The findings are encouraging for dolphin welfare for several reasons. Sound levels measured were unlikely to cause threshold shifts in hearing. At three of four facilities, prominent biological sounds in the wild site -- snapping shrimp and fish sounds -- were present, meaning that the dolphins at these facilities are experiencing biotic features of the soundscape they would experience in the wild. Additionally, the main anthropogenic sounds experienced at the facilities (construction and cleaning sounds) did not reach the levels of the anthropogenic sounds experienced at the wild site (boat motor sounds), and the highest noise levels for anthropogenic sounds fall outside the dolphins' most sensitive range of hearing. However, there are anthropogenic contributors to the soundscape that are of particular interest and possible concern that should be investigated further, particularly pump noise and periodic or intermittent construction noise. These factors need to be considered on a facility-by-facility basis and appropriate mitigation procedures incorporated in animal handling to mitigate potential responses to planned or anticipated sound producing events, e.g. animal relocation or buffering sound producing activities. The central role of bioacoustics for dolphins means that PAM is a basic life support requirement along with water and food testing. Periodic noise is of highest concern, and PAM is needed to inform mitigation of noise from periodic sources. 
Priority actions include more widespread and long-term standardized monitoring; further research on habituation, preference, coupling, and pool acoustics; implementation of acoustics training; standardization of measurements; and improved information access.
On the Possible Detection of Lightning Storms by Elephants
Kelley, Michael C.; Garstang, Michael
2013-01-01
Simple Summary We use data similar to that taken by the International Monitoring System for the detection of nuclear explosions to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest to management and conservation. Abstract Theoretical calculations suggest that sounds produced by thunderstorms and detected by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions at distances ≥100 km are at sound pressure levels equal to or greater than 6 × 10⁻³ Pa. Such sound pressure levels are well within the range of elephant hearing. The frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but could also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406
The Confirmation of the Inverse Square Law Using Diffraction Gratings
ERIC Educational Resources Information Center
Papacosta, Pangratios; Linscheid, Nathan
2014-01-01
Understanding the inverse square law, that is, how the intensity of light or sound varies with distance, presents conceptual and mathematical challenges. Students know intuitively that intensity decreases with distance: a light source appears dimmer and a sound gets fainter as the distance from the source increases. The difficulty is in…
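The relationship the experiment verifies is I ∝ 1/r²; a one-line sketch makes the scaling concrete:

```python
def relative_intensity(r, r0=1.0):
    """Inverse-square law: intensity at distance r relative to reference r0."""
    return (r0 / r) ** 2

# Doubling the distance quarters the intensity:
print(relative_intensity(2.0))  # → 0.25
```

Equivalently, for sound this corresponds to a 6 dB drop in level per doubling of distance from a point source.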
Beranek, Leo
2011-05-01
The parameter "strength of sound," G, is closely related to loudness. Its magnitude depends inversely on the total sound absorption in a room. By comparison, the reverberation time (RT) is both inversely related to the total sound absorption in a hall and directly related to its cubic volume. Hence, G and RT in combination are vital in planning the acoustics of a concert hall. A newly proposed "Bass Index" is related to the loudness of the bass sound and equals the value of G at 125 Hz in decibels minus its value at mid-frequencies. Listener envelopment (LEV) is shown, for most halls, to be directly related to the mid-frequency value of G. The broadening of sound, i.e., the apparent source width (ASW), is given by the degree of source broadening (DSB), which is determined from the combined effect of early lateral reflections, as measured by the binaural quality index (BQI), and strength G. The optimum values and limits of these parameters are discussed.
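The Bass Index defined above is a simple difference of decibel levels; a sketch, with illustrative G values rather than measured ones:

```python
def bass_index(g_125_db, g_mid_db):
    """Bass Index as defined above: strength G at 125 Hz (dB)
    minus G at mid-frequencies (dB)."""
    return g_125_db - g_mid_db

# Illustrative hall values only:
print(bass_index(4.5, 3.0))  # → 1.5
```

A positive index indicates a hall whose low-frequency strength exceeds its mid-frequency strength, i.e. a relatively loud bass.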
Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.
2017-01-01
Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283
Clark, Christopher James
2014-01-01
Models of character evolution often assume a single mode of evolutionary change, such as continuous, or discrete. Here I provide an example in which a character exhibits both types of change. Hummingbirds in the genus Selasphorus produce sound with fluttering tail-feathers during courtship. The ancestral character state within Selasphorus is production of sound with an inner tail-feather, R2, in which the sound usually evolves gradually. Calliope and Allen's Hummingbirds have evolved autapomorphic acoustic mechanisms that involve feather-feather interactions. I develop a source-filter model of these interactions. The ‘source’ comprises feather(s) that are both necessary and sufficient for sound production, and are aerodynamically coupled to neighboring feathers, which act as filters. Filters are unnecessary or insufficient for sound production, but may evolve to become sources. Allen's Hummingbird has evolved to produce sound with two sources, one with feather R3, another frequency-modulated sound with R4, and their interaction frequencies. Allen's R2 retains the ancestral character state, a ∼1 kHz “ghost” fundamental frequency masked by R3, which is revealed when R3 is experimentally removed. In the ancestor to Allen's Hummingbird, the dominant frequency has ‘hopped’ to the second harmonic without passing through intermediate frequencies. This demonstrates that although the fundamental frequency of a communication sound may usually evolve gradually, occasional jumps from one character state to another can occur in a discrete fashion. Accordingly, mapping acoustic characters on a phylogeny may produce misleading results if the physical mechanism of production is not known. PMID:24722049
Automatic adventitious respiratory sound analysis: A systematic review
Bowyer, Stuart; Rodriguez-Villegas, Esther
2017-01-01
Background Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. Objective To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. Data sources A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Study selection Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Data extraction Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. Data synthesis A total of 77 reports from the literature were included in this review. 
55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub and squawk, as well as on the underlying pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Limitations Direct comparison of the performance of the surveyed works cannot be made because the input data used by each differed. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. Conclusion A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases. PMID:28552969
Kogan, Pablo; Arenas, Jorge P; Bermejo, Fernando; Hinalaf, María; Turra, Bruno
2018-06-13
Urban soundscapes are dynamic and complex multivariable environmental systems. Soundscapes can be organized into three main entities containing the multiple variables: the Experienced Environment (EE), the Acoustic Environment (AE), and the Extra-Acoustic Environment (XE). This work applies a multidimensional and synchronic data-collecting methodology at eight urban environments in the city of Córdoba, Argentina. The EE was assessed by means of surveys, the AE by acoustic measurements and audio recordings, and the XE by photos, video, and complementary sources. In total, 39 measurement locations were considered, where data corresponding to 61 AE and 203 EE were collected. Multivariate analysis and GIS techniques were used for data processing. The types of sound sources perceived and their extents are among the collected variables belonging to the EE, i.e. traffic, people, natural sounds, and others. The sources explaining most of the variance were traffic noise and natural sounds. Thus, a Green Soundscape Index (GSI) is defined here as the ratio of the perceived extent of natural sounds to that of traffic noise. Collected data were divided into three ranges according to GSI value: 1) perceptual predominance of traffic noise, 2) balanced perception, and 3) perceptual predominance of natural sounds. For each group, three additional variables from the EE and three from the AE were examined, which showed significant differences, especially between ranges 1 and 2 versus range 3. These results confirm the key role of perceiving natural sounds in a town environment and also support the proposal of the GSI as a valuable indicator to classify urban soundscapes. In addition, the collected GSI-related data significantly help to assess the overall soundscape. It is noted that this proposed simple perceptual index not only allows one to assess and classify urban soundscapes but also contributes greatly toward a technique for separating environmental sound sources.
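The GSI and its three-range classification can be sketched directly from the definition; note that the numeric cut-points between ranges below are placeholders of my own, since the study's thresholds are not given here:

```python
def green_soundscape_index(natural_extent, traffic_extent):
    """GSI: ratio of the perceived extent of natural sounds to traffic noise."""
    return natural_extent / traffic_extent

def gsi_range(gsi, low=0.8, high=1.2):
    """Classify a GSI value into the three perceptual ranges.
    The cut-points low/high are illustrative assumptions, not the study's."""
    if gsi < low:
        return 1   # perceptual predominance of traffic noise
    if gsi <= high:
        return 2   # balanced perception
    return 3       # perceptual predominance of natural sounds
```

For example, a site where natural sounds are perceived twice as extensively as traffic noise falls in range 3.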
Sound produced by an oscillating arc in a high-pressure gas
NASA Astrophysics Data System (ADS)
Popov, Fedor K.; Shneider, Mikhail N.
2017-08-01
We suggest a simple theory to describe the sound generated by small periodic perturbations of a cylindrical arc in a dense gas. The theoretical analysis was done within the framework of the non-self-consistent channel arc model, supplemented with time-dependent gas dynamic equations. It is shown that an arc with power oscillations with amplitudes on the order of several percent is a source of sound whose intensity is comparable with the external ultrasound sources used in experiments to increase the yield of nanoparticles in high-pressure arc systems for nanoparticle synthesis.
High-frequency monopole sound source for anechoic chamber qualification
NASA Astrophysics Data System (ADS)
Saussus, Patrick; Cunefare, Kenneth A.
2003-04-01
Anechoic chamber qualification procedures require the use of an omnidirectional monopole sound source. Required characteristics for these monopole sources are explicitly listed in ISO 3745. Building a high-frequency monopole source that meets these characteristics has proved difficult due to the size limitations imposed by small wavelengths at high frequency. A prototype design developed for use in hemianechoic chambers employs telescoping tubes, which act as an inverse horn. This same design can be used in anechoic chambers, with minor adaptations. A series of gradually decreasing brass telescoping tubes is attached to the throat of a well-insulated high-frequency compression driver. Therefore, all of the sound emitted from the driver travels through the horn and exits through an opening of approximately 2.5 mm. Directivity test data show that this design meets all of the requirements set forth by ISO 3745.
NASA Astrophysics Data System (ADS)
Murphy, Paul G.
River hydrokinetic turbines may be an economical alternative to traditional energy sources for small communities on Alaskan rivers. However, there is concern that sound from these turbines could affect sockeye salmon (Oncorhynchus nerka), an important resource for small, subsistence-based communities, commercial fishermen, and recreational anglers. The hearing sensitivity of sockeye salmon has not been quantified, but behavioral responses to sounds at frequencies less than a few hundred hertz have been documented for Atlantic salmon (Salmo salar), and particle motion is thought to be the primary mode of stimulation. Methods of measuring acoustic particle motion are well established, but have rarely been necessary in energetic areas, such as river and tidal current environments. In this study, the acoustic pressure in the vicinity of an operating river current turbine is measured using a freely drifting hydrophone array. Analysis of turbine sound reveals tones that vary in frequency and magnitude with turbine rotation rate and that sockeye salmon may be able to sense. In addition to pressure, the vertical components of particle acceleration and velocity are estimated by calculating the finite difference of the pressure signals from the hydrophone array. A method of determining source bearing using an array of hydrophones is explored. The benefits and challenges of deploying drifting hydrophone arrays for marine renewable energy converter monitoring are discussed.
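The finite-difference estimate of vertical particle motion from a pair of vertically separated hydrophones can be sketched as follows. This is a simplified illustration based on the linearized Euler equation (rho · a_z = −∂p/∂z); the function name and the default water density are assumptions, and the study's actual processing may differ.

```python
import numpy as np

def vertical_particle_motion(p_upper, p_lower, dz, fs, rho=1025.0):
    """Estimate vertical particle acceleration and velocity from two
    vertically separated hydrophone pressure records.

    The vertical pressure gradient is approximated by the difference of
    the two records divided by the sensor spacing `dz` (m); `fs` is the
    sampling rate (Hz) and `rho` the water density (kg/m^3).
    """
    a_z = -(np.asarray(p_upper) - np.asarray(p_lower)) / (rho * dz)
    # Velocity by running time integration (rectangle rule).
    v_z = np.cumsum(a_z) / fs
    return a_z, v_z
```

A constant 1 Pa difference across a 1 m spacing in fresh water (rho = 1000 kg/m^3) gives a constant downward acceleration of 0.001 m/s^2, with velocity growing linearly in time.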
McQuinn, Ian H; Lesage, Véronique; Carrier, Dominic; Larrivée, Geneviève; Samson, Yves; Chartrand, Sylvain; Michaud, Robert; Theriault, James
2011-12-01
The threatened resident beluga population of the St. Lawrence Estuary shares the Saguenay-St. Lawrence Marine Park with significant anthropogenic noise sources, including marine commercial traffic and a well-established, vessel-based whale-watching industry. Frequency-dependent (FD) weighting was used to approximate beluga hearing sensitivity to determine how noise exposure varied in time and space at six sites of high beluga summer residency. The relative contribution of each source to acoustic habitat degradation was estimated by measuring noise levels throughout the summer and noise signatures of typical vessel classes with respect to traffic volume and sound propagation characteristics. Rigid-hulled inflatable boats were the dominant noise source with respect to estimated beluga hearing sensitivity in the studied habitats due to their high occurrence and proximity, high correlation with site-specific FD-weighted sound levels, and the dominance of mid-frequencies (0.3-23 kHz) in their noise signatures. Median C-weighted sound pressure level (SPL(RMS)) had a range of 19 dB re 1 μPa between the noisiest and quietest sites. Broadband SPL(RMS) exceeded 120 dB re 1 μPa 8-32% of the time depending on the site. Impacts of these noise levels on St. Lawrence beluga will depend on exposure recurrence and individual responsiveness. © 2011 Acoustical Society of America
Global Marine Gravity and Bathymetry at 1-Minute Resolution
NASA Astrophysics Data System (ADS)
Sandwell, D. T.; Smith, W. H.
2008-12-01
We have developed global gravity and bathymetry grids at 1-minute resolution. Three approaches are used to reduce the error in the satellite-derived marine gravity anomalies. First, we have retracked the raw waveforms from the ERS-1 and Geosat/GM missions, resulting in improvements in range precision of 40% and 27%, respectively. Second, we have used the recently published EGM2008 global gravity model as a reference field to provide a seamless gravity transition from land to ocean. Third, we have used a biharmonic spline interpolation method to construct residual vertical deflection grids. Comparisons between shipboard gravity and the global gravity grid show errors ranging from 2.0 mGal in the Gulf of Mexico to 4.0 mGal in areas with rugged seafloor topography. The largest errors occur on the crests of narrow large seamounts. The bathymetry grid is based on prediction from satellite gravity and available ship soundings. Global soundings were assembled from a wide variety of sources including NGDC/GEODAS, NOAA Coastal Relief, CCOM, IFREMER, JAMSTEC, NSF Polar Programs, UKHO, LDEO, HIG, SIO and numerous miscellaneous contributions. The National Geospatial-Intelligence Agency and other volunteering hydrographic offices within the International Hydrographic Organization provided significant global shallow-water (< 300 m) soundings derived from their nautical charts. All soundings were converted to a common format and were hand-edited in relation to a smooth bathymetric model. Land elevations and shoreline location are based on a combination of SRTM30, GTOPO30, and ICESAT data. A new feature of the bathymetry grid is a matching grid of source identification numbers that enables one to establish the origin of the depth estimate in each grid cell. Both the gravity and bathymetry grids are freely available.
Measurement of sound emitted by flying projectiles with aeroacoustic sources
NASA Technical Reports Server (NTRS)
Cho, Y. I.; Shakkottai, P.; Harstad, K. G.; Back, L. H.
1988-01-01
Training projectiles with axisymmetric ring cavities that produce intense tones in an airstream were shot in a straight-line trajectory. A ground-based microphone was used to obtain the angular distribution of sound intensity produced from the flying projectile. Data reduction required calculation of Doppler and attenuation factors. Also, the directional sensitivity of the ground-mounted microphone was measured and used in the data reduction. A rapid angular variation of sound intensity produced from the projectile was found that can be used to plot an intensity contour map on the ground. A full-scale field test confirmed the validity of the aeroacoustic concept of producing a relatively intense whistle from the projectile, and the usefulness of short-range flight tests that yield acoustic data free of uncertainties associated with diffraction, reflection, and refraction at jet boundaries in free-jet tests.
NASA Technical Reports Server (NTRS)
Meredith, R. W.; Zuckerwar, A. J.
1984-01-01
A low-cost digital system based on an 8-bit Apple II microcomputer has been designed to provide on-line control, data acquisition, and evaluation of sound absorption measurements in gases. The measurements are conducted in a resonant tube, in which an acoustical standing wave is excited, the excitation removed, and the sound absorption evaluated from the free decay envelope. The free decay is initiated from the computer keyboard after the standing wave is established, and the microphone response signal is the source of the analog signal for the A/D converter. The acquisition software is written in ASSEMBLY language and the evaluation software in BASIC. This paper describes the acoustical measurement, hardware, software, and system performance and presents measurements of sound absorption in air as an example.
Riva, Giuseppe; Carelli, Laura; Gaggioli, Andrea; Gorini, Alessandra; Vigna, Cinzia; Corsi, Riccardo; Faletti, Gianluca; Vezzadini, Luca
2009-01-01
At MMVR 2007 we presented NeuroVR (http://www.neurovr.org), a free virtual reality platform based on open-source software. The software allows non-expert users to adapt the content of 14 pre-designed virtual environments to the specific needs of the clinical or experimental setting. Following feedback from the 700 users who downloaded the first version, we developed a new version, NeuroVR 1.5, which makes it easier for the therapist to enhance the patient's feeling of familiarity and intimacy with the virtual scene by using external sounds, photos, or videos. Specifically, the new version now includes full sound support and the ability to trigger external sounds and videos using the keyboard. The outcomes of different trials made using NeuroVR will be presented and discussed.
Sound segregation via embedded repetition is robust to inattention.
Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria
2016-03-01
The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual-task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved.
Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano
2013-01-01
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs, as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-source signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.
PMID:24194828
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xuebao, E-mail: lxb08357x@ncepu.edu.cn; Cui, Xiang, E-mail: x.cui@ncepu.edu.cn; Ma, Wenzuo
The corona-generated audible noise (AN) has become one of the decisive factors in the design of high voltage direct current (HVDC) transmission lines. The AN from transmission lines can be attributed to sound pressure pulses generated by the multiple corona sources formed on the conductors. In this paper, the detailed time-domain characteristics of the sound pressure pulses generated by DC corona discharges formed over the surface of a stranded conductor are investigated systematically in a laboratory setting using a corona cage structure. The amplitudes of the sound pressure pulses and their time intervals are extracted by observing a direct correlation between corona current pulses and corona-generated sound pressure pulses. Based on the statistical characteristics, a stochastic model is presented for simulating the sound pressure pulses due to DC corona discharges occurring on conductors. The proposed stochastic model is validated by comparing the calculated and measured A-weighted sound pressure level (SPL). The proposed model is then used to analyze the influence of the pulse amplitudes and pulse rate on the SPL. Furthermore, a mathematical relationship is found between the SPL and the conductor diameter, electric field, and radial distance.
Statistics of natural binaural sounds.
Młynarski, Wiktor; Jost, Jürgen
2014-01-01
Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene had not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than would be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves at each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
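As a rough illustration of the binaural cues analyzed above, ILD and IPD can be computed per frequency bin from a single stereo frame. This is a minimal FFT-based sketch of my own (function name and parameters are assumptions); the study's analysis used narrowly tuned frequency channels, which a gammatone filterbank with time-frame averaging would model more faithfully.

```python
import numpy as np

def binaural_cues(left, right, fs, nfft=1024):
    """Per-frequency ILD (dB) and IPD (radians) from one stereo frame.

    ILD is the log magnitude ratio of the ear spectra; IPD is the
    phase of the cross-spectrum left * conj(right).
    """
    L = np.fft.rfft(left, nfft)
    R = np.fft.rfft(right, nfft)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    eps = 1e-12  # avoid log/division by zero in silent bins
    ild = 20.0 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))
    ipd = np.angle(L * np.conj(R))
    return freqs, ild, ipd
```

For a tone that is simply attenuated by half in the right ear, the ILD at the tone's bin is about 6 dB and the IPD is zero.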
Determination of equivalent sound speed profiles for ray tracing in near-ground sound propagation.
Prospathopoulos, John M; Voutsinas, Spyros G
2007-09-01
The determination of appropriate sound speed profiles for modeling near-ground propagation with ray tracing is investigated using a model capable of performing axisymmetric calculations of the sound field around an isolated source. Eigenrays are traced using an iterative procedure that integrates the trajectory equations for each ray launched from the source in a specific direction. Sound energy losses are calculated by introducing into the equations appropriate coefficients representing the effects of ground and atmospheric absorption and the interaction with atmospheric turbulence. The model is validated against analytical and numerical predictions of other methodologies for simple cases, as well as against measurements for nonrefractive atmospheric environments. A systematic investigation of near-ground propagation in downward- and upward-refracting atmospheres is made using experimental data. Guidelines for the suitable simulation of the wind velocity profile are derived by correlating predictions with measurements.
NASA Astrophysics Data System (ADS)
Li, Xuebao; Cui, Xiang; Lu, Tiebing; Wang, Donglai
2017-10-01
The directivity and lateral profile of corona-generated audible noise (AN) from a single corona source are measured in experiments carried out in a semi-anechoic laboratory. The experimental results show that the waveform of corona-generated AN consists of a series of random sound pressure pulses whose amplitudes decrease with increasing measurement distance. A single corona source can be regarded as a non-directional AN source, and the A-weighted SPL (sound pressure level) decreases by 6 dB(A) for each doubling of the measurement distance. Qualitative explanations for treating the single corona source as a point source are then given on the basis of Ingard's theory of sound generation in corona discharge. Furthermore, we take the ground reflection and air attenuation into consideration to reconstruct the propagation features of AN from the single corona source. The calculated results agree well with the measurements, which validates the propagation model. Finally, the influence of the ground reflection on the SPL is presented.
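The 6 dB(A)-per-doubling behaviour reported above is the signature of free-field spherical spreading from a point source, which can be written as a one-line sketch. This is illustrative only: it omits the ground reflection and air attenuation that the paper's full propagation model includes, and the function name is an assumption.

```python
import math

def point_source_spl(spl_ref, r_ref, r):
    """SPL of a point source at distance r, given a reference level
    spl_ref measured at distance r_ref, assuming free-field spherical
    spreading (20 log10 distance law, i.e. -6 dB per doubling)."""
    return spl_ref - 20.0 * math.log10(r / r_ref)
```

For example, a source measured at 60 dB at 1 m drops to roughly 54 dB at 2 m and 48 dB at 4 m.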
A practical, low-noise coil system for magnetotellurics
Stanley, William D.; Tinkler, Richard D.
1983-01-01
Magnetotellurics is a geophysical technique which was developed by Cagniard (1953) and Tikhonov (1950) and later refined by other scientists worldwide. The technique is a method of electromagnetic sounding of the Earth and is based upon the skin-depth effect in conductive media. The electric and magnetic fields arising from natural sources are measured at the surface of the earth over broad frequency bands. An excellent review of the technique is provided in the paper by Vozoff (1972). The sources of the natural fields are found in two basic mechanisms. At frequencies above a few hertz, most of the energy arises from lightning in thunderstorm belts around the equatorial regions. This energy is propagated in a waveguide formed by the earth-ionospheric cavity. Energy levels are higher at the fundamental modes of this cavity, but sufficient energy exists over most of the audio range to be useful for sounding at these frequencies, in which case the technique is generally referred to as audio-magnetotellurics or AMT. At frequencies lower than audio, and in general below 1 Hz, the source of naturally occurring electromagnetic energy is found in ionospheric currents. Current systems flowing in the ionosphere generate EM waves which can be used in sounding of the earth. These fields generate a relatively complete spectrum of electromagnetic energy that extends from around 1 Hz to periods of one day. Figure 1 shows an amplitude spectrum characteristic of both the ionospheric and lightning sources, covering a frequency range from 0.0001 Hz to 1000 Hz. It can be seen that there is a minimum in signal levels at about 1 Hz, in the gap between the two sources, and that the signal level increases with decreasing frequency.
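The skin-depth effect mentioned above has a standard closed form for a uniform conductive half-space, which shows why lower frequencies sound deeper structure. A minimal sketch (resistivity in ohm-metres, frequency in hertz; the function name is an assumption):

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum magnetic permeability, H/m

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """Electromagnetic skin depth in a uniform conductive half-space:
    delta = sqrt(rho / (pi * f * mu0)) ~= 503 * sqrt(rho / f) metres."""
    return math.sqrt(resistivity_ohm_m / (math.pi * freq_hz * MU_0))
```

At 1 Hz in 1 ohm-m material the skin depth is about 500 m; increasing resistivity a hundredfold (or dividing frequency by 100) multiplies the depth by 10.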
A generalized sound extrapolation method for turbulent flows
NASA Astrophysics Data System (ADS)
Zhong, Siyang; Zhang, Xin
2018-02-01
Sound extrapolation methods are often used to compute acoustic far-field directivities from near-field flow data in aeroacoustics applications. The results may be erroneous if the volume integrals are neglected (to save computational cost) while non-acoustic fluctuations are collected on the integration surfaces. In this work, we develop a new sound extrapolation method based on an acoustic analogy using Taylor's hypothesis (Taylor 1938 Proc. R. Soc. Lond. A 164, 476-490. (doi:10.1098/rspa.1938.0032)). Typically, a convection operator is used to filter out the acoustically inefficient components in the turbulent flows, and an acoustics-dominant indirect variable D_c p′ is solved for. The sound pressure p′ at the far field is computed from D_c p′ based on the asymptotic properties of the Green's function. Validation results for benchmark problems with well-defined sources match well with the exact solutions. For aeroacoustics applications, the sound predictions for aerofoil-gust interaction are close to those of an earlier method specially developed to remove the effect of vortical fluctuations (Zhong & Zhang 2017 J. Fluid Mech. 820, 424-450. (doi:10.1017/jfm.2017.219)); for the case of vortex-shedding noise from a cylinder, the off-body predictions by the proposed method match well with the on-body Ffowcs Williams-Hawkings result; and different integration surfaces yield close predictions (of both spectra and far-field directivities) for a co-flowing jet case using an established direct numerical simulation database. The results suggest that the method may be a potential candidate for sound projection in aeroacoustics applications.
ERIC Educational Resources Information Center
Lalonde, Kaylah; Holt, Rachael Frush
2014-01-01
Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2- year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically…
The use of an intraoral electrolarynx for an edentulous patient: a clinical report.
Wee, Alvin G; Wee, Lisa A; Cheng, Ansgar C; Cwynar, Roger B
2004-06-01
This clinical report describes the clinical requirements, treatment sequence, and use of a relatively new intraoral electrolarynx for a completely edentulous patient. This device consists of a sound source attached to the maxilla and a hand-held controller unit that controls the pitch and volume of the intraoral sound source via transmitted radio waves.
Spherical beamforming for spherical array with impedance surface
NASA Astrophysics Data System (ADS)
Tontiwattanakul, Khemapat
2018-01-01
Spherical microphone array beamforming has been a popular research topic in recent years. Due to their isotropic beam pattern in three-dimensional space over a certain frequency range, these arrays are widely used in many applications such as sound field recording, acoustic beamforming, and noise source localisation. The body of a spherical array is usually considered perfectly rigid. A sound field captured by the sensors on a spherical array can be decomposed into a series of spherical harmonics. In noise source localisation, the amplitude density of sound sources is estimated and illustrated by means of colour maps. In this work, a rigid spherical array covered by fibrous materials is studied via numerical simulation and the performance of the spherical beamforming is discussed.
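A first-order version of the spherical-harmonic decomposition described above can be sketched as a least-squares fit of real SH basis functions to the pressures sampled at the microphone directions. This is an illustrative, numpy-only sketch under simplifying assumptions: practical arrays use higher orders and account for the rigid-sphere radial terms, and the function names are my own.

```python
import numpy as np

def real_sh_basis_order1(theta, phi):
    """Real spherical harmonics up to order 1 at azimuth theta and
    polar angle phi, in ACN order: Y00, Y1-1, Y10, Y11."""
    x = np.sin(phi) * np.cos(theta)
    y = np.sin(phi) * np.sin(theta)
    z = np.cos(phi)
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(z, c0), c1 * y, c1 * z, c1 * x], axis=-1)

def sh_decompose(pressure, theta, phi):
    """Least-squares first-order SH coefficients of a pressure field
    sampled at the given microphone directions."""
    Y = real_sh_basis_order1(theta, phi)
    coeffs, *_ = np.linalg.lstsq(Y, pressure, rcond=None)
    return coeffs
```

With enough well-spread sample directions, a field synthesized from known first-order coefficients is recovered exactly by the fit.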
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Received time series at a short distance from the source allow the identification of distinct paths; four of these are the direct path, surface and bottom reflections, and a sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the arrival-time densities through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.
NASA Astrophysics Data System (ADS)
Sugimoto, Tsuneyoshi; Uechi, Itsuki; Sugimoto, Kazuko; Utagawa, Noriyuki; Katakura, Kageyoshi
The hammering test is widely used to inspect for defects in concrete structures. However, this method is difficult to apply at high places, such as a tunnel ceiling or a bridge girder, and its detection accuracy depends on the tester's experience. Therefore, we study a non-contact acoustic inspection method for concrete structures using airborne sound waves and a laser Doppler vibrometer. In this method, the concrete surface is excited by airborne sound emitted from a long range acoustic device (LRAD), and the vibration velocity on the concrete surface is measured by a laser Doppler vibrometer. A defect is detected by the same flexural resonance as in the hammering method. It has already been shown that a defect can be detected from a distance of 5 m or more using a concrete test object, and that the method can also be applied to real concrete structures. However, when a conventional LRAD was used as the sound source, there were problems such as restrictions on the measurement angle and interference from surrounding noise. To solve these problems, a basic examination using a strong ultrasonic sound source was carried out. In the experiment, a concrete test object containing an imitation defect was measured from a distance of 5 m. The results show that, with the ultrasonic sound source, the restrictions on the measurement angle become less severe and the surrounding noise falls dramatically.
75 FR 39915 - Marine Mammals; File No. 15483
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-13
... whales adjust their bearing to avoid received sound pressure levels greater than 120 dB, which would... marine mammals may be taken by Level B harassment as researchers attempt to provoke an avoidance response through sound transmission into their environment. The sound source consists of a transmitter and...
24 CFR 51.103 - Criteria and standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-night average sound level produced as the result of the accumulation of noise from all sources contributing to the external noise environment at the site. Day-night average sound level, abbreviated as DNL and symbolized as Ldn, is the 24-hour average sound level, in decibels, obtained after addition of 10...
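The day-night average sound level (DNL, Ldn) defined above, a 24-hour energy average with a 10 dB penalty added to nighttime levels, can be sketched as follows. Illustrative only; it assumes the standard Ldn night period of 22:00-07:00 and one Leq value per clock hour, and the function name is an assumption.

```python
import math

def day_night_level(hourly_leq):
    """Ldn (DNL) in dB from 24 hourly Leq values indexed 0-23 by
    clock hour. Night hours (22:00-07:00) receive a 10 dB penalty
    before energy-averaging over the 24 hours."""
    assert len(hourly_leq) == 24
    total = 0.0
    for hour, leq in enumerate(hourly_leq):
        penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
        total += 10.0 ** ((leq + penalty) / 10.0)
    return 10.0 * math.log10(total / 24.0)
```

Note that a site with a constant 60 dB Leq around the clock has a DNL of about 66.4 dB, because the nine penalized night hours dominate the energy average.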
Complete data listings for CSEM soundings on Kilauea Volcano, Hawaii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kauahikaua, J.; Jackson, D.B.; Zablocki, C.J.
1983-01-01
This document contains complete data from a controlled-source electromagnetic (CSEM) sounding/mapping project at Kilauea volcano, Hawaii. The data were obtained at 46 locations about a fixed-location, horizontal, polygonal loop source in the summit area of the volcano. The data consist of magnetic field amplitudes and phases at excitation frequencies between 0.04 and 8 Hz. The vector components were measured in a cylindrical coordinate system centered on the loop source. 5 references.
Binaural model-based dynamic-range compression.
Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D
2018-01-26
Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked, model-based, fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural fitting and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. Thirty and 12 hearing-impaired (HI) listeners were aided individually with the algorithms for the two experimental parts, respectively. A small preference towards the proposed model-based algorithm was found in the direct quality comparison. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures; however, linking could be situationally beneficial in preserving a natural binaural perception of the acoustical environment.
The Physiological Basis of Chinese Höömii Generation.
Li, Gelin; Hou, Qian
2017-01-01
The study aimed to investigate the physiological basis of the vibration mode of the sound source in a variety of Mongolian höömii forms of singing in China. The participant was a Mongolian höömii performing artist recommended by the Chinese Medical Association of Art. He used three types of höömii, namely vibration höömii, whistle höömii, and overtone höömii, which were compared with comfortable pronunciation of /i:/ as a control. Phonation was observed during /i:/. A laryngostroboscope (Storz) was used to determine the vibration source-mucosal wave in the throat. For vibration höömii, the bilateral ventricular folds approximated to the midline and made contact at the midline during pronunciation. The ventricular and vocal folds oscillated together as a single unit to form a composite vibration (double oscillator) sound source. For whistle höömii, the ventricular folds approximated to the midline to cover part of the vocal folds but did not contact each other, and did not produce a mucosal wave; the vocal folds produced a mucosal wave to form a single vibration sound source. For overtone höömii, the anterior two-thirds of the ventricular folds touched each other during pronunciation, and the remaining one-third produced the mucosal wave. The vocal folds produced a mucosal wave at the same time, which was a composite vibration (double oscillator) sound source mode. The höömii form of singing, including mixed voices and multivoice, was related to the presence of dual vibration sound sources. Its high-overtone form of singing (whistle höömii) was related to stenosis at the resonance chambers' initiation site (ventricular folds level). Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
The meaning of city noises: Investigating sound quality in Paris (France)
NASA Astrophysics Data System (ADS)
Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie
2004-05-01
The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces was demonstrated in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of the physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This suggests that both quantitative and qualitative descriptions are needed to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.
NASA Astrophysics Data System (ADS)
Nikolić, Dalibor; Milošević, Žarko; Saveljić, Igor; Filipović, Nenad
2015-12-01
Vibration of the skull causes a hearing sensation known as bone-conducted (BC) sound. Several investigations have examined the transmission properties of bone-conducted sound. The aim of this study was to develop a software tool for easy generation of a finite element (FE) model of the human head with different materials, based on human head anatomy, and to calculate sound conduction through the head. The developed software tool generates a model in a few steps. The first step is segmentation of CT medical images (DICOM) to generate surface mesh files (STL). Each STL file represents a different layer of the human head with different material properties (brain, CSF, different layers of the skull bone, skin, etc.). The next steps are to build a tetrahedral mesh from the obtained STL files, define the FE model boundary conditions, and solve the FE equations. The tool uses the PAK solver, an open-source solver implemented in the SIFEM FP7 project, for calculations of head vibration. The purpose of this tool is to show the impact of bone-conducted sound on the hearing system and to assess how well the computed results match experimental measurements.
Impact of low-frequency sound on historic structures
NASA Astrophysics Data System (ADS)
Sutherland, Louis C.; Horonjeff, Richard D.
2005-09-01
In common usage, the term soundscape usually refers to portions of the sound spectrum audible to human observers and, perhaps more broadly, to other members of the animal kingdom. There is, however, a soundscape regime at the low end of the frequency spectrum (e.g., 10-25 Hz), inaudible to humans, where nonindigenous sound energy may cause noise-induced vibrations in structures. Such low-frequency components may be of sufficient magnitude to pose a damage risk to historic structures and cultural resources. Examples include Anasazi cliff and cave dwellings, and pueblo structures of viga-type roof construction. Both are susceptible to noise-induced vibration from low-frequency sound pressures that excite resonant frequencies in these structures. The initial damage mechanism is usually fatigue cracking. Many mechanisms are subtle, temporally multiphased, and not initially evident to the naked eye. This paper reviews the types of sources posing the greatest potential threat, their low-frequency spectral characteristics, typical structural responses, and the damage risk mechanisms involved. Measured sound and vibration levels, case history studies, and conditions favorable to damage risk are presented. The paper concludes with recommendations for increasing the damage risk knowledge base to better protect these resources.
Prediction of break-out sound from a rectangular cavity via an elastically mounted panel.
Wang, Gang; Li, Wen L; Du, Jingtao; Li, Wanyou
2016-02-01
The break-out sound from a cavity via an elastically mounted panel is predicted in this paper. The vibroacoustic system model is derived based on the so-called spectro-geometric method, in which the solution over each sub-domain is invariably expressed as a modified Fourier series expansion. Unlike the traditional modal superposition methods, the continuity of the normal velocities is faithfully enforced on the interfaces between the flexible panel and the (interior and exterior) acoustic media. A fully coupled vibro-acoustic system is obtained by taking into account the strong coupling between the vibration of the elastic panel and the sound fields on both sides. The typically time-consuming calculation of the quadruple integrals encountered in determining the sound power radiated from a panel is effectively avoided by reducing them, via discrete cosine transform, to a number of single integrals that are subsequently calculated analytically in closed form. Several numerical examples are presented to validate the system model, understand the effects of panel mounting conditions on sound transmission, and demonstrate the dependence of the "measured" transmission loss on the size of the source room.
Acoustic positioning for space processing experiments
NASA Technical Reports Server (NTRS)
Whymark, R. R.
1974-01-01
An acoustic positioning system is described that is adaptable to a range of processing chambers and furnace systems. Operation at temperatures exceeding 1000 C is demonstrated in experiments involving the levitation of liquid and solid glass materials up to several ounces in weight. The system consists of a single source of sound that is beamed at a reflecting surface placed a distance away. Stable levitation is achieved at a succession of discrete energy minima distributed throughout the volume between the reflector and the sound source. Several specimens can be handled at one time. Metal discs up to 3 inches in diameter can be levitated, as can solid spheres of dense material up to 0.75 inch in diameter; liquids can be freely suspended in 1 g as near-spherical droplets up to 0.25 inch in diameter or as flattened liquid discs up to 0.6 inch in diameter. Larger specimens may be handled by increasing the size of the sound source or by reducing the sound frequency.
Sound source localization on an axial fan at different operating points
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes
2016-08-01
A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
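As an illustrative aside, the virtual rotating array idea described above can be sketched as a per-sample interpolation of the fixed microphone signals to positions that co-rotate with the fan. The sketch below is a minimal toy (ring geometry, linear interpolation, invented parameters), not the interpolation scheme actually used by the authors.

```python
import numpy as np

# Toy sketch of a virtual rotating array (NOT the authors' implementation):
# signals from a fixed ring of microphones are interpolated, sample by
# sample, to positions that co-rotate with the fan, so that beamforming can
# treat the rotating sources as stationary. All parameters are invented.
n_mics, fs, rpm = 16, 48_000, 1500.0
n_samples = 1024
signals = np.random.default_rng(1).standard_normal((n_mics, n_samples))

t = np.arange(n_samples) / fs
rot = (rpm / 60.0) * t * n_mics        # rotation expressed in microphone spacings

virtual = np.empty_like(signals)
cols = np.arange(n_samples)
for m in range(n_mics):
    pos = (m + rot) % n_mics           # fractional fixed-mic index at each sample
    i0 = np.floor(pos).astype(int) % n_mics
    i1 = (i0 + 1) % n_mics
    w = pos - np.floor(pos)
    # linearly interpolate between the two neighbouring fixed microphones
    virtual[m] = (1.0 - w) * signals[i0, cols] + w * signals[i1, cols]
```

At t = 0 the virtual and fixed arrays coincide, so the first column of `virtual` equals that of `signals`; a real implementation would use higher-order interpolation in space and time before applying CLEAN-SC.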
Study of the Acoustic Effects of Hydrokinetic Tidal Turbines in Admiralty Inlet, Puget Sound
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brian Polagye; Jim Thomson; Chris Bassett
2012-03-30
Hydrokinetic turbines will be a source of noise in the marine environment, both during operation and during installation/removal. High-intensity sound can cause injury or behavioral changes in marine mammals and may also affect fish and invertebrates. These noise effects are, however, highly dependent on the individual marine animals; the intensity, frequency, and duration of the sound; and the context in which the sound is received. In other words, production of sound is a necessary, but not sufficient, condition for an environmental impact. At a workshop on the environmental effects of tidal energy development, experts identified sound produced by turbines as an area of potentially significant impact, but also of high uncertainty. The overall objectives of this project are to improve our understanding of the potential acoustic effects of tidal turbines by: (1) Characterizing sources of existing underwater noise; (2) Assessing the effectiveness of monitoring technologies to characterize underwater noise and marine mammal responsiveness to noise; (3) Evaluating the sound profile of an operating tidal turbine; and (4) Studying the effect of turbine sound on surrogate species in a laboratory environment. This study focuses on a specific case study for tidal energy development in Admiralty Inlet, Puget Sound, Washington (USA), but the methodologies and results are applicable to other turbine technologies and geographic locations. The project succeeded in achieving the above objectives and, in doing so, substantially contributed to the body of knowledge around the acoustic effects of tidal energy development in several ways: (1) Through collection of data from Admiralty Inlet, established the sources of sound generated by strong currents (mobilizations of sediment and gravel) and determined that low-frequency sound recorded during periods of strong currents is non-propagating pseudo-sound.
This helped to advance the debate within the marine and hydrokinetics acoustic community as to whether strong currents produce propagating sound. (2) Analyzed data collected from a tidal turbine operating at the European Marine Energy Center to develop a profile of turbine sound and developed a framework to evaluate the acoustic effects of deploying similar devices in other locations. This framework has been applied to Public Utility District No. 1 of Snohomish County's demonstration project in Admiralty Inlet to inform post-installation acoustic and marine mammal monitoring plans. (3) Demonstrated passive acoustic techniques to characterize the ambient noise environment at tidal energy sites (fixed, long-term observations recommended) and to characterize the sound from anthropogenic sources (drifting, short-term observations recommended). (4) Demonstrated the utility and limitations of instrumentation, including bottom-mounted instrumentation packages, infrared cameras, and vessel monitoring systems. In doing so, also demonstrated how this type of comprehensive information is needed to interpret observations from each instrument (e.g., hydrophone data can be combined with vessel tracking data to evaluate the contribution of vessel sound to ambient noise). (5) Conducted a study suggesting that harbor porpoise in Admiralty Inlet may be habituated to high levels of ambient noise due to omnipresent vessel traffic. The inability to detect behavioral changes associated with a high-intensity source of opportunity (a passenger ferry) has informed the approach for post-installation marine mammal monitoring. (6) Conducted laboratory exposure experiments on juvenile Chinook salmon and showed that exposure to a worse-than-worst-case acoustic dose of turbine sound does not result in changes to hearing thresholds or biologically significant tissue damage.
Collectively, this means that Chinook salmon may be at relatively low risk of injury from sound produced by tidal turbines located in or near their migration path. In achieving these accomplishments, the project has significantly advanced the District's goals of developing a demonstration-scale tidal energy project in Admiralty Inlet. Pilot demonstrations of this type are an essential step in the development of commercial-scale tidal energy in the United States. This is a renewable resource capable of producing electricity in a highly predictable manner.
A High-Resolution Stopwatch for Cents
ERIC Educational Resources Information Center
Gingl, Z.; Kopasz, K.
2011-01-01
A very low-cost, easy-to-make stopwatch is presented to support various experiments in mechanics. The high-resolution stopwatch is based on two photodetectors connected directly to the microphone input of a sound card. Dedicated free open-source software has been developed and made available to download. The efficiency is demonstrated by a free…
Environmentally Sound Small-Scale Agricultural Projects: Guidelines for Planning. Revised Edition.
ERIC Educational Resources Information Center
Altieri, Miguel; Vukasin, Helen L., Ed.
Environmental planning requires more than finding the right technology and a source of funds. Planning involves consideration of the social, cultural, economic, and natural environments in which the project occurs. The challenge is to develop sustainable food systems that have reasonable production but do not degrade the resource base and upset…
Neural Coding of Relational Invariance in Speech: Human Language Analogs to the Barn Owl.
ERIC Educational Resources Information Center
Sussman, Harvey M.
1989-01-01
The neuronal model shown to code sound-source azimuth in the barn owl by H. Wagner et al. in 1987 is used as the basis for a speculative brain-based human model, which can establish contrastive phonetic categories to solve the problem of perception "non-invariance." (SLD)
Exploring Future Energy Choices with Young People
ERIC Educational Resources Information Center
MacGarry, Ann
2014-01-01
The article outlines a couple of the most recent resources developed by the Centre for Alternative Technology for teaching about energy. The key elements are providing sound information on all the significant sources and inspiring pupils to make their own decisions about energy futures based on evidence. Our experience is that engaging pupils in…
A Revenue Planning Tool for Charter School Operators
ERIC Educational Resources Information Center
Keller, Eric; Hayes, Cheryl D.
2009-01-01
This revenue planning tool aims to help charter school operators develop a sound revenue base that can meet their school's current and future funding needs. It helps identify and assess potential public (federal, state, and local) and private funding sources. The tool incorporates a four-step revenue planning process which includes: (1)…
Progress Toward Improving Jet Noise Predictions in Hot Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Kenzakowski, Donald C.
2007-01-01
An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.
NASA Technical Reports Server (NTRS)
Groza, A.; Calciu, J.; Nicola, I.; Ionasek, A.
1974-01-01
Sound level measurements of noise sources on buses are used to assess how sound-proofing applied during a complete overhaul attenuates acoustic pressure levels inside the bus. A spectral analysis of the sound level as a function of motor speed, bus speed along the road, and road category is reported.
Functional morphology of the sound-generating labia in the syrinx of two songbird species.
Riede, Tobias; Goller, Franz
2010-01-01
In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function.
NASA Technical Reports Server (NTRS)
Kontos, Karen B.; Kraft, Robert E.; Gliebe, Philip R.
1996-01-01
The Aircraft Noise Prediction Program (ANOPP) is an industry-wide tool used to predict turbofan engine flyover noise in system noise optimization studies. Its goal is to provide the best currently available methods for source noise prediction. As part of a program to improve the Heidmann fan noise model, models for fan inlet and fan exhaust noise suppression estimation have been developed that are based on simple engine and acoustic geometry inputs. The models can be used to predict sound power level suppression and sound pressure level suppression at a position specified relative to the engine inlet.
Galindo-Romero, Marta; Lippert, Tristan; Gavrilov, Alexander
2015-12-01
This paper presents an empirical linear equation for predicting the peak pressure level of anthropogenic impulsive signals based on its correlation with the sound exposure level. The regression coefficients are shown to be weakly dependent on the environmental characteristics but governed by the source type and parameters. The equation can be applied to values of the sound exposure level predicted with a numerical model, which provides a significant improvement in the prediction of the peak pressure level. Part I presents the analysis for airgun array signals, and Part II considers the application of the empirical equation to offshore impact piling noise.
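The kind of empirical linear relation described (peak pressure level regressed on sound exposure level) can be sketched as a simple least-squares fit. All numbers below are synthetic; the slope and intercept are not the paper's regression coefficients.

```python
import numpy as np

# Synthetic illustration of fitting a linear relation
#   Lpeak = a * SEL + b
# between peak pressure level and sound exposure level. The data and the
# recovered coefficients are invented, not the paper's regression results.
sel = np.array([160.0, 165.0, 170.0, 175.0, 180.0])     # SEL, dB re 1 uPa^2 s
l_peak = np.array([178.6, 183.4, 188.2, 193.0, 197.8])  # peak level, dB re 1 uPa

a, b = np.polyfit(sel, l_peak, 1)  # least-squares slope and intercept

def predict_peak(sel_value):
    """Predict the peak pressure level from a (modelled) SEL value."""
    return a * sel_value + b
```

With these synthetic data the fit recovers a = 0.96 and b = 25.0; in practice `sel` would come from a numerical propagation model, as the paper proposes.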
Recovery of burner acoustic source structure from far-field sound spectra
NASA Technical Reports Server (NTRS)
Mahan, J. R.; Jones, J. D.
1984-01-01
A method is presented that permits the thermal-acoustic efficiency spectrum in a long turbulent burner to be recovered from the corresponding far-field sound spectrum. An acoustic source/propagation model is used based on the perturbation solution of the equations describing the unsteady one-dimensional flow of an inviscid ideal gas with a distributed heat source. The technique is applied to a long cylindrical hydrogen-flame burner operating over power levels of 4.5-22.3 kW. The results show that the thermal-acoustic efficiency at a given frequency, defined as the fraction of the total burner power converted to acoustic energy at that frequency, is rather insensitive to burner power, having a maximum value on the order of 10^-4 at 150 Hz and rolling off steeply with increasing frequency. Evidence is presented that acoustic agitation of the flame at low frequencies enhances the mixing of the unburned fuel and air with the hot products of combustion. The paper establishes the potential of the technique as a useful tool for characterizing the acoustic source structure in any burner, such as a gas turbine combustor, for which a reasonable acoustic propagation model can be postulated.
Miller, Patrick J O
2006-05-01
Signal source intensity and detection range, which integrates source intensity with propagation loss, background noise and receiver hearing abilities, are important characteristics of communication signals. Apparent source levels were calculated for 819 pulsed calls and 24 whistles produced by free-ranging resident killer whales by triangulating the angles-of-arrival of sounds on two beamforming arrays towed in series. Levels in the 1-20 kHz band ranged from 131 to 168 dB re 1 microPa at 1 m, with differences in the means of different sound classes (whistles: 140.2+/-4.1 dB; variable calls: 146.6+/-6.6 dB; stereotyped calls: 152.6+/-5.9 dB), and among stereotyped call types. Repertoire diversity carried through to estimates of active space, with "long-range" stereotyped calls all containing overlapping, independently-modulated high-frequency components (mean estimated active space of 10-16 km in sea state zero) and "short-range" sounds (5-9 km) included all stereotyped calls without a high-frequency component, whistles, and variable calls. Short-range sounds are reported to be more common during social and resting behaviors, while long-range stereotyped calls predominate in dispersed travel and foraging behaviors. These results suggest that variability in sound pressure levels may reflect diverse social and ecological functions of the acoustic repertoire of killer whales.
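The apparent source levels reported above are back-calculated from received levels and triangulated ranges. A minimal sketch under the assumption of simple spherical spreading (the study's actual propagation-loss model may differ) is:

```python
import math

# Hedged sketch: back-calculate an apparent source level (dB re 1 uPa at 1 m)
# from a received level and a triangulated range, assuming simple spherical
# spreading, TL = 20*log10(r). This is an illustration, not the study's
# propagation model.
def apparent_source_level(received_db, range_m):
    transmission_loss = 20.0 * math.log10(range_m)
    return received_db + transmission_loss

# A call received at 110 dB re 1 uPa from a 1000 m range
asl = apparent_source_level(110.0, 1000.0)
```

Here the 1000 m range adds 60 dB of spreading loss, giving an apparent source level of 170 dB re 1 uPa at 1 m, within the 131-168 dB range reported only because the inputs are invented.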
Delgutte, Bertrand
2015-01-01
At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
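The maximum-likelihood population decoder mentioned above can be illustrated with a toy Poisson model: given each neuron's azimuth tuning curve, choose the azimuth that maximizes the summed log-likelihood of the observed spike counts. The tuning curves, neuron count, and parameters below are invented for illustration and are not the rabbit IC data.

```python
import numpy as np

# Toy maximum-likelihood population decoder of source azimuth from Poisson
# spike counts; tuning curves and parameters are invented.
azimuths = np.linspace(-90.0, 90.0, 181)      # candidate azimuths, 1 deg grid
rng = np.random.default_rng(0)

def make_tuning(pref):
    # broad, smooth expected spike count per trial for one neuron
    return 2.0 + 18.0 * np.exp(-0.5 * ((azimuths - pref) / 40.0) ** 2)

rates = np.stack([make_tuning(p) for p in (-60.0, -20.0, 20.0, 60.0)])

def decode_ml(counts):
    # Poisson log-likelihood up to a counts-only constant:
    #   sum_i [ k_i * log r_i(az) - r_i(az) ]
    loglik = counts @ np.log(rates) - rates.sum(axis=0)
    return azimuths[np.argmax(loglik)]

true_idx = int(np.argmin(np.abs(azimuths - 30.0)))
estimate = decode_ml(rng.poisson(rates[:, true_idx]))
```

Feeding the decoder the noiseless mean counts returns the true azimuth exactly; with Poisson noise the estimate scatters around it. Level-dependent linear transformations of the tuning curves, as reported, would let one decoder parameterization cover multiple sound levels.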
Aircraft Engine Noise Scattering - A Discontinuous Spectral Element Approach
NASA Technical Reports Server (NTRS)
Stanescu, D.; Hussaini, M. Y.; Farassat, F.
2002-01-01
The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. Our approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.
Aircraft Engine Noise Scattering By Fuselage and Wings: A Computational Approach
NASA Technical Reports Server (NTRS)
Stanescu, D.; Hussaini, M. Y.; Farassat, F.
2003-01-01
The paper presents a time-domain method for computation of sound radiation from aircraft engine sources to the far-field. The effects of nonuniform flow around the aircraft and scattering of sound by fuselage and wings are accounted for in the formulation. The approach is based on the discretization of the inviscid flow equations through a collocation form of the Discontinuous Galerkin spectral element method. An isoparametric representation of the underlying geometry is used in order to take full advantage of the spectral accuracy of the method. Large-scale computations are made possible by a parallel implementation based on message passing. Results obtained for radiation from an axisymmetric nacelle alone are compared with those obtained when the same nacelle is installed in a generic configuration, with and without a wing.
A mechanism study of sound wave-trapping barriers.
Yang, Cheng; Pan, Jie; Cheng, Li
2013-09-01
The performance of a sound barrier is usually degraded if a large reflecting surface is placed on the source side. A wave-trapping barrier (WTB), with its inner surface covered by wedge-shaped structures, has been proposed to confine waves within the area between the barrier and the reflecting surface and thus improve the performance. In this paper, the deterioration in performance of a conventional sound barrier due to the reflecting surface is first explained in terms of the resonance effect of the trapped modes. At each resonance frequency, a strong, mode-controlled sound field is generated by the noise source both within and in the vicinity outside the region bounded by the sound barrier and the reflecting surface. It is found that the peak sound pressures in the barrier's shadow zone, which correspond to the minima in the barrier's insertion loss, are largely determined by the resonance frequencies and by the shapes and losses of the trapped modes. These peak pressures usually produce a high sound intensity component impinging normally on the barrier surface near the top. The WTB can alter the sound wave diffraction at the top of the barrier if the wavelengths of the sound are comparable to or smaller than the dimensions of the wedges. In this case, the modified barrier profile is capable of reorganizing the pressure distribution within the bounded domain and altering the acoustic properties near the top of the sound barrier.
Measurement of attenuation coefficients of the fundamental and second harmonic waves in water
NASA Astrophysics Data System (ADS)
Zhang, Shuzeng; Jeong, Hyunjo; Cho, Sungjong; Li, Xiongbing
2016-02-01
Attenuation corrections in nonlinear acoustics play an important role in the study of nonlinear fluids, biomedical imaging, and solid material characterization. Measuring attenuation coefficients in the nonlinear regime is not easy because the coefficients depend on the source pressure and accurate diffraction corrections are required. In this work, the attenuation coefficients of the fundamental and second harmonic waves, which arise from absorption in water, are measured in nonlinear ultrasonic experiments. Based on the quasilinear theory of the KZK equation, the nonlinear sound field equations are derived and the diffraction correction terms are extracted. The measured sound pressure amplitudes are first adjusted for diffraction in order to reduce the impact of diffraction on the measured attenuation coefficients. The attenuation coefficients of the fundamental and second harmonics are then calculated from a nonlinear least-squares curve fit of the experimental data. The results show that attenuation coefficients in the nonlinear regime depend on both frequency and source pressure, quite unlike the linear regime. At relatively low drive pressures, the attenuation coefficients increase linearly with frequency; at high drive pressures, however, they grow nonlinearly. As the diffraction corrections are obtained from the quasilinear theory, it is important to use an appropriate source pressure for accurate attenuation measurements.
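As a loose illustration of the curve-fitting step, an attenuation coefficient can be recovered from diffraction-corrected amplitudes assumed to decay as p(z) = p0·exp(-αz). For brevity this sketch linearizes the fit with a logarithm, whereas the paper performs a full nonlinear least-squares fit; all data below are synthetic.

```python
import numpy as np

# Synthetic sketch: recover an attenuation coefficient alpha from
# diffraction-corrected axial amplitudes assumed to follow
#   p(z) = p0 * exp(-alpha * z).
# The fit is linearized by taking logarithms for brevity; the paper uses a
# full nonlinear least-squares fit. All values are invented.
z = np.linspace(0.05, 0.5, 10)       # propagation distance, m
alpha_true, p0_true = 2.5, 1.0e5     # assumed attenuation (Np/m) and amplitude (Pa)
p = p0_true * np.exp(-alpha_true * z)

slope, intercept = np.polyfit(z, np.log(p), 1)
alpha_fit = -slope                    # recovered attenuation coefficient
p0_fit = np.exp(intercept)            # recovered source amplitude
```

On noiseless data the log-linear fit recovers the assumed coefficient exactly; with measurement noise, and with the source-pressure dependence the paper reports, a genuine nonlinear fit at each drive level would be needed.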
Surface acoustical intensity measurements on a diesel engine
NASA Technical Reports Server (NTRS)
Mcgary, M. C.; Crocker, M. J.
1980-01-01
The use of surface intensity measurements as an alternative to the conventional selective wrapping technique of noise source identification and ranking on diesel engines was investigated. A six cylinder, in line turbocharged, 350 horsepower diesel engine was used. Sound power was measured under anechoic conditions for eight separate parts of the engine at steady state operating conditions using the conventional technique. Sound power measurements were repeated on five separate parts of the engine using the surface intensity at the same steady state operating conditions. The results were compared by plotting sound power level against frequency and noise source rankings for the two methods.
NASA Astrophysics Data System (ADS)
Cowan, James
This chapter summarizes and explains key concepts of building acoustics. These issues include the behavior of sound waves in rooms, the most commonly used rating systems for sound and sound control in buildings, the most common noise sources found in buildings, practical noise control methods for these sources, and the specific topic of office acoustics. Common noise issues for multi-dwelling units can be derived from most of the sections of this chapter. Books can be and have been written on each of these topics, so the purpose of this chapter is to summarize this information and provide appropriate resources for further exploration of each topic.
Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin
2017-02-04
The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp.], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the need for future research to utilize accurate localization systems, different species, and validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.
On Identifying the Sound Sources in a Turbulent Flow
NASA Technical Reports Server (NTRS)
Goldstein, M. E.
2008-01-01
A space-time filtering approach is used to divide an unbounded turbulent flow into its radiating and non-radiating components. The result is then used to clarify a number of issues including the possibility of identifying the sources of the sound in such flows. It is also used to investigate the efficacy of some of the more recent computational approaches.
The sound field of a rotating dipole in a plug flow.
Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H
2018-04-01
An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.
Influence of Music on the Behaviors of Crowd in Urban Open Public Spaces
Meng, Qi; Zhao, Tingting; Kang, Jian
2018-01-01
Sound environment plays an important role in urban open spaces, yet studies on the effects of perception of the sound environment on crowd behaviors have been limited. The aim of this study, therefore, is to explore how music, which is considered an important soundscape element, affects crowd behaviors in urban open spaces. On-site observations were performed at a 100 m × 70 m urban leisure square in Harbin, China. Typical music was used to study the effects of perception of the sound environment on crowd behaviors; then, these behaviors were classified into movement (passing by and walking around) and non-movement behaviors (sitting). The results show that the path of passing by in an urban leisure square with music was more centralized than without music. Without music, 8.3% of people passing by walked near the edge of the square, whereas with music, this percentage was zero. In terms of the speed of passing by behavior, no significant difference was observed with the presence or absence of background music. Regarding the effect of music on walking around behavior in the square, the mean area and perimeter when background music was played were smaller than without background music. The mean speed of those exhibiting walking around behavior with background music in the square was 0.296 m/s slower than when no background music was played. For those exhibiting sitting behavior, when background music was not present, crowd density showed no variation based on the distance from the sound source. When music was present, it was observed that as the distance from the sound source increased, the crowd density of those exhibiting sitting behavior decreased accordingly. PMID:29755390
Active Exhaust Silencing System for the Management of Auxiliary Power Unit Sound Signatures
2014-08-01
conceptual mass-less pistons are introduced into the system before and after the injection site, such that they will move exactly with the plane wave... either the primary source or the injected source. It is assumed that the pistons are 'close... source, it causes both pistons to move identically. The pressures induced by the flow on the pistons do not affect the flow generated by the
The rotary subwoofer: a controllable infrasound source.
Park, Joseph; Garcés, Milton; Thigpen, Bruce
2009-04-01
The rotary subwoofer is a novel acoustic transducer capable of projecting infrasonic signals at high sound pressure levels. The projector produces higher acoustic particle velocities than conventional transducers, which translates into higher radiated sound pressure levels. This paper characterizes the measured performance of a rotary subwoofer and presents a model to predict its sound pressure levels.
Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard
2010-02-01
The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
Shen, Xiaohan; Lu, Zonghuan; Timalsina, Yukta P; Lu, Toh-Ming; Washington, Morris; Yamaguchi, Masashi
2018-05-04
We experimentally demonstrated a narrowband acoustic phonon source with simultaneous tunability of the centre frequency and the spectral bandwidth in the GHz-sub-THz frequency range, based on photoacoustic excitation using intensity-modulated optical pulses. The centre frequency and bandwidth are tunable from 65 to 381 GHz and 17 to 73 GHz, respectively. The dispersion of the sound velocity and the attenuation of acoustic phonons in silicon dioxide (SiO2) and indium tin oxide (ITO) thin films were investigated using this acoustic phonon source. The sound velocities of the SiO2 and ITO films were frequency-independent in the measured frequency range. On the other hand, the phonon attenuation of both the SiO2 and ITO films showed a quadratic frequency dependence, with polycrystalline ITO attenuating several times more strongly than amorphous SiO2. In addition, the selective excitation of mechanical resonance modes was demonstrated in a nanoscale tungsten (W) film using acoustic pulses with various centre frequencies and spectral widths.
Neural plasticity associated with recently versus often heard objects.
Bourquin, Nathalie M-P; Spierer, Lucas; Murray, Micah M; Clarke, Stephanie
2012-09-01
In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and more long-lasting plastic changes occurring predominantly within right hemispheric networks, which are known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures. Copyright © 2012 Elsevier Inc. All rights reserved.
Multistability in auditory stream segregation: a predictive coding view
Winkler, István; Denham, Susan; Mill, Robert; Bőhm, Tamás M.; Bendixen, Alexandra
2012-01-01
Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggest that some, perhaps many of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm. PMID:22371621
Correlation Factors Describing Primary and Spatial Sensations of Sound Fields
NASA Astrophysics Data System (ADS)
ANDO, Y.
2002-11-01
The theory of subjective preference of the sound field in a concert hall is established based on the model of human auditory-brain system. The model consists of the autocorrelation function (ACF) mechanism and the interaural crosscorrelation function (IACF) mechanism for signals arriving at two ear entrances, and the specialization of human cerebral hemispheres. This theory can be developed to describe primary sensations such as pitch or missing fundamental, loudness, timbre and, in addition, duration sensation which is introduced here as a fourth. These four primary sensations may be formulated by the temporal factors extracted from the ACF associated with the left hemisphere and, spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by the spatial factors extracted from the IACF associated with the right hemisphere. Any important subjective responses of sound fields may be described by both temporal and spatial factors.
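The IACF mechanism described above can be sketched as a discrete normalized cross-correlation of the two ear signals, whose peak over a small lag window gives the interaural cross-correlation coefficient (IACC). This is a toy version with illustrative names; real room-acoustics measurements use running integration windows and band-limited signals.

```python
import math

def iacc(left, right, max_lag):
    """Peak of the normalized interaural cross-correlation function
    over integer lags -max_lag..+max_lag (samples). Values near 1
    indicate coherent ear signals (low subjective diffuseness)."""
    norm = math.sqrt(sum(x * x for x in left) * sum(x * x for x in right))
    n = len(left)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        c = sum(left[i] * right[i + lag]
                for i in range(n) if 0 <= i + lag < n)
        best = max(best, abs(c) / norm)
    return best

# Identical ear signals yield IACC = 1 (perfectly coherent).
sig = [math.sin(0.3 * i) for i in range(200)]
coherence = iacc(sig, sig, 10)
```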
Torija, Antonio J; Ruiz, Diego P
2012-10-01
Road traffic has a heavy impact on the urban sound environment, constituting the main source of noise and widely dominating its spectral composition. In this context, our research investigates the use of recorded sound spectra as input data for the development of real-time short-term road traffic flow estimation models. For this, a series of models based on the use of Multilayer Perceptron Neural Networks, multiple linear regression, and the Fisher linear discriminant were implemented to estimate road traffic flow as well as to classify it according to the composition of heavy vehicles and motorcycles/mopeds. In view of the results, the use of the 50-400 Hz and 1-2.5 kHz frequency ranges as input variables in multilayer perceptron-based models successfully estimated urban road traffic flow with an average percentage of explained variance equal to 86%, while the classification of the urban road traffic flow gave an average success rate of 96.1%. Copyright © 2012 Elsevier B.V. All rights reserved.
Optimizing acoustical conditions for speech intelligibility in classrooms
NASA Astrophysics Data System (ADS)
Yang, Wonyoung
High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. 
In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with SNS = 4 dB and increased to 0.8 and 1.2 s with decreased SNS = 0 dB, for both normal and hearing-impaired listeners. Hearing-impaired listeners required more early energy than normal-hearing listeners. Reflective ceiling barriers and ceiling reflectors---in particular, parallel front-back rows of semi-circular reflectors---achieved the goal of decreasing reverberation with the least speech-level reduction.
Greene, Nathaniel T; Anbuhl, Kelsey L; Ferber, Alexander T; DeGuzman, Marisa; Allen, Paul D; Tollin, Daniel J
2018-08-01
Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have surprisingly not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on the prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the "prepulse") along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger changes in speaker swap angle (i.e., greater PPI), indicating that using the speaker "swap" paradigm is sufficient to assess stimulus detection of spatially separated sounds. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate their ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed an increase in startle amplitude (i.e., lower PPI) when the masker was presented at speaker locations near that of the chirp signal, and reduced startle amplitudes (increased PPI) indicating lower detection thresholds when the noise was presented from more distant speaker locations. 
In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements. Copyright © 2018 Elsevier B.V. All rights reserved.
Underwater sound radiation patterns of contemporary merchant ships
NASA Astrophysics Data System (ADS)
Gassmann, M.; Wiggins, S. M.; Hildebrand, J. A.
2016-12-01
Merchant ships radiate underwater sound as an unintended by-product of their operation and as a consequence contribute significantly to low-frequency, man-made noise in the ocean. Current measurement standards for the description of underwater sound from ships (ISO 17208-1:2016 and ANSI S12.64-2009) require nominal hydrophone angles of 15°, 30° and 45° at the starboard and port sides of the test vessel. To opportunistically study the underwater sound of contemporary merchant ships that were tracked by the Automatic Identification System (AIS), an array of seven high-frequency acoustic recording packages (HARPs) with a sampling frequency of 200 kHz was deployed in the Santa Barbara Channel in the primary outgoing shipping lane for the ports of Los Angeles and Long Beach. The vertical and horizontal aperture of the array allowed for starboard and portside measurements at all standard-required nominal hydrophone positions, in addition to measurements taken at the keel aspect. Based on these measurements, frequency-dependent radiation patterns of contemporary merchant ships were estimated and used to evaluate current standards for computing ship source levels.
Reid, Andrew; Marin-Cudraz, Thibaut
2016-01-01
Small animals typically localize sound sources by means of complex internal connections and baffles that effectively increase time or intensity differences between the two ears. However, some miniature acoustic species achieve directional hearing without such devices, indicating that other mechanisms have evolved. Using 3D laser vibrometry to measure tympanum deflection, we show that female lesser waxmoths (Achroia grisella) can orient toward the 100-kHz male song, because each ear functions independently as an asymmetric pressure gradient receiver that responds sharply to high-frequency sound arriving from an azimuth angle 30° contralateral to the animal's midline. We found that females presented with a song stimulus while running on a locomotion compensation sphere follow a trajectory 20°–40° to the left or right of the stimulus heading but not directly toward it, movement consistent with the tympanum deflections and suggestive of a monaural mechanism of auditory tracking. Moreover, females losing their track typically regain it by auditory scanning—sudden, wide deviations in their heading—and females initially facing away from the stimulus quickly change their general heading toward it, orientation indicating superior ability to resolve the front–rear ambiguity in source location. X-ray computer-aided tomography (CT) scans of the moths did not reveal any internal coupling between the two ears, confirming that an acoustic insect can localize a sound source based solely on the distinct features of each ear. PMID:27849607
Blue whales respond to simulated mid-frequency military sonar.
Goldbogen, Jeremy A; Southall, Brandon L; DeRuiter, Stacy L; Calambokidis, John; Friedlaender, Ari S; Hazen, Elliott L; Falcone, Erin A; Schorr, Gregory S; Douglas, Annie; Moretti, David J; Kyburg, Chris; McKenna, Megan F; Tyack, Peter L
2013-08-22
Mid-frequency military (1-10 kHz) sonars have been associated with lethal mass strandings of deep-diving toothed whales, but the effects on endangered baleen whale species are virtually unknown. Here, we used controlled exposure experiments with simulated military sonar and other mid-frequency sounds to measure behavioural responses of tagged blue whales (Balaenoptera musculus) in feeding areas within the Southern California Bight. Despite using source levels orders of magnitude below some operational military systems, our results demonstrate that mid-frequency sound can significantly affect blue whale behaviour, especially during deep feeding modes. When a response occurred, behavioural changes varied widely from cessation of deep feeding to increased swimming speed and directed travel away from the sound source. The variability of these behavioural responses was largely influenced by a complex interaction of behavioural state, the type of mid-frequency sound and received sound level. Sonar-induced disruption of feeding and displacement from high-quality prey patches could have significant and previously undocumented impacts on baleen whale foraging ecology, individual fitness and population health.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-18
... sound waves emanating from the pile, thereby reducing the sound energy. A confined bubble curtain... physically block sound waves and they prevent air bubbles from migrating away from the pile. The literature... acoustic pressure wave propagates out from a source, was estimated as so-called ``practical spreading loss...
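The "practical spreading loss" model named in this snippet is a standard empirical compromise between cylindrical (10 log r) and spherical (20 log r) spreading. A minimal sketch, with an illustrative source level and range that are not taken from this notice:

```python
import math

def practical_spreading_loss(r, r0=1.0):
    """Transmission loss in dB under the 'practical spreading' model,
    TL = 15 * log10(r / r0), with range r and reference range r0 in
    the same units (conventionally metres, r0 = 1 m)."""
    return 15.0 * math.log10(r / r0)

# Illustrative pile-driving example: source level 210 dB re 1 uPa @ 1 m;
# predicted received level at 100 m is 210 - 15*log10(100) = 180 dB.
sl = 210.0
rl = sl - practical_spreading_loss(100.0)
```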
ERIC Educational Resources Information Center
Rossing, Thomas D.
1980-01-01
Described are the components for a high-fidelity sound-reproducing system which focuses on various program sources, the amplifier, and loudspeakers. Discussed in detail are amplifier power and distortion, air suspension, loudspeaker baffles and enclosures, bass-reflex enclosure, drone cones, rear horn and acoustic labyrinth enclosures, horn…
Understanding a Basic Biological Process: Expert and Novice Models of Meiosis.
ERIC Educational Resources Information Center
Kindfield, Ann C. H.
The results of a study of the meiosis models utilized by individuals at varying levels of expertise while reasoning about the process of meiosis are presented. Based on these results, the issues of sources of misconceptions/difficulties and the construction of a sound understanding of meiosis are discussed. Five individuals from each of three…
Auditory enhancement of increments in spectral amplitude stems from more than one source.
Carcagno, Samuele; Semal, Catherine; Demany, Laurent
2012-10-01
A component of a test sound consisting of simultaneous pure tones perceptually "pops out" if the test sound is preceded by a copy of itself with that component attenuated. Although this "enhancement" effect was initially thought to be purely monaural, it is also observable when the test sound and the precursor sound are presented contralaterally (i.e., to opposite ears). In experiment 1, we assessed the magnitude of ipsilateral and contralateral enhancement as a function of the time interval between the precursor and test sounds (10, 100, or 600 ms). The test sound, randomly transposed in frequency from trial to trial, was followed by a probe tone, either matched or mismatched in frequency to the test sound component which was the target of enhancement. Listeners' ability to discriminate matched probes from mismatched probes was taken as an index of enhancement magnitude. The results showed that enhancement decays more rapidly for ipsilateral than for contralateral precursors, suggesting that ipsilateral enhancement and contralateral enhancement stem from at least partly different sources. It could be hypothesized that, in experiment 1, contralateral precursors were effective only because they provided attentional cues about the target tone frequency. In experiment 2, this hypothesis was tested by presenting the probe tone before the precursor sound rather than after the test sound. Although the probe tone was then serving as a frequency cue, contralateral precursors were again found to produce enhancement. This indicates that contralateral enhancement cannot be explained by cuing alone and is a genuine sensory phenomenon.
Experiments to investigate the acoustic properties of sound propagation
NASA Astrophysics Data System (ADS)
Dagdeviren, Omur E.
2018-07-01
Propagation of sound waves is one of the fundamental concepts in physics. Some of the properties of sound propagation, such as the attenuation of sound intensity with increasing distance, are familiar to everybody from the experiences of daily life. However, the frequency dependence of sound propagation and the effect of acoustics in confined environments are not straightforward to estimate. In this article, we propose experiments, which can be conducted in a classroom environment with commonly available devices such as smartphones and laptops, to measure sound intensity level as a function of both the distance between the source and the observer and the frequency of the sound. Our experiments, together with the deviations of the measurements from theoretical calculations, can be used to explain basic concepts of sound propagation and acoustics to a diverse population of students.
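The distance dependence discussed in this abstract can be sketched with the free-field inverse-square law, under which sound pressure level drops 6 dB per doubling of distance. The function name and reference values below are illustrative assumptions, not taken from the article:

```python
import math

def spl_at_distance(spl_ref_db, r_ref_m, r_m):
    """Free-field point source: intensity follows the inverse-square law,
    so SPL drops by 20*log10(r/r_ref) dB, i.e. 6 dB per doubling of distance."""
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)

# 80 dB measured at 1 m predicts about 74 dB at 2 m.
print(round(spl_at_distance(80.0, 1.0, 2.0), 1))  # → 74.0
```

Classroom measurements typically deviate from this idealized law because of reflections and room acoustics, which is precisely the discrepancy the proposed experiments exploit.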
Neo, Y Y; Hubert, J; Bolle, L; Winter, H V; Ten Cate, C; Slabbekoorn, H
2016-07-01
Underwater sound from human activities may affect fish behaviour negatively and threaten the stability of fish stocks. However, some fundamental understanding is still lacking for adequate impact assessments and potential mitigation strategies. For example, little is known about the potential contribution of the temporal features of sound, the efficacy of ramp-up procedures, and the generalisability of results from indoor studies to the outdoors. Using a semi-natural set-up, we exposed European seabass in an outdoor pen to four treatments: 1) continuous sound, 2) intermittent sound with a regular repetition interval, 3) intermittent sound with irregular repetition intervals, and 4) intermittent sound with a regular repetition interval and an amplitude 'ramp-up'. Upon sound exposure, the fish increased swimming speed and depth and swam away from the sound source. The behavioural readouts were generally consistent with earlier indoor experiments, but the changes and recovery were more variable and were not significantly influenced by sound intermittency or interval regularity. In addition, the 'ramp-up' procedure elicited an immediate diving response, similar to the onset of treatment without a 'ramp-up', but the fish did not swim away from the sound source as expected. Our findings suggest that while sound impact studies outdoors increase ecological and behavioural validity, the inherently higher variability also reduces resolution, which may be counteracted by increasing sample size or examining different individual coping styles. Our results also question the efficacy of 'ramp-up' in deterring marine animals, which warrants more investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Choi, Yura; Park, Jeong-Eun; Jeong, Jong Seob; Park, Jung-Keug; Kim, Jongpil; Jeon, Songhee
2016-10-01
Mesenchymal stem cells (MSCs) have shown considerable promise as an adaptable cell source for use in tissue engineering and other therapeutic applications. The aims of this study were to develop methods to test the hypothesis that human MSCs could be differentiated using sound wave stimulation alone and to identify the underlying mechanism. Human bone marrow (hBM)-MSCs were stimulated with sound waves (1 kHz, 81 dB) for 7 days and the expression of neural markers was analyzed. Sound waves induced neural differentiation of hBM-MSCs at 1 kHz and 81 dB but not at 1 kHz and 100 dB. To determine the signaling pathways involved in the neural differentiation of hBM-MSCs by sound wave stimulation, we examined Pyk2 and CREB phosphorylation. Sound wave stimulation induced an increase in the phosphorylation of Pyk2 and CREB at 45 min and 90 min, respectively, in hBM-MSCs. To identify the upstream activator of Pyk2, we examined the intracellular calcium release triggered by sound wave stimulation. When ryanodine was used as a ryanodine receptor antagonist, sound wave-induced calcium release was suppressed. Moreover, pre-treatment with a Pyk2 inhibitor, PF431396, prevented the phosphorylation of Pyk2 and suppressed sound wave-induced neural differentiation in hBM-MSCs. These results suggest that specific sound wave stimulation could be used as a neural differentiation inducer for hBM-MSCs.
NASA Astrophysics Data System (ADS)
Li, Jingxiang; Zhao, Shengdun; Ishihara, Kunihiko
2013-05-01
A novel approach is presented to study the acoustical properties of sintered bronze material, used in particular to suppress the transient noise generated by the pneumatic exhaust of pneumatic friction clutch and brake (PFC/B) systems. The transient exhaust noise is impulsive and harmful because of its large sound pressure level (SPL) and high-frequency content. In this paper, the exhaust noise is related to the transient impulsive exhaust, which is described by a one-dimensional aerodynamic model combined with a pressure-drop expression based on the Ergun equation. A relation between the flow parameters and the sound source is established. Additionally, a piston acoustic source approximation of the sintered bronze silencer with cylindrical geometry is presented to predict the SPL spectrum at a far-field observation point. A semi-phenomenological model is introduced to analyze sound propagation and reduction in the sintered bronze material, assumed to be an equivalent fluid with a rigid frame. Experimental results under different initial cylinder pressures are shown to corroborate the validity of the proposed aerodynamic model. In addition, the sound pressures calculated from the equivalent sound source are compared with the measured noise signals in both the time domain and the frequency domain. The influence of the porosity of the sintered bronze material is also discussed.
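The Ergun equation mentioned above has a standard form combining a viscous (Blake-Kozeny) and an inertial (Burke-Plummer) term. The sketch below illustrates that form; the function name and the default fluid properties (air at room conditions) are our own assumptions, not values from the paper:

```python
def ergun_pressure_gradient(u, eps, d_p, mu=1.8e-5, rho=1.2):
    """Ergun equation: pressure drop per unit length [Pa/m] across a packed
    porous bed, for superficial velocity u [m/s], porosity eps, and particle
    diameter d_p [m]. Defaults approximate air at room conditions."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial
```

The quadratic inertial term dominates at the high exhaust velocities of a PFC/B blow-down, which is why the pressure drop grows faster than linearly with flow speed.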
Evidence of Cnidarians sensitivity to sound after exposure to low frequency noise underwater sources
NASA Astrophysics Data System (ADS)
Solé, Marta; Lenoir, Marc; Fontuño, José Manuel; Durfort, Mercè; van der Schaar, Mike; André, Michel
2016-12-01
Jellyfishes represent a group of species that play an important role in the oceans, particularly as a food source for different taxa and as predators of fish larvae and planktonic prey. The massive introduction of artificial sound sources into the oceans has become a concern to science and society. While we are only beginning to understand that non-hearing specialists like cephalopods can be affected by anthropogenic noise, and regulation is underway to measure noise levels in European waters, we do not yet know whether the impact of sound extends to other, lower-level taxa of the food web. Here we exposed two species of Mediterranean scyphozoan medusae, Cotylorhiza tuberculata and Rhizostoma pulmo, to a sweep of low-frequency sounds. Scanning electron microscopy (SEM) revealed injuries in the statocyst sensory epithelium of both species after exposure to sound that are consistent with the massive acoustic trauma observed in other species. The presence of acoustic trauma in marine species that are not hearing specialists, like medusae, shows the magnitude of the problem of noise pollution and the complexity of determining threshold values that would help build up regulation to prevent permanent damage to ecosystems.
Bevans, Dieter A; Buckingham, Michael J
2017-10-01
The frequency bandwidth of the sound from a light helicopter, such as a Robinson R44, extends from about 13 Hz to 2.5 kHz. As such, the R44 has potential as a low-frequency sound source in underwater acoustics applications. To explore this idea, an experiment was conducted in shallow water off the coast of southern California in which a horizontal line of hydrophones detected the sound of an R44 hovering in an end-fire position relative to the array. Some of the helicopter sound interacted with the seabed to excite the head wave in the water column. A theoretical analysis of the sound field in the water column generated by a stationary airborne source leads to an expression for the two-point horizontal coherence function of the head wave, which, apart from frequency, depends only on the sensor separation and the sediment sound speed. By matching the zero crossings of the measured and theoretical horizontal coherence functions, the sound speed in the sediment was recovered and found to take a value of 1682.42 ± 16.20 m/s. This is consistent with the sediment type at the experiment site, which is known from a previous survey to be a fine to very-fine sand.
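As an illustration of how a sound speed can be read off coherence zero crossings, suppose for simplicity a sinc-form coherence Γ(d) = sinc(2πfd/c), whose zeros fall at d_n = n·c/(2f). This model and the function below are an assumption for illustration only, not the paper's actual coherence expression:

```python
def sediment_speed_from_zero_crossing(f_hz, d1_m):
    """Under an assumed sinc-form coherence Gamma(d) = sinc(2*pi*f*d/c),
    zeros occur at d_n = n*c/(2*f), so the first zero-crossing separation
    d1 at frequency f gives the sound-speed estimate c = 2*f*d1."""
    return 2.0 * f_hz * d1_m

# e.g. a first zero crossing at 8.4 m sensor separation for a 100 Hz
# component implies c ≈ 1680 m/s under this illustrative model.
```

The attraction of a zero-crossing match is that it needs only the positions of the nulls, not absolute coherence magnitudes, making it robust to calibration errors.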
Variable-Depth Liner Evaluation Using Two NASA Flow Ducts
NASA Technical Reports Server (NTRS)
Jones, M. G.; Nark, D. M.; Watson, W. R.; Howerton, B. M.
2017-01-01
Four liners are investigated experimentally via tests in the NASA Langley Grazing Flow Impedance Tube. These include an axially-segmented liner and three liners that use reordering of the chambers. Chamber reordering is shown to have a strong effect on the axial sound pressure level profiles, but a limited effect on the overall attenuation. It is also shown that bent chambers can be used to reduce the liner depth with minimal effects on the attenuation. A numerical study is also conducted to explore the effects of a planar and three higher-order mode sources based on the NASA Langley Curved Duct Test Rig geometry. A four-segment liner is designed using the NASA Langley CDL code with a Python-based optimizer. Five additional liner designs, four with rearrangements of the first liner segments and one with a redistribution of the individual chambers, are evaluated for each of the four sources. The liner configuration affects the sound pressure level profile much more than the attenuation spectra for the planar and first two higher-order mode sources, but has a much larger effect on the SPL profiles and attenuation spectra for the last higher-order mode source. Overall, axially variable-depth liners offer the potential to provide improved fan noise reduction, regardless of whether the axially variable depths are achieved via a distributed array of chambers (depths vary from chamber to chamber) or a group of zones (groups of chambers for which the depth is constant).
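The link between a liner chamber's depth and the frequency it attenuates most strongly can be sketched with the standard quarter-wave resonance relation for a closed chamber; the helper below is an illustrative sketch, not part of the NASA study:

```python
def chamber_resonance_hz(depth_m, c=343.0):
    """Quarter-wave resonance of a closed liner chamber: absorption peaks
    near f = c / (4 * depth), so deeper chambers target lower frequencies.
    c defaults to the speed of sound in air at room temperature [m/s]."""
    return c / (4.0 * depth_m)
```

This is why distributing a range of chamber depths along the liner (whether chamber by chamber or in constant-depth zones) broadens the attenuation bandwidth: each depth tunes a different part of the fan-noise spectrum.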