Source and listener directivity for interactive wave-based sound propagation.
Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh
2014-04-01
We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listeners in complex scenarios. We also present results from our preliminary user study.
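The runtime mixing step this abstract describes, forming the total field as a weighted sum of precomputed per-SH-source fields, can be sketched in a few lines. This is a minimal illustration under assumed data layouts (all names and values below are hypothetical, not the authors' code):

```python
# Sketch of the runtime step: the total pressure at the listener is a weighted
# sum of precomputed sound fields, one per elementary spherical-harmonic (SH)
# source. The weights come from the SH decomposition of the instantaneous
# source directivity. Values are illustrative placeholders.

def total_pressure(sh_weights, precomputed_fields):
    """Combine per-SH-source complex pressures using the SH weights."""
    assert len(sh_weights) == len(precomputed_fields)
    return sum(w * p for w, p in zip(sh_weights, precomputed_fields))

# Example: two SH channels, complex pressures at one listener position.
weights = [1.0, 0.5]                               # SH decomposition of directivity
fields = [complex(0.2, 0.1), complex(-0.4, 0.3)]   # precomputed per-SH fields
p = total_pressure(weights, fields)
```

Because the propagation operator is linear, only the weights change when the source directivity changes, so the precomputed fields can be reused every frame.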
Directional Hearing and Sound Source Localization in Fishes.
Sisneros, Joseph A; Rogers, Peter H
2016-01-01
Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization in fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.
A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene
Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling
2016-01-01
A flexible sound source is essential in a whole flexible system. It’s hard to integrate a conventional sound source based on a piezoelectric part into a whole flexible system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applied in consumer electronics, multi-media systems, and ultrasonic detection and imaging. PMID:28335239
Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)
NASA Astrophysics Data System (ADS)
Rollo, Audrey K.; Higgs, Dennis M.
2005-04-01
A fundamental role of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization in fishes remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100-Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed, as well as mean path angle to the speaker, will also be presented. Results suggest strong localization of the round goby to a sound source, with some differential sound specificity.
Evolutionary trends in directional hearing
Carr, Catherine E.; Christensen-Dalsgaard, Jakob
2016-01-01
Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850
Effects of sound source directivity on auralizations
NASA Astrophysics Data System (ADS)
Sheets, Nathan W.; Wang, Lily M.
2002-05-01
Auralization, the process of rendering audible the sound field in a simulated space, is a useful tool in the design of acoustically sensitive spaces. The auralization depends on the calculation of an impulse response between a source and a receiver, each of which has certain directional behavior. Many auralizations created to date have used omnidirectional sources; the effects of source directivity on auralizations are a relatively unexplored area. To examine if and how the directivity of a sound source affects the acoustical results obtained from a room, we used directivity data for three sources in a room acoustic modeling program called Odeon. The three sources are violin, piano, and human voice. The results from using directional data are compared to those obtained using omnidirectional source behavior, both through objective measure calculations and subjective listening tests.
Converting a Monopole Emission into a Dipole Using a Subwavelength Structure
NASA Astrophysics Data System (ADS)
Fan, Xu-Dong; Zhu, Yi-Fan; Liang, Bin; Cheng, Jian-chun; Zhang, Likun
2018-03-01
High-efficiency emission of multipoles is unachievable by a source much smaller than the wavelength, preventing compact acoustic devices from generating directional sound beams. Here, we present a primary scheme towards solving this problem by numerically and experimentally enclosing a monopole sound source in a structure with a dimension of around 1/10 of the sound wavelength to emit a dipolar field. The radiated sound power is found to be more than twice that of a bare dipole. Our study of efficient emission of directional low-frequency sound from a monopole source in a subwavelength space may have applications such as focused ultrasound for imaging, directional underwater sound beams, miniaturized sonar, etc.
Development of a directivity-controlled piezoelectric transducer for sound reproduction
NASA Astrophysics Data System (ADS)
Bédard, Magella; Berry, Alain
2008-04-01
Present sound reproduction systems do not attempt to simulate the spatial radiation of musical instruments, or sound sources in general, even though the spatial directivity has a strong impact on the psychoacoustic experience. A transducer consisting of 4 piezoelectric elemental sources made from curved PVDF films is used to generate a target directivity pattern in the horizontal plane, in the frequency range of 5-20 kHz. The vibratory and acoustical response of an elemental source is addressed, both theoretically and experimentally. Two approaches to synthesize the input signals to apply to each elemental source are developed in order to create a prescribed, frequency-dependent acoustic directivity. The circumferential Fourier decomposition of the target directivity provides a compromise between the magnitude and the phase reconstruction, whereas the minimization of a quadratic error criterion provides a best magnitude reconstruction. This transducer can improve sound reproduction by introducing the spatial radiation aspect of the original source at high frequency.
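The circumferential Fourier decomposition mentioned in this abstract can be sketched generically: a horizontal-plane directivity pattern is expanded into circumferential harmonics, whose coefficients then drive the elemental sources. The sketch below is a textbook illustration under assumed sampling, not the authors' implementation:

```python
import cmath
import math

# Circumferential Fourier decomposition of a horizontal-plane directivity
# D(phi), sampled uniformly around the circle. Coefficient c_m is the
# amplitude of the harmonic exp(i*m*phi). Names and setup are illustrative.

def fourier_coeffs(samples, max_order):
    """Circumferential Fourier coefficients c_m of D(phi) from uniform samples."""
    n = len(samples)
    return {m: sum(samples[i] * cmath.exp(-1j * m * (2 * math.pi * i / n))
                   for i in range(n)) / n
            for m in range(-max_order, max_order + 1)}

# Example: a cardioid-like pattern D(phi) = 1 + cos(phi) has c_0 = 1 and
# c_{+1} = c_{-1} = 0.5, all higher orders zero.
n = 32
pattern = [1 + math.cos(2 * math.pi * i / n) for i in range(n)]
coeffs = fourier_coeffs(pattern, 2)
```

Truncating this expansion at a finite order is what produces the magnitude/phase compromise the abstract contrasts with direct quadratic-error minimization.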
NASA Astrophysics Data System (ADS)
Chen, Huaiyu; Cao, Li
2017-06-01
To investigate multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of an ordinary auditory-filtering-based broadband MUSIC method, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering, frequency-component selection control, and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest in the multichannel bandpass-filter stage. Detecting the direct-sound component of the source to suppress room-reverberation interference is also proposed; its merits are fast computation and avoidance of more complex de-reverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. In both simulations and experiments in a real reverberant room, the proposed method performs well. Dynamic multiple sound source localization experiments indicate that the average absolute azimuth error of the proposed algorithm is smaller and its histogram shows higher angular resolution.
NASA Technical Reports Server (NTRS)
Rentz, P. E.
1976-01-01
Experimental evaluations of the acoustical characteristics and source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated or hardwall configuration were performed. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of source direct-field acoustic radiation were found to be limited by the test section reverberant characteristics to 3.0 feet for omnidirectional and highly directional sources. Wind-on noise measurements in the test section, settling chamber and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.
Relation of sound intensity and accuracy of localization.
Farrimond, T
1989-08-01
Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2002-11-01
It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty identifying the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker among the localized multiple sound sources. In this paper, we particularly focus on talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately classified as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
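The CSP (cross-power spectrum phase) coefficient at the heart of this kind of DOA estimation can be sketched generically: whiten the cross-spectrum of two microphone channels to keep phase only, inverse-transform it, and read the inter-microphone delay off the peak lag. The stdlib-only sketch below illustrates the standard phase-transform idea, not the authors' implementation; all names are illustrative:

```python
import cmath

# CSP / GCC-PHAT sketch: the phase-only cross-spectrum of two channels,
# inverse-transformed, peaks at the relative time delay between them.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def csp_delay(x1, x2):
    """Return the lag (in samples) by which x2 trails x1, from the CSP peak."""
    X1, X2 = dft(x1), dft(x2)
    cross = [a.conjugate() * b for a, b in zip(X1, X2)]
    phat = [c / abs(c) if abs(c) > 1e-12 else 0j for c in cross]  # phase whitening
    csp = [v.real for v in idft(phat)]
    n = len(csp)
    lag = max(range(n), key=lambda i: csp[i])
    return lag if lag <= n // 2 else lag - n   # wrap to a signed lag

# Example: an impulse reaching mic 2 three samples after mic 1.
x1 = [0.0] * 16
x1[2] = 1.0
x2 = [0.0] * 16
x2[5] = 1.0
delay = csp_delay(x1, x2)
```

The "coefficient addition" step in the abstract then sums such CSP sequences over microphone pairs so that peaks from the same source reinforce one another.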
Broad band sound from wind turbine generators
NASA Technical Reports Server (NTRS)
Hubbard, H. H.; Shepherd, K. P.; Grosveld, F. W.
1981-01-01
Brief descriptions are given of the various types of large wind turbines and their sound characteristics. Candidate sources of broadband sound are identified and are rank ordered for a large upwind configuration wind turbine generator for which data are available. The rotor is noted to be the main source of broadband sound which arises from inflow turbulence and from the interactions of the turbulent boundary layer on the blade with its trailing edge. Sound is radiated about equally in all directions but the refraction effects of the wind produce an elongated contour pattern in the downwind direction.
The directivity of the sound radiation from panels and openings.
Davy, John L
2009-06-01
This paper presents a method for calculating the directivity of the radiation of sound from a panel or opening, whose vibration is forced by the incidence of sound from the other side. The directivity of the radiation depends on the angular distribution of the incident sound energy in the room or duct in whose wall or end the panel or opening occurs. The angular distribution of the incident sound energy is predicted using a model which depends on the sound absorption coefficient of the room or duct surfaces. If the sound source is situated in the room or duct, the sound absorption coefficient model is used in conjunction with a model for the directivity of the sound source. For angles of radiation approaching 90 degrees to the normal to the panel or opening, the effect of the diffraction by the panel or opening, or by the finite baffle in which the panel or opening is mounted, is included. A simple empirical model is developed to predict the diffraction of sound into the shadow zone when the angle of radiation is greater than 90 degrees to the normal to the panel or opening. The method is compared with published experimental results.
Calculating far-field radiated sound pressure levels from NASTRAN output
NASA Technical Reports Server (NTRS)
Lipman, R. R.
1986-01-01
FAFRAP is a computer program which calculates far-field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN, or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.
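As a point of reference for the monopole/dipole comparison mentioned above, the far-field directivities of the two elementary sources are textbook patterns: a monopole radiates uniformly, while a dipole's pressure magnitude varies as |cos θ| about its axis. The sketch below shows these reference patterns only; it is not FAFRAP or NASTRAN code:

```python
import math

# Far-field directivity of the two elementary sources used to validate the
# program: monopole (omnidirectional) and dipole (figure-eight, |cos theta|).

def monopole_directivity(theta):
    return 1.0                      # uniform in all directions

def dipole_directivity(theta):
    return abs(math.cos(theta))     # maxima on axis, nulls at 90 degrees

on_axis = dipole_directivity(0.0)
broadside = dipole_directivity(math.pi / 2)
```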
Efficient techniques for wave-based sound propagation in interactive applications
NASA Astrophysics Data System (ADS)
Mehra, Ravish
Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. 
This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating, or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to the parallel processing capabilities of graphics processors, significant improvement in performance can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques against offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in virtual environments. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
Noise Source Identification in a Reverberant Field Using Spherical Beamforming
NASA Astrophysics Data System (ADS)
Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang
Identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of sound coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to an exterior sound field, these reflections are added to the sound field. Therefore, the source location estimated by conventional methods may have unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.
2007-01-01
Atmospheric deposition directly to Puget Sound was an important source of PAHs, polybrominated diphenyl ethers (PBDEs), and heavy metals. A semi-quantitative apportionment study permitted a first-order characterization of sources.
What the Toadfish Ear Tells the Toadfish Brain About Sound.
Edds-Walton, Peggy L
2016-01-01
Of the three paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.
Numerical Models for Sound Propagation in Long Spaces
NASA Astrophysics Data System (ADS)
Lai, Chenly Yuen Cheung
Both reverberation time and steady-state sound field are key elements for assessing the acoustic condition in an enclosed space. They affect noise propagation, speech intelligibility, clarity index, and definition. Since the sound field in a long space is non-diffuse, classical room acoustics theory does not apply in this situation. The ray-tracing technique and the image-source method are two common models for estimating both reverberation time and steady-state sound field in long enclosures nowadays. Although both models can give accurate estimates of reverberation times and steady-state sound fields, directly or indirectly, they often involve time-consuming calculations. In order to simplify the acoustic analysis, a theoretical formulation has been developed for predicting both steady-state sound fields and reverberation times in street canyons. The prediction model is further developed to predict the steady-state sound field in a long enclosure. Apart from the straight long enclosure, there are other variations such as a cross junction, a long enclosure with a T-intersection, and a U-turn long enclosure. In the present study, theoretical and experimental investigations were conducted to develop formulae for predicting reverberation times and steady-state sound fields in a junction of a street canyon and in a long enclosure with a T-intersection. The theoretical models are validated by comparing the numerical predictions with published experimental results. The theoretical results are also compared with precise indoor measurements and large-scale outdoor experimental results. Most previous acoustical studies of long enclosures have focused on monopole sound sources. Besides non-directional noise sources, many noise sources in long enclosures are dipole-like, such as train noise and fan noise. In order to study the characteristics of directional noise sources, a review of available dipole sources was conducted.
A dipole was constructed and subsequently used for experimental studies. In addition, a theoretical model was developed for predicting dipole sound fields. The theoretical model can be used to study the effect of a dipole source on speech intelligibility in long enclosures.
Andreeva, I G; Vartanian, I A
2012-01-01
The ability to evaluate the direction of amplitude change of sound stimuli was studied in adults and in 11-12- and 15-16-year-old teenagers. Stimuli consisting of sequences of fragments of a 1-kHz tone with time-varying amplitude were used as models of approaching and receding sound sources. The 11-12-year-old teenagers made a significantly higher number of errors when estimating the direction of amplitude change than the other two groups, including in repeated experiments. The structure of errors, i.e., the ratio of errors in judging stimuli of increasing versus decreasing amplitude, also differed between teenagers and adults. The possible effect of nonspecific activation of the cerebral cortex in teenagers on decision processes concerning complex sound stimuli, including the estimation of approach and withdrawal of a sound source, is discussed.
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical-systems (MEMS) microphones, an inertial measurement unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass's hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m² open field. PMID:24463431
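Directionality estimation with a microphone array of this kind is commonly based on delay-and-sum beamforming: align each channel by the delay expected for a candidate direction and pick the hypothesis with the highest summed power. The minimal integer-delay sketch below illustrates that principle only; it is not the SoundCompass firmware, and all names and signals are made up:

```python
# Delay-and-sum beamforming sketch: for each steering hypothesis, shift the
# channels so a wavefront from that direction lines up, then measure the
# energy of the sum. The correct hypothesis maximizes the energy.

def delay_and_sum_power(signals, delays):
    """Sum integer-sample-shifted channels and return the summed energy."""
    n = len(signals[0])
    out = [0.0] * n
    for sig, d in zip(signals, delays):
        for t in range(n):
            if 0 <= t - d < n:
                out[t] += sig[t - d]
    return sum(v * v for v in out)

# Two microphones; the same wavefront arrives 2 samples later at mic 1.
signals = [[0.0] * 12, [0.0] * 12]
signals[0][4] = 1.0
signals[1][6] = 1.0

# Steering hypotheses: relative delay of mic 1, in samples. Advancing mic 1
# by the true delay (d = 2) aligns the impulses and doubles the peak.
candidates = [0, 1, 2, 3]
powers = {d: delay_and_sum_power(signals, [0, -d]) for d in candidates}
best = max(powers, key=powers.get)
```

Scanning such hypotheses over a grid of azimuths yields the sound-field directionality map that the sensor reports.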
The effect of brain lesions on sound localization in complex acoustic environments.
Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg
2014-05-01
Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.
Independence of Echo-Threshold and Echo-Delay in the Barn Owl
Nelson, Brian S.; Takahashi, Terry T.
2008-01-01
Despite their prevalence in nature, echoes are not perceived as events separate from the sounds arriving directly from an active source, until the echo's delay is long. We measured the head-saccades of barn owls and the responses of neurons in their auditory space-maps while presenting a long duration noise-burst and a simulated echo. Under this paradigm, there were two possible stimulus segments that could potentially signal the location of the echo. One was at the onset of the echo; the other, after the offset of the direct (leading) sound, when only the echo was present. By lengthening the echo's duration, independently of its delay, spikes and saccades were evoked by the source of the echo even at delays that normally evoked saccades to only the direct source. An echo's location thus appears to be signaled by the neural response evoked after the offset of the direct sound. PMID:18974886
A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea
Lee, Norman; Elias, Damian O.; Mason, Andrew C.
2009-01-01
Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794
3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment
NASA Astrophysics Data System (ADS)
Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil
In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant ambience. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception capability. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. The tests show that the perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound
NASA Technical Reports Server (NTRS)
Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)
1996-01-01
The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source, while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic-analogy-based calculation that regards the surface shear as an acoustically compact dipole source of sound.
Active control of noise on the source side of a partition to increase its sound isolation
NASA Astrophysics Data System (ADS)
Tarabini, Marco; Roure, Alain; Pinhede, Cedric
2009-03-01
This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method's efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performance and stability were tested with different array configurations, loudspeaker directivities, and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with a factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.
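The filtered-x LMS (FXLMS) algorithm named in this abstract can be illustrated by a much-reduced single-channel sketch. Everything below is an illustrative assumption, not the paper's implementation: the secondary path is assumed known, and the filter length and step size mu are arbitrary. In the decentralized multichannel case, each loudspeaker-microphone pair would run such a loop using only its local error signal.

```python
def fxlms_cancel(x, d, sec_path, n_taps, mu):
    """Single-channel filtered-x LMS loop: adapt an FIR controller so
    that its output, after passing through a known FIR secondary path,
    cancels the disturbance d. Returns the residual error signal."""
    w = [0.0] * n_taps             # controller coefficients
    xbuf = [0.0] * n_taps          # reference-signal history
    fxbuf = [0.0] * n_taps         # filtered-reference history
    spbuf = [0.0] * len(sec_path)  # reference history for sec. path
    ybuf = [0.0] * len(sec_path)   # controller-output history
    errors = []
    for xn, dn in zip(x, d):
        xbuf = [xn] + xbuf[:-1]
        # reference filtered through the secondary-path model
        spbuf = [xn] + spbuf[:-1]
        fxn = sum(s * v for s, v in zip(sec_path, spbuf))
        fxbuf = [fxn] + fxbuf[:-1]
        # controller output, propagated through the secondary path
        yn = sum(wi * xi for wi, xi in zip(w, xbuf))
        ybuf = [yn] + ybuf[:-1]
        en = dn - sum(s * v for s, v in zip(sec_path, ybuf))
        # gradient-descent update on the squared error
        w = [wi + 2.0 * mu * en * fi for wi, fi in zip(w, fxbuf)]
        errors.append(en)
    return errors
```

With a unit secondary path this reduces to plain LMS, and the residual error decays geometrically toward zero for a persistent disturbance.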
Sound source localization and segregation with internally coupled ears: the treefrog model
Christensen-Dalsgaard, Jakob
2016-01-01
Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384
Two dimensional sound field reproduction using higher order sources to exploit room reflections.
Betlehem, Terence; Poletti, Mark A
2014-04-01
In this paper, sound field reproduction is performed in a reverberant room using higher order sources (HOSs) and a calibrating microphone array. Previously, sound fields were reproduced with fixed-directivity sources, with the reverberation compensated for using digital filters. By virtue of their directive properties, however, HOSs may be driven not only to avoid the creation of excess reverberation but also to use room reflections to contribute constructively to the desired sound field. The manner by which the loudspeakers steer the sound around the room is determined by measuring the acoustic transfer functions. The requirements on the number and order N of HOSs for accurate reproduction in a reverberant room are derived, showing a (2N + 1)-fold decrease in the number of loudspeakers in comparison to using monopole sources. HOSs are shown to be applicable to rooms with a rich variety of wall reflections, while in an anechoic room their advantages may be lost. Performance is investigated in a room using extensions of both the diffuse field model and a more rigorous image-source simulation method, which account for the properties of the HOSs. The robustness of the proposed method is validated by introducing measurement errors.
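A rough back-of-envelope illustration of the (2N + 1)-fold saving: for 2-D reproduction over a circle of radius R, a standard truncation rule of thumb (not taken from this paper, which derives the exact requirement) keeps about 2M + 1 circular harmonics with M ≈ ⌈kR⌉, so a monopole array needs about that many loudspeakers.

```python
import math

def monopoles_needed(freq_hz, radius_m, c=343.0):
    """Rule-of-thumb loudspeaker count for 2-D reproduction over a
    circle of radius R: about 2M + 1 units, with M = ceil(kR)."""
    k = 2.0 * math.pi * freq_hz / c
    return 2 * math.ceil(k * radius_m) + 1

def hos_needed(freq_hz, radius_m, order_n, c=343.0):
    """With order-N higher order sources, the (2N + 1)-fold saving
    reported in the abstract reduces that count accordingly."""
    return math.ceil(monopoles_needed(freq_hz, radius_m) / (2 * order_n + 1))
```

At 1 kHz over a 1 m radius, for example, the rule gives 39 monopoles, whereas third-order HOSs would bring the count down to about 6.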
Sound. Physical Science in Action[TM]. Schlessinger Science Library. [Videotape].
ERIC Educational Resources Information Center
2000
A door closes. A horn beeps. A crowd roars. Sound waves travel outward in all directions from the source. They can all be heard, but how? Did they travel directly to the ears? Perhaps they bounced off another object first or traveled through a different medium, changing speed along the way. Students learn how sound waves travel and about their…
Sound source tracking device for telematic spatial sound field reproduction
NASA Astrophysics Data System (ADS)
Cardenas, Bruno
This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, recorded at close distance with lavalier microphones, at spatially correct positions using a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.
Egocentric and allocentric representations in auditory cortex
Brimijoin, W. Owen; Bizley, Jennifer K.
2017-01-01
A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
Development of a directivity controlled piezoelectric transducer for sound reproduction
NASA Astrophysics Data System (ADS)
Bédard, Magella; Berry, Alain
2005-04-01
One of the inherent limitations of loudspeaker systems in audio reproduction is their inability to reproduce the possibly complex acoustic directivity patterns of real sound sources. For music reproduction, for example, it may be desirable to separate diffuse-field and direct sound components and project them with different directivity patterns. Because of their properties, poly(vinylidene fluoride) (PVDF) films offer many advantages for the development of electroacoustic transducers. A system of piezoelectric transducers made with PVDF that shows controllable directivity was developed. A cylindrical omnidirectional piezoelectric transducer is used to produce an ambient field, and a piezoelectric transducer system, consisting of a series of curved sources placed around a cylinder frame, is used to produce a sound field with a given directivity. To develop the system, a numerical model was generated with ANSYS Multiphysics 8.1 and used to calculate the mechanical response of the piezoelectric transducer. The acoustic radiation of the driver was then computed using the Kirchhoff-Helmholtz theorem. Numerical and experimental results of the mechanical and acoustical response of the system will be shown.
NASA Technical Reports Server (NTRS)
Jhabvala, M.; Lin, H. C.
1989-01-01
Hearing-aid device indicates visually whether sound is coming from left, right, back, or front. Device intended to assist individuals who are deaf in at least one ear and unable to discern naturally directions to sources of sound. Device promotes safety in street traffic, on loading docks, and in presence of sirens, alarms, and other warning sounds. Quadraphonic version of device built into pair of eyeglasses and binaural version built into visor.
NASA Technical Reports Server (NTRS)
Lehnert, H.; Blauert, Jens; Pompetzki, W.
1991-01-01
In everyday listening, the auditory event perceived by a listener is determined not only by the sound signal that a source emits but also by a variety of environmental parameters. These parameters are the position, orientation, and directional characteristics of the sound source; the listener's position and orientation; the geometrical and acoustical properties of surfaces that affect the sound field; and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment, taking into account all the parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each of the virtual sources emits a signal that is correlated with, but not necessarily identical to, the signal emitted by the direct sound source. If source and receiver are not moving, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
Techniques and instrumentation for the measurement of transient sound energy flux
NASA Astrophysics Data System (ADS)
Watkinson, P. S.; Fahy, F. J.
1983-12-01
The evaluation of sound intensity distributions, and sound powers, of essentially continuous sources such as automotive engines, electric motors, production-line machinery, furnaces, earth-moving machinery, and various types of process plant was studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers, and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install such machines in quiet, anechoic environments. The potential for sound intensity measurement to provide a means of overcoming these difficulties has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.
NASA Astrophysics Data System (ADS)
Shinn-Cunningham, Barbara
2003-04-01
One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, ``virtual reality'' approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
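Among the direction cues this tutorial covers, the interaural time difference has a classic closed form: Woodworth's spherical-head approximation (a textbook formula, not taken from this abstract; the head radius below is a typical assumed value).

```python
import math

def woodworth_itd(azimuth_rad, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference (in seconds) for a distant source at a given
    azimuth: ITD = (a/c) * (theta + sin(theta))."""
    return (head_radius_m / c) * (azimuth_rad + math.sin(azimuth_rad))
```

The formula predicts an ITD of zero straight ahead and a maximum of roughly 650 microseconds for a source directly to the side, consistent with the commonly cited range for human listeners.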
Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand
2009-01-01
In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. PMID:19376072
Je, Yub; Lee, Haksue; Park, Jongkyu; Moon, Wonkyu
2010-06-01
An ultrasonic radiator is developed to generate a difference frequency sound from two frequencies of ultrasound in air with a parametric array. A design method is proposed for an ultrasonic radiator capable of generating highly directive, high-amplitude ultrasonic sound beams at two different frequencies in air based on a modification of the stepped-plate ultrasonic radiator. The stepped-plate ultrasonic radiator was introduced by Gallego-Juarez et al. [Ultrasonics 16, 267-271 (1978)] in their previous study and can effectively generate highly directive, large-amplitude ultrasonic sounds in air, but only at a single frequency. Because parametric array sources must be able to generate sounds at more than one frequency, a design modification is crucial to the application of a stepped-plate ultrasonic radiator as a parametric array source in air. The aforementioned method was employed to design a parametric radiator for use in air. A prototype of this design was constructed and tested to determine whether it could successfully generate a difference frequency sound with a parametric array. The results confirmed that the proposed single small-area transducer was suitable as a parametric radiator in air.
Beranek, Leo
2011-05-01
The parameter "strength of sound," G, is closely related to loudness. Its magnitude depends inversely on the total sound absorption in a room. By comparison, the reverberation time (RT) is both inversely related to the total sound absorption in a hall and directly related to its cubic volume. Hence, G and RT in combination are vital in planning the acoustics of a concert hall. A newly proposed "Bass Index" is related to the loudness of the bass sound and equals the value of G at 125 Hz in decibels minus its value at mid-frequencies. Listener envelopment (LEV) is shown, for most halls, to be directly related to the mid-frequency value of G. The broadening of sound, i.e., apparent source width (ASW), is given by the degree of source broadening (DSB), which is determined from the combined effect of early lateral reflections, as measured by the binaural quality index (BQI), and strength G. The optimum values and limits of these parameters are discussed.
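Two of the quantities discussed here lend themselves to one-line calculations. The Sabine formula below is the textbook RT relation (it is not derived in this abstract, but it captures the stated dependence on volume and absorption); the bass index follows the definition given above.

```python
def sabine_rt(volume_m3, absorption_m2):
    """Sabine's formula RT = 0.161 V / A: reverberation time grows
    with cubic volume and falls with total absorption (in metric
    sabins), as the abstract notes for the G-RT relationship."""
    return 0.161 * volume_m3 / absorption_m2

def bass_index(g_125_db, g_mid_db):
    """The proposed Bass Index: strength G at 125 Hz minus its
    mid-frequency value, both in decibels."""
    return g_125_db - g_mid_db
```

For example, a 16,100 m³ hall with 1,610 m² of total absorption gives RT = 1.61 s, a value in the range typical of concert halls.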
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased-array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction, and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). Sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
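A minimal sketch of the fixed near-field focusing baseline mentioned above, assuming a single frequency and a free-field spherical-wave manifold (the adaptive MVDR weights would instead be computed from measured data; geometry and frequency below are illustrative):

```python
import cmath
import math

def focus_weights(mic_pos, focus_pt, freq, c=343.0):
    """Fixed near-field focusing weights from the spherical-wave
    (point-source) array manifold: each weight matches the phase and
    1/r amplitude decay expected from a source at the focus point."""
    k = 2.0 * math.pi * freq / c
    w = []
    for m in mic_pos:
        r = math.dist(m, focus_pt)
        w.append(r * cmath.exp(-1j * k * r))  # undo 1/r decay and phase
    return [wi / len(w) for wi in w]

def beam_output(weights, pressures):
    """Beamformer output: the weighted sum w^H p of mic pressures."""
    return sum(wi.conjugate() * pi for wi, pi in zip(weights, pressures))
```

When the measured pressures actually come from a point source at the focus, the phase compensation aligns all channels and the normalized output is exactly 1.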
Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search
Song, Kai; Liu, Qi; Wang, Qi
2011-01-01
Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
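The time delay estimation (TDE) step used by the hearing robots can be sketched as a brute-force cross-correlation search, followed by a far-field delay-to-angle conversion (a minimal illustration, not the paper's implementation; practical systems add generalized cross-correlation weighting and sub-sample interpolation):

```python
import math

def tde_cross_correlation(x, y):
    """Estimate the delay (in samples) of y relative to x as the lag
    that maximizes their cross-correlation. x and y are assumed to be
    equally long sample sequences."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        s = sum(x[i] * y[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

def delay_to_angle(delay_samples, fs_hz, mic_spacing_m, c=343.0):
    """Far-field arrival angle (degrees) implied by the delay between
    two microphones a known distance apart."""
    s = c * (delay_samples / fs_hz) / mic_spacing_m
    return math.degrees(math.asin(max(-1.0, min(1.0, s))))
```

For an impulse arriving two samples later at the second microphone, the estimator returns a lag of 2, and a zero delay maps to a source straight ahead.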
The Coast Artillery Journal. Volume 65, Number 4, October 1926
1926-10-01
sound. a. Sound location of airplanes by binaural observation in all antiaircraft regiments. b. Sound ranging on report of enemy guns, together with...Direction finding by binaural observation. [Subparagraphs 30 a and 30 c (1).] This applies to continuous sounds such as propellor noises. b. Point...impacts. 32. The so-called binaural sense is our means of sensing the direction of a sound source. When we hear a sound we judge the approximate
Object localization using a biosonar beam: how opening your mouth improves localization.
Arditi, G; Weiss, A J; Yovel, Y
2015-08-01
Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.
Object localization using a biosonar beam: how opening your mouth improves localization
Arditi, G.; Weiss, A. J.; Yovel, Y.
2015-01-01
Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions. PMID:26361552
NASA Astrophysics Data System (ADS)
Li, Xuebao; Cui, Xiang; Lu, Tiebing; Wang, Donglai
2017-10-01
The directivity and lateral profile of corona-generated audible noise (AN) from a single corona source are measured through experiments carried out in a semi-anechoic laboratory. The experimental results show that the waveform of corona-generated AN consists of a series of random sound pressure pulses whose amplitudes decrease as the measurement distance increases. A single corona source can be regarded as a non-directional AN source, and the A-weighted SPL (sound pressure level) decreases by 6 dB(A) for each doubling of the measurement distance. Qualitative explanations for treating the single corona source as a point source are then given on the basis of Ingard's theory of sound generation in corona discharge. Furthermore, we take the ground reflection and the air attenuation into consideration to reconstruct the propagation features of AN from the single corona source. The calculated results agree well with the measurements, which validates the propagation model. Finally, the influence of the ground reflection on the SPL is presented in the paper.
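The 6 dB(A)-per-doubling behaviour reported above is exactly what free-field spherical spreading from a point source predicts. A one-line sketch (spreading loss only; the paper's full propagation model additionally includes ground reflection and air absorption, which this omits):

```python
import math

def spl_at_distance(spl_ref_db, r_ref_m, r_m):
    """SPL of a point source at distance r_m, given a reference level
    measured at r_ref_m, assuming free-field spherical spreading:
    each doubling of distance subtracts 20*log10(2) ~ 6.02 dB."""
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)
```

For example, a level of 60 dB(A) at 1 m falls to about 54 dB(A) at 2 m, matching the measured falloff of the corona source.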
Statistics of natural reverberation enable perceptual separation of sound and space
Traer, James; McDermott, Josh H.
2016-01-01
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
Statistics of natural reverberation enable perceptual separation of sound and space.
Traer, James; McDermott, Josh H
2016-11-29
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
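The frequency-dependent exponential decay that these impulse-response measurements exhibit is conventionally quantified by Schroeder backward integration and a reverberation-time fit. The sketch below uses the standard T20 convention (a textbook method, not this paper's analysis pipeline):

```python
import math

def schroeder_decay_db(ir):
    """Schroeder backward integration of a (band-filtered) impulse
    response, returning the energy-decay curve in dB relative to the
    total energy. Summing from the tail avoids cancellation errors."""
    energy = [s * s for s in ir]
    tail = [0.0] * (len(energy) + 1)
    for i in range(len(energy) - 1, -1, -1):
        tail[i] = tail[i + 1] + energy[i]
    total = tail[0]
    return [10.0 * math.log10(t / total) for t in tail[:-1]]

def rt60_from_edc(edc, fs, lo_db=-5.0, hi_db=-25.0):
    """Estimate the reverberation time from the -5 to -25 dB span of
    the decay curve, extrapolated to 60 dB (the T20 convention)."""
    t_lo = next(i for i, v in enumerate(edc) if v <= lo_db) / fs
    t_hi = next(i for i, v in enumerate(edc) if v <= hi_db) / fs
    return 3.0 * (t_hi - t_lo)
```

Applied per octave band, such estimates would reproduce the pattern the authors report: mid frequencies decaying slowest, highs and lows faster.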
The use of an active controlled enclosure to attenuate sound radiation from a heavy radiator
NASA Astrophysics Data System (ADS)
Sun, Yao; Yang, Tiejun; Zhu, Minggang; Pan, Jie
2017-03-01
Active structural acoustical control usually experiences difficulty in the control of heavy sources, or sources where direct application of control forces is not practical. To overcome this difficulty, an active controlled enclosure, which forms a cavity with both flexible and open boundaries, is employed. This configuration permits indirect implementation of active control, in which the control inputs can be applied to subsidiary structures other than the sources. To determine the control effectiveness of the configuration, the vibro-acoustic behavior of the system, which consists of a top plate with an opening, a sound cavity, and a source panel, is investigated in this paper. A complete mathematical model of the system is formulated using modified Fourier series formulations, and the governing equations are solved using the Rayleigh-Ritz method. The coupling mechanisms of a partly opened cavity and a plate are analysed in terms of modal responses and directivity patterns. Furthermore, to attenuate the sound power radiated from both the top panel and the opening, two strategies are studied: minimizing the total radiated power and cancelling the volume velocity. Moreover, three control configurations are compared: using a point force on the control panel (structural control), using a sound source in the cavity (acoustical control), and applying hybrid structural-acoustical control. In addition, the effects of the boundary condition of the control panel on the sound radiation and control performance are discussed.
NASA Astrophysics Data System (ADS)
Rosenbaum, Joyce E.
2011-12-01
Commercial air traffic is anticipated to increase rapidly in the coming years. The impact of aviation noise on communities surrounding airports is, therefore, a growing concern. Accurate prediction of noise can help to mitigate the impact on communities and foster smoother integration of aerospace engineering advances. The problem of accurate sound level prediction requires careful inclusion of all mechanisms that affect propagation, in addition to correct source characterization. Terrain, ground type, meteorological effects, and source directivity can have a substantial influence on the noise level. Because they are difficult to model, these effects are often included only by rough approximation. This dissertation presents a model designed for sound propagation over uneven terrain, with mixed ground type and realistic meteorological conditions. The model is a hybrid of two numerical techniques: the parabolic equation (PE) and fast field program (FFP) methods, which allow for physics-based inclusion of propagation effects and ensure the low frequency content, a factor in community impact, is predicted accurately. Extension of the hybrid model to a pseudo-three-dimensional representation allows it to produce aviation noise contour maps in the standard form. In order for the model to correctly characterize aviation noise sources, a method of representing arbitrary source directivity patterns was developed for the unique form of the parabolic equation starting field. With this advancement, the model can represent broadband, directional moving sound sources, traveling along user-specified paths. This work was prepared for possible use in the research version of the sound propagation module in the Federal Aviation Administration's new standard predictive tool.
A unified approach for the spatial enhancement of sound
NASA Astrophysics Data System (ADS)
Choi, Joung-Woo; Jang, Ji-Ho; Kim, Yang-Hann
2005-09-01
This paper aims to control the sound field spatially, so that the desired or target acoustic variable is enhanced within a zone where a listener is located. This is somewhat analogous to having manipulators that can draw sounds to any place. It also means that one can see the controlled shape of the sound field in frequency or in real time. The former assures practical applicability, for example, listening-zone control for music. The latter provides a means of analyzing the sound field. With these considerations, a unified approach is proposed that can enhance selected acoustic variables using multiple sources. Three kinds of acoustic variables related to the magnitude and direction of the sound field are formulated and enhanced. The first, concerning the spatial control of acoustic potential energy, enables one to create a zone of loud sound over an area. Alternatively, one can control the directional characteristics of the sound field by controlling directional energy density, or enhance the magnitude and direction of sound at the same time by controlling acoustic intensity. Through various examples, it is shown that these acoustic variables can be controlled successfully by the proposed approach.
Investigation of the sound generation mechanisms for in-duct orifice plates.
Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning
2017-08-01
Sound generation due to an orifice plate in a hard-walled flow duct, a configuration commonly used in air distribution systems (ADS) and flow meters, is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate, from which the surface pressure distribution and correlation length are deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plate are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law, and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.
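Velocity scaling laws of this kind are typically recovered from measurements by a straight-line fit in log-log coordinates. A toy sketch, where the flow speeds and amplitude constant are invented to follow the reported U^3.9 trend:

```python
import numpy as np

# Hypothetical measurements: mean-square surface pressure at several duct
# flow speeds U (m/s), synthesized here to follow a U^3.9 scaling law.
U = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
p2 = 1e-4 * U ** 3.9

# The scaling exponent is the slope of log(p2) against log(U).
exponent = np.polyfit(np.log(U), np.log(p2), 1)[0]
print(round(exponent, 2))  # → 3.9
```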
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Active localization of virtual sounds
NASA Technical Reports Server (NTRS)
Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.
1991-01-01
We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
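The distance cues listed above (first-power-law pressure falloff plus extra high-frequency attenuation) can be sketched as a simple gain model. The absorption coefficient below is a made-up illustrative value, not the atmospheric data the display actually used:

```python
import numpy as np

def source_gain_db(r, f, alpha_db_per_m_per_khz=0.005, r_ref=1.0):
    """Level of a point source at distance r relative to r_ref:
    spherical spreading (pressure falls as 1/r, i.e. -6 dB per doubling)
    plus a simple linear-in-frequency air-absorption term.
    alpha is an illustrative coefficient, not ISO 9613 data."""
    spreading = -20.0 * np.log10(r / r_ref)
    absorption = -alpha_db_per_m_per_khz * (f / 1000.0) * (r - r_ref)
    return spreading + absorption

# Doubling the distance costs about 6 dB of spreading at any frequency...
print(round(source_gain_db(2.0, 100.0), 2))  # → -6.02
# ...and high frequencies lose extra level to atmospheric absorption.
hi = source_gain_db(100.0, 8000.0)
lo = source_gain_db(100.0, 100.0)
print(hi < lo)  # → True: the 8 kHz component arrives relatively attenuated
```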
The Encoding of Sound Source Elevation in the Human Auditory Cortex.
Trapeau, Régis; Schönwiesner, Marc
2018-03-28
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. 
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors 0270-6474/18/383252-13$15.00/0.
Rozhkova, G I; Polishcuk, N A
1976-01-01
Previously it has been shown that some abdominal giant neurones of the cricket have constant preferred directions of sound stimulation relative not to the cerci (the organs bearing the sound receptors) but to the insect body (fig. 1) [1]. It is now found that this independence of the giant neurones' directional sensitivity from cercal position disappears after cutting all structures connecting the cerci to the body, except the cercal nerves (fig. 2). Therefore, the constancy of the directional sensitivity of the giant neurones is provided by proprioceptive signals about cercal position.
Directional Acoustic Wave Manipulation by a Porpoise via Multiphase Forehead Structure
NASA Astrophysics Data System (ADS)
Zhang, Yu; Song, Zhongchang; Wang, Xianyan; Cao, Wenwu; Au, Whitlow W. L.
2017-12-01
Porpoises are small toothed whales that can produce directional acoustic waves to detect and track prey with high resolution and a wide field of view. Their sound-source sizes are rather small in comparison with the wavelength, so beam control should be difficult according to textbook sonar theories. Here, we demonstrate that the multiphase material structure in a porpoise's forehead is the key to manipulating the directional acoustic field. Computed tomography (CT) derives the multiphase (bone-air-tissue) complex, tissue experiments obtain the density and sound-velocity multiphase gradient distributions, and acoustic fields and beam formation are numerically simulated. The results suggest that the control of wave propagation and sound-beam formation is realized by the cooperation of all the forehead's tissues and structures. The melon size significantly impacts the side lobes of the beam and slightly influences the main beams, while the orientation of the vestibular sac mainly adjusts the main beams. By compressing the forehead complex, the sound beam can be expanded for near view. The porpoise's biosonar allows effective wave manipulation for its omnidirectional sound source, which can help the future development of miniaturized biomimetic projectors in underwater sonar, medical ultrasonography, and other ultrasonic imaging applications.
Compression of auditory space during forward self-motion.
Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti
2012-01-01
Spatial inputs from the auditory periphery change with movements of the head or whole body relative to the sound source. Nevertheless, humans perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Participants were passively transported forward or backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds actually located in the traveling direction were perceived as being biased towards the null point. These results suggest a distortion of auditory space in the direction of movement during forward self-motion.
The underlying mechanism might involve anticipatory spatial shifts in auditory receptive-field locations driven by afferent signals from the vestibular system.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
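As a frequency-domain caricature of the equivalent-source idea (the paper itself works in the time domain with an iterative solver), one can recover monopole equivalent-source strengths from array pressures by least squares and then evaluate each source's field separately. The geometry, frequency, and source count below are arbitrary assumptions:

```python
import numpy as np

def greens(r_mic, r_src, k):
    """Free-field monopole Green's functions between sources and microphones."""
    d = np.linalg.norm(r_mic[:, None, :] - r_src[None, :, :], axis=2)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

rng = np.random.default_rng(1)
k = 2 * np.pi * 1000.0 / 343.0                 # wavenumber at 1 kHz in air
r_src = rng.uniform(-0.3, 0.3, (8, 3))         # equivalent sources near the origin
r_mic = rng.uniform(0.8, 1.5, (32, 3))         # measurement microphone positions
q_true = rng.standard_normal(8) + 1j * rng.standard_normal(8)

G = greens(r_mic, r_src, k)
p = G @ q_true                                  # simulated mixed measured pressures

# Recover the equivalent source strengths by least squares; the field radiated
# by any single source alone is then G[:, i] * q_est[i].
q_est, *_ = np.linalg.lstsq(G, p, rcond=None)
print(np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true) < 1e-6)  # → True
```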
Realization of an omnidirectional source of sound using parametric loudspeakers.
Sayin, Umut; Artís, Pere; Guasch, Oriol
2013-09-01
Parametric loudspeakers are often used in beamforming applications where high directivity is required. In this paper, however, it is proposed to use such devices to build an omnidirectional source of sound. An initial prototype, the omnidirectional parametric loudspeaker (OPL), consisting of a sphere with hundreds of ultrasonic transducers placed on it, has been constructed. The OPL emits audible sound thanks to the parametric acoustic array phenomenon, and the close proximity and large number of transducers result in the generation of a highly omnidirectional sound field. Comparisons with conventional dodecahedron loudspeakers have been made in terms of directivity, frequency response, and applications such as the generation of diffuse acoustic fields in reverberation chambers. The OPL prototype performed better than the conventional loudspeaker, especially for frequencies above 500 Hz, its main drawback being the difficulty of generating high pressure levels at low frequencies.
2014-01-01
Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094
NASA Technical Reports Server (NTRS)
Wilson, L. N.
1970-01-01
The mathematical bases for the direct measurement of sound source intensities in turbulent jets using the crossed-beam technique are discussed in detail. It is found that the problems associated with such measurements lie in three main areas: (1) measurement of the correct flow covariance, (2) accounting for retarded time effects in the measurements, and (3) transformation of measurements to a moving frame of reference. The determination of the particular conditions under which these problems can be circumvented is the main goal of the study.
NASA Technical Reports Server (NTRS)
Theobald, M. A.
1978-01-01
A single source location used for helicopter model studies was utilized in a study to determine the distances and directions upstream of the model at which accurate measurements of the direct acoustic field could be obtained. The method used was to measure the decrease of sound pressure level with distance from a noise source and thereby determine the hall radius as a function of frequency and direction. Test arrangements and procedures are described. Graphs show the normalized sound pressure level versus distance curves for the glass fiber floor treatment and for the foam floor treatment.
Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H
2015-09-01
To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root-mean-square (RMS) error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
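Root-mean-square error, the localization metric used above, is straightforward to compute. The 13-loudspeaker, 180-degree arc matches the setup, but the listener responses below are invented for illustration:

```python
import numpy as np

def rms_error_deg(target, response):
    """Root-mean-square localization error in degrees."""
    d = np.asarray(response, float) - np.asarray(target, float)
    return float(np.sqrt(np.mean(d ** 2)))

# 13 loudspeakers spanning a 180-degree frontal arc (15-degree spacing),
# with hypothetical listener responses.
targets = np.arange(-90, 91, 15)
responses = targets + np.array([0, 5, -5, 0, 10, 0, -10, 0, 5, 0, -5, 0, 0])
print(round(rms_error_deg(targets, responses), 1))  # → 4.8
```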
Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D
2014-07-01
Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position, a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to that of a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth-order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
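The 2N + 1 phase-mode decomposition underlying this analysis can be illustrated numerically: sampling a first-order (N = 1) pattern around the circle and taking an FFT over the angle yields exactly three circular-harmonic modes. The pattern coefficients here are arbitrary:

```python
import numpy as np

# An Nth-order source pattern is a combination of 2N+1 phase modes e^{i n phi}.
# Sample a first-order (N = 1) pattern and recover its mode content with an
# FFT over the angle.
N = 1
phi = np.linspace(0, 2 * np.pi, 64, endpoint=False)
pattern = 0.25 + 0.75 * np.cos(phi)          # arbitrary first-order pattern

c = np.fft.fft(pattern) / phi.size            # circular harmonic coefficients
active = np.flatnonzero(np.abs(c) > 1e-12)
modes = sorted((i + 32) % 64 - 32 for i in active)
print(modes, len(modes) == 2 * N + 1)         # → [-1, 0, 1] True
```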
Directionality of nose-emitted echolocation calls from bats without a nose leaf (Plecotus auritus).
Jakobsen, Lasse; Hallam, John; Moss, Cynthia F; Hedenström, Anders
2018-02-13
All echolocating bats and whales measured to date emit a directional bio-sonar beam that affords them a number of advantages over an omnidirectional beam, i.e. reduced clutter, increased source level and inherent directional information. In this study, we investigated the importance of directional sound emission for navigation through echolocation by measuring the sonar beam of brown long-eared bats, Plecotus auritus. P. auritus emits sound through the nostrils but has no external appendages to readily facilitate a directional sound emission as found in most nose emitters. The study shows that P. auritus, despite lacking an external focusing apparatus, emits a directional echolocation beam (directivity index = 13 dB) and that the beam is more directional vertically (-6 dB angle at 22 deg) than horizontally (-6 dB angle at 35 deg). Using a simple numerical model, we found that the recorded emission pattern is achievable if P. auritus emits sound through the nostrils as well as the mouth. The study thus supports the hypothesis that a directional echolocation beam is important for perception through echolocation, and we propose that animals with similarly non-directional emitter characteristics may facilitate a directional sound emission by emitting sound through both the nostrils and the mouth. © 2018. Published by The Company of Biologists Ltd.
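The directivity index quoted above is the on-axis intensity relative to the mean intensity over all directions. A numerical sketch on a hypothetical axisymmetric Gaussian beam (not the measured bat beam; the -6 dB half-angle is chosen loosely in the range reported):

```python
import numpy as np

def directivity_index_db(pattern, theta, phi):
    """DI = 10*log10(4*pi * peak intensity / total radiated intensity),
    evaluated on a (theta, phi) grid with sin(theta) solid-angle weights."""
    w = np.sin(theta)[:, None] * np.ones_like(phi)[None, :]
    dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
    total = np.sum(pattern ** 2 * w) * dtheta * dphi
    return 10.0 * np.log10(4.0 * np.pi * pattern.max() ** 2 / total)

theta = np.linspace(1e-4, np.pi, 400)   # polar angle from the beam axis
phi = np.linspace(0, 2 * np.pi, 400, endpoint=False)
# Axisymmetric Gaussian-like amplitude beam with a -6 dB half-angle of 25 deg.
half_angle = np.deg2rad(25.0)
beam = np.exp(-np.log(2) * (theta[:, None] / half_angle) ** 2) \
       * np.ones_like(phi)[None, :]
di = directivity_index_db(beam, theta, phi)
print(round(di, 1))   # in the mid-teens of dB for this beam width
```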
Auditory performance in an open sound field
NASA Astrophysics Data System (ADS)
Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy
2003-04-01
Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as the type of sound, distance to the sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located at long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine listeners' abilities to detect, recognize, localize, and estimate distances to sound sources located 25 to 800 m from the listening position. Data were also collected on meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.
Sound-field reproduction systems using fixed-directivity loudspeakers.
Poletti, M; Fazi, F M; Nelson, P A
2010-06-01
Sound reproduction systems using open arrays of loudspeakers in rooms suffer from degradations due to room reflections. These reflections can be reduced using pre-compensation of the loudspeaker signals, but this requires calibration of the array in the room and is processor-intensive. This paper examines 3D sound reproduction systems using spherical arrays of fixed-directivity loudspeakers, which reduce the sound field radiated outside the array. A generalized form of the simple source formulation and a mode-matching solution are derived for the required loudspeaker weights. The exterior field is derived, along with expressions for the exterior power and the direct-to-reverberant ratio. The theoretical results and simulations confirm that minimum interference occurs for loudspeakers with hyper-cardioid polar responses.
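The hyper-cardioid result has a simple closed-form check: for a first-order pattern a + (1 - a)cos(theta), the 3D directivity factor is Q(a) = 1 / (a^2 + (1 - a)^2 / 3), and a quick scan confirms it peaks at a = 0.25 (the hyper-cardioid), giving Q = 4 (DI of about 6 dB). This is a textbook identity, not a derivation from the paper:

```python
import numpy as np

# First-order polar response p(theta) = a + (1 - a) * cos(theta).
# Scan the mixing coefficient a to find the pattern with maximum
# 3D directivity factor Q(a) = 1 / (a^2 + (1 - a)^2 / 3).
a = np.linspace(0.0, 1.0, 10001)
Q = 1.0 / (a ** 2 + (1.0 - a) ** 2 / 3.0)
best = a[np.argmax(Q)]
print(round(best, 2), round(Q.max(), 2))  # → 0.25 4.0
```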
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, together with applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activity inside the brain auditory structure that is correlated with the tinnitus pathology. Specifically, an acoustic-modeling-based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate the original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments with non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated with the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
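A common building block of such localization schemes is the time-difference-of-arrival (TDOA) estimate between microphone pairs (the dissertation's own method is acoustic-modeling based; this is a generic sketch). A minimal two-microphone example with a synthetic delayed signal, all values illustrative:

```python
import numpy as np

# Estimate the TDOA between two microphones via cross-correlation.
fs = 48000
rng = np.random.default_rng(2)
src = rng.standard_normal(4800)              # 0.1 s of broadband source signal
delay = 37                                   # ground-truth delay, in samples
mic1 = src
mic2 = np.concatenate([np.zeros(delay), src])[: src.size]

xc = np.correlate(mic2, mic1, mode="full")   # full cross-correlation
est = int(np.argmax(xc)) - (src.size - 1)    # lag of the peak
print(est)  # → 37
```

With three or more microphones, the pairwise delays constrain the source position in 3D via hyperbolic intersection.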
Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2
NASA Technical Reports Server (NTRS)
Zhang, Weiguo; Raveendra, Ravi
2014-01-01
Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1, together with a comparison of the results with simulated leak sounds. A series of electronically generated structural ultrasonic noise sources was created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall at the source were unknown but were estimated from the closest sensor measurement. The E-FEM method represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct-field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.
Interior and exterior sound field control using general two-dimensional first-order sources.
Poletti, M A; Abhayapala, T D
2011-01-01
Reproduction of a given sound field interior to a circular loudspeaker array without producing an undesirable exterior sound field is an unsolved problem over a broadband of frequencies. At low frequencies, by implementing the Kirchhoff-Helmholtz integral using a circular discrete array of line-source loudspeakers, a sound field can be recreated within the array and produce no exterior sound field, provided that the loudspeakers have azimuthal polar responses with variable first-order responses which are a combination of a two-dimensional (2D) monopole and a radially oriented 2D dipole. This paper examines the performance of circular discrete arrays of line-source loudspeakers which also include a tangential dipole, providing general variable-directivity responses in azimuth. It is shown that at low frequencies, the tangential dipoles are not required, but that near and above the Nyquist frequency, the tangential dipoles can both improve the interior accuracy and reduce the exterior sound field. The additional dipoles extend the useful range of the array by around an octave.
McClaine, Elizabeth M.; Yin, Tom C. T.
2010-01-01
The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848
Focusing and directional beaming effects of airborne sound through a planar lens with zigzag slits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kun; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang
2015-01-14
Based on the Huygens-Fresnel principle, we design a planar lens to efficiently realize the interconversion between the point-like sound source and Gaussian beam in ambient air. The lens is constructed by a planar plate perforated elaborately with a nonuniform array of zigzag slits, where the slit exits act as subwavelength-sized secondary sources carrying desired sound responses. The experiments operated at audible regime agree well with the theoretical predictions. This compact device could be useful in daily life applications, such as for medical and detection purposes.
A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS
NASA Astrophysics Data System (ADS)
Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto
At some earthquake and disaster sites, rescuers search for people buried under rubble by listening for the sounds they make. Developing a technique to localize sound sources amidst loud noise would therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for imperceptible sound in loud noise environments. Two speakers simultaneously played generator noise and a voice attenuated by 20 dB (1/100 of the power) relative to the generator noise in an outdoor space where cicadas were adding noise of their own. The sound signals were received by a horizontally mounted linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed by array signal processing, and the voice was extracted and played back as an audible sound.
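As a rough sketch of the array processing involved, the following narrowband delay-and-sum beamformer scans a 15-microphone, 1.05 m linear array for the direction of a simulated plane wave. The test frequency and arrival angle are assumptions for illustration, not the experiment's parameters:

```python
import numpy as np

C_AIR = 343.0       # speed of sound (m/s)
N_MICS = 15         # 1.05 m array with 15 microphones, as in the paper
SPACING = 1.05 / (N_MICS - 1)
FREQ = 1000.0       # narrowband test frequency (Hz); illustrative assumption

def steering_vector(angle_rad):
    """Far-field phase delays across the linear array for a given direction."""
    positions = np.arange(N_MICS) * SPACING
    delays = positions * np.sin(angle_rad) / C_AIR
    return np.exp(-2j * np.pi * FREQ * delays)

def beam_power(received, angle_rad):
    """Delay-and-sum output power with the array steered to angle_rad."""
    return np.abs(np.vdot(steering_vector(angle_rad), received)) ** 2

# Simulate a plane wave arriving from 20 degrees and scan candidate angles:
# the power peak indicates the estimated direction of arrival.
true_angle = np.deg2rad(20.0)
received = steering_vector(true_angle)
scan = np.deg2rad(np.linspace(-90.0, 90.0, 181))
best = scan[np.argmax([beam_power(received, a) for a in scan])]
print(round(float(np.rad2deg(best))))  # 20
```

The real system must additionally handle broadband speech and estimate distance, but the steer-and-sum principle is the same.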
Meteorological effects on long-range outdoor sound propagation
NASA Technical Reports Server (NTRS)
Klug, Helmut
1990-01-01
Measurements of sound propagation over distances up to 1000 m were carried out with an impulse sound source offering reproducible, short time signals. Temperature and wind speed at several heights were monitored simultaneously; the meteorological data are used to determine the sound speed gradients according to the Monin-Obukhov similarity theory. The sound speed profile is compared to a corresponding prediction, gained through the measured travel time difference between direct and ground reflected pulse (which depends on the sound speed gradient). Positive sound speed gradients cause bending of the sound rays towards the ground yielding enhanced sound pressure levels. The measured meteorological effects on sound propagation are discussed and illustrated by ray tracing methods.
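The ray bending described above can be quantified with a standard ray-acoustics result: for a linear sound speed profile c(z) = c0 + g·z, ray paths are circular arcs of radius R = c0/g, so a positive gradient g bends rays back toward the ground. The numbers below are illustrative, not from the measurements:

```python
# Radius of curvature of a sound ray under a linear sound speed gradient.
# Values are illustrative: c0 is the ground-level sound speed, g the
# vertical gradient (m/s of sound speed per metre of height).
c0 = 340.0   # sound speed at the ground (m/s)
g = 0.1      # positive gradient (1/s), e.g. downwind or temperature inversion
R = c0 / g   # ray radius of curvature (m)
print(round(R))  # 3400
```

A shallower gradient gives a larger radius, i.e. gentler refraction, which is why level enhancement is strongest under pronounced inversions or downwind conditions.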
Reduced order modeling of head related transfer functions for virtual acoustic displays
NASA Astrophysics Data System (ADS)
Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley
2003-04-01
The purpose of this work is to improve computational efficiency in virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from -90 deg to +90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
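A minimal sketch of the Kung-style SVD model reduction mentioned above, applied to a toy scalar impulse response rather than measured HRIRs (the one-state test system and all numbers are assumptions for illustration):

```python
import numpy as np

def era_reduce(h, order):
    """Reduced-order state-space model (A, B, C) from the Markov parameters
    h[0], h[1], ... of a discrete impulse response, via Kung's SVD-based
    method: factor a Hankel matrix and keep the leading singular values."""
    n = len(h) // 2
    H0 = np.array([[h[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    sr = np.sqrt(s)
    A = (U.T @ H1 @ Vt.T) / np.outer(sr, sr)   # shifted-Hankel realization
    B = (sr[:, None] * Vt)[:, :1]              # first input column
    C = (U * sr)[:1, :]                        # first output row
    return A, B, C

# A one-state test response h[k] = 0.5**k is recovered exactly by an
# order-1 model; measured HRIRs would of course need higher orders.
h = [0.5 ** k for k in range(8)]
A, B, C = era_reduce(h, order=1)
h_hat = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(8)]
print(np.allclose(h, h_hat))  # True
```

The model order traded off against localization error in the study corresponds to `order` here: discarding small singular values shortens the state vector while approximately preserving the impulse response.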
Direct Measurement of the Speed of Sound Using a Microphone and a Speaker
ERIC Educational Resources Information Center
Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.
2014-01-01
We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free open source digital audio editor and recording computer software application allows determination of the time-of-flight of the wave for different distances, from which the speed of sound is…
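The data reduction behind such an experiment is a straight line through distance versus time-of-flight pairs, whose slope is the speed of sound. The sample values below are illustrative, not the paper's measurements:

```python
import numpy as np

# Illustrative speaker-microphone data: distances (m) and measured
# times-of-flight (s) with a fixed instrumental offset. A least-squares
# line distance = speed * time + b recovers the speed as the slope,
# insensitive to the constant offset.
distances = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # metres
times = distances / 343.0 + 1e-4                  # seconds (synthetic)
speed, offset = np.polyfit(times, distances, 1)
print(round(speed, 1))  # 343.0
```

Fitting the slope rather than dividing single distance/time pairs is what makes the method robust to trigger latency in the sound card.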
Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David
2013-01-01
The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues, characteristic to reverberant speech. This stimulus, named amplitude modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
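The amplitude modulated binaural beat described above can be sketched as a stereo signal whose carriers differ slightly between the ears (so the interaural phase drifts through a full cycle each beat period) while a shared envelope gates which phase portion is audible. Carrier, beat, and modulation rates below are illustrative assumptions, not the study's stimulus parameters:

```python
import numpy as np

FS = 44100          # sample rate (Hz)
F_CARRIER = 500.0   # carrier frequency (Hz); illustrative
F_BEAT = 4.0        # interaural frequency offset (Hz) -> cycling binaural phase
F_MOD = 4.0         # amplitude-modulation rate (Hz)

t = np.arange(int(FS * 1.0)) / FS
envelope = 0.5 * (1.0 - np.cos(2 * np.pi * F_MOD * t))  # raised-cosine AM
left = envelope * np.sin(2 * np.pi * F_CARRIER * t)
right = envelope * np.sin(2 * np.pi * (F_CARRIER + F_BEAT) * t)
stereo = np.stack([left, right], axis=1)
print(stereo.shape)  # (44100, 2)
```

Locking `F_BEAT` to `F_MOD`, as here, fixes the phase relation between the envelope and the drifting interaural cue; shifting that relation is the parametric manipulation the stimulus permits.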
An investigation of the sound field above the audience in large lecture halls with a scale model.
Kahn, D W; Tichy, J
1986-09-01
Measurements of steady-state sound pressure levels above the audience in large lecture halls show that the classical equation for predicting the sound pressure level is not accurate. The direct field above the seats was measured on a 1:10 scale model and was found to be dependent on the incidence angle and direction of sound propagation across the audience. The reverberant field above the seats in the model was calculated by subtracting the direct field from the measured total field and was found to be dependent on the magnitude and particularly on the placement of absorption. The decrease of sound pressure level versus distance in the total field depends on the angle (controlled by absorption placement) at which the strong reflections are incident upon the audience area. Sound pressure level decreases at a fairly constant rate with distance from the sound source in both the direct and reverberant field, and the decrease rate depends strongly on the absorption placement. The lowest rate of decay occurs when the side walls are absorptive, and both the ceiling and rear wall are reflective. These consequences are discussed with respect to prediction of speech intelligibility.
Perception of Animacy from the Motion of a Single Sound Object.
Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel
2015-02-01
Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.
Directional Reflective Surface Formed via Gradient-Impeding Acoustic Meta-Surfaces
Song, Kyungjun; Kim, Jedo; Hur, Shin; Kwak, Jun-Hyuk; Lee, Seong-Hyun; Kim, Taesung
2016-01-01
Artificially designed acoustic meta-surfaces have the ability to manipulate sound energy to an extraordinary extent. Here, we report on a new type of directional reflective surface consisting of an array of sub-wavelength Helmholtz resonators with varying internal coiled path lengths, which induce a reflection phase gradient along a planar acoustic meta-surface. The acoustically reshaped reflective surface created by the gradient-impeding meta-surface yields a distinct focal line similar to a parabolic cylinder antenna, and is used for directive sound beamforming. Focused beam steering can be also obtained by repositioning the source (or receiver) off axis, i.e., displaced from the focal line. Besides flat reflective surfaces, complex surfaces such as convex or conformal shapes may be used for sound beamforming, thus facilitating easy application in sound reinforcement systems. Therefore, directional reflective surfaces have promising applications in fields such as acoustic imaging, sonic weaponry, and underwater communication. PMID:27562634
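The beam redirection produced by a reflection phase gradient follows the generalized law of reflection, sin(θr) = sin(θi) + (1/k)·dφ/dx with k = 2π/λ. A quick numeric check with illustrative values (not the fabricated meta-surface's design parameters):

```python
import numpy as np

# Generalized reflection from a phase-gradient surface: a linear phase
# gradient dphi/dx steers a normally incident beam off the specular angle.
wavelength = 0.1                  # metres (roughly 3.4 kHz in air); illustrative
k = 2 * np.pi / wavelength        # free-space wavenumber (rad/m)
dphi_dx = 0.5 * k                 # designed surface phase gradient (rad/m)
theta_i = 0.0                     # normal incidence
theta_r = np.arcsin(np.sin(theta_i) + dphi_dx / k)
print(round(float(np.degrees(theta_r)), 1))  # 30.0
```

In the paper the gradient is realized by varying the coiled path length of each Helmholtz resonator, and a spatially varying (parabolic) phase profile focuses rather than merely steers the reflected field.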
Litovsky, Ruth Y.; Godar, Shelly P.
2010-01-01
The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369
Hearing in three dimensions: Sound localization
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Kistler, Doris J.
1990-01-01
The ability to localize a sound source in space is a fundamental component of the three-dimensional character of reproduced audio. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
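The interaural time difference cue mentioned above is classically estimated by cross-correlating the two ear signals. The following sketch recovers a known sample lag; the sample rate, lag, and noise signal are illustrative assumptions:

```python
import numpy as np

FS = 48000                         # sample rate (Hz); illustrative
rng = np.random.default_rng(0)
signal = rng.standard_normal(4096)

# Impose a 10-sample interaural lag (about 208 us at 48 kHz, well within
# the physiological ITD range) and recover it from the correlation peak.
lag_true = 10
left = signal
right = np.roll(signal, lag_true)
corr = np.correlate(right, left, mode="full")
lag_est = int(np.argmax(corr) - (len(left) - 1))
print(lag_est)  # 10
```

Real binaural models refine this with narrowband filtering and sub-sample interpolation, but the peak-picking idea is the same.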
Characterisation of structure-borne sound source using reception plate method.
Putra, A; Saari, N F; Bakri, H; Ramlan, R; Dan, R M
2013-01-01
A laboratory-based experiment procedure of reception plate method for structure-borne sound source characterisation is reported in this paper. The method uses the assumption that the input power from the source installed on the plate is equal to the power dissipated by the plate. In this experiment, rectangular plates having high and low mobility relative to that of the source were used as the reception plates and a small electric fan motor was acting as the structure-borne source. The data representing the source characteristics, namely, the free velocity and the source mobility, were obtained and compared with those from direct measurement. Assumptions and constraints employing this method are discussed.
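The power balance underlying the reception plate method is that the structure-borne input power equals the power dissipated by the plate, P = ω·η·M·⟨v²⟩, where η is the plate loss factor, M its mass, and ⟨v²⟩ the spatially averaged mean-square velocity. A numeric sketch with illustrative values (not the paper's measurements):

```python
import numpy as np

# Reception-plate power balance: input power = dissipated power.
freq = 100.0      # band centre frequency (Hz); illustrative
eta = 0.05        # measured loss factor of the reception plate
mass = 12.0       # plate mass (kg)
v2_mean = 1e-6    # spatially averaged mean-square velocity (m^2/s^2)
power = 2 * np.pi * freq * eta * mass * v2_mean   # watts
print(f"{power:.3e}")  # 3.770e-04
```

Repeating the measurement on high- and low-mobility plates, as in the paper, is what lets the free velocity and source mobility be separated.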
Subjective scaling of spatial room acoustic parameters influenced by visual environmental cues
Valente, Daniel L.; Braasch, Jonas
2010-01-01
Although there have been numerous studies investigating subjective spatial impression in rooms, only a few of those studies have addressed the influence of visual cues on the judgment of auditory measures. In the psychophysical study presented here, video footage of five solo music/speech performers was shown for four different listening positions within a general-purpose space. The videos were presented in addition to the acoustic signals, which were auralized using binaural room impulse responses (BRIR) that were recorded in the same general-purpose space. The participants were asked to adjust the direct-to-reverberant energy ratio (D/R ratio) of the BRIR according to their expectation considering the visual cues. They were also directed to rate the apparent source width (ASW) and listener envelopment (LEV) for each condition. Visual cues generated by changing the sound-source position in the multi-purpose space, as well as the makeup of the sound stimuli, affected the judgment of spatial impression. Participants also scaled the direct-to-reverberant energy ratio with greater direct sound energy than was measured in the acoustical environment. PMID:20968367
Directive sources in acoustic discrete-time domain simulations based on directivity diagrams.
Escolano, José; López, José J; Pueo, Basilio
2007-06-01
Discrete-time domain methods provide a simple and flexible way to solve initial boundary value problems. With regard to the sources in such methods, only monopoles or dipoles can be considered. However, in many problems such as room acoustics, the radiation of realistic sources is directional-dependent and their directivity patterns have a clear influence on the total sound field. In this letter, a method to synthesize the directivity of sources is proposed, especially in cases where the knowledge is only based on discrete values of the directivity diagram. Some examples have been carried out in order to show the behavior and accuracy of the proposed method.
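A generic way to bridge discrete directivity samples and the monopole/dipole terms such simulations can realize is a least-squares fit to circular harmonics. This is a sketch of that general idea under illustrative data, not the letter's specific synthesis procedure:

```python
import numpy as np

def fit_circular_harmonics(angles, samples, max_order):
    """Least-squares fit of a sampled 2D directivity diagram to the
    circular-harmonic basis (1, cos(n*phi), sin(n*phi)), whose terms map
    onto omnidirectional and dipole-like elementary sources."""
    cols = [np.ones_like(angles)]
    for n in range(1, max_order + 1):
        cols.append(np.cos(n * angles))
        cols.append(np.sin(n * angles))
    basis = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, samples, rcond=None)
    return coeffs, basis @ coeffs

# A cardioid sampled at 36 discrete angles is reproduced exactly at order 1.
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
samples = 0.5 + 0.5 * np.cos(angles)
coeffs, fitted = fit_circular_harmonics(angles, samples, max_order=1)
print(np.allclose(fitted, samples))  # True
```

Measured diagrams are not exactly band-limited, so in practice `max_order` trades fit accuracy against the number of elementary sources the time-domain scheme must drive.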
Localizing nearby sound sources in a classroom: Binaural room impulse responses
NASA Astrophysics Data System (ADS)
Shinn-Cunningham, Barbara G.; Kopco, Norbert; Martin, Tara J.
2005-05-01
Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
Graphene-on-paper sound source devices.
Tian, He; Ren, Tian-Ling; Xie, Dan; Wang, Yu-Feng; Zhou, Chang-Jian; Feng, Ting-Ting; Fu, Di; Yang, Yi; Peng, Ping-Gang; Wang, Li-Gang; Liu, Li-Tian
2011-06-28
We demonstrate an interesting phenomenon: graphene can emit sound, which expands its applications into the acoustic field. Graphene-on-paper sound source devices are made by patterning graphene on paper substrates. Three graphene sheet samples with thicknesses of 100, 60, and 20 nm were fabricated. Sound emission from graphene is measured as a function of power, distance, angle, and frequency in the far field. A theoretical model of the air/graphene/paper/PCB-board multilayer structure is established to analyze the sound directivity, frequency response, and efficiency. Measured sound pressure level (SPL) and efficiency are in good agreement with theoretical results. Graphene is found to have a notably flat frequency response over the wide ultrasound range of 20-50 kHz. In addition, thinner graphene sheets produce higher SPL owing to their lower heat capacity per unit area (HCPUA). Infrared thermal images reveal that a thermoacoustic effect is the working principle. We find that sound performance depends mainly on the HCPUA of the conductor and the thermal properties of the substrate. The paper-based graphene sound source devices are highly reliable and flexible, have a simple structure with no mechanical vibration, and offer high performance. They could open wide applications in multimedia, consumer electronics, biological and medical fields, and many other areas.
High-frequency monopole sound source for anechoic chamber qualification
NASA Astrophysics Data System (ADS)
Saussus, Patrick; Cunefare, Kenneth A.
2003-04-01
Anechoic chamber qualification procedures require the use of an omnidirectional monopole sound source. Required characteristics for these monopole sources are explicitly listed in ISO 3745. Building a high-frequency monopole source that meets these characteristics has proved difficult due to the size limitations imposed by small wavelengths at high frequency. A prototype design developed for use in hemianechoic chambers employs telescoping tubes, which act as an inverse horn. This same design can be used in anechoic chambers, with minor adaptations. A series of gradually decreasing brass telescoping tubes is attached to the throat of a well-insulated high-frequency compression driver. Therefore, all of the sound emitted from the driver travels through the horn and exits through an opening of approximately 2.5 mm. Directivity test data show that this design meets all of the requirements set forth by ISO 3745.
A New Mechanism of Sound Generation in Songbirds
NASA Astrophysics Data System (ADS)
Goller, Franz; Larsen, Ole N.
1997-12-01
Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.
Echolocation versus echo suppression in humans
Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz
2013-01-01
Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing to both the direct sound of the vocalization that precedes the echoes and owing to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105
Acoustic investigation of wall jet over a backward-facing step using a microphone phased array
NASA Astrophysics Data System (ADS)
Perschke, Raimund F.; Ramachandran, Rakesh C.; Raman, Ganesh
2015-02-01
The acoustic properties of a wall jet over a hard-walled backward-facing step of aspect ratios 6, 3, 2, and 1.5 are studied using a 24-channel microphone phased array at Mach numbers up to M = 0.6. The Reynolds number based on inflow velocity and step height ranges from Re_h = 3.0 × 10^4 to 7.2 × 10^5. Flow without and with side walls is considered. The experimental setup is open in the wall-normal direction and the expansion ratio is effectively 1. In the case of flow through a duct, symmetry of the flow in the spanwise direction is lost downstream of separation at all but the largest aspect ratio, as revealed by oil-paint flow visualization. Hydrodynamic scattering of turbulence from the trailing edge of the step contributes significantly to the radiated sound. Reflection of acoustic waves from the bottom plate results in a modulation of power spectral densities. Convective mean-flow effects on the apparent source origin were assessed by placing a loudspeaker underneath a perforated flat plate and evaluating the displacement of the beamforming peak with inflow Mach number. Two source mechanisms are found near the step. One is due to interaction of the turbulent wall jet with the convex edge of the step. Free-stream turbulence sound is found to peak downstream of the step. The presence of the side walls increases free-stream sound. Results of the flow visualization are correlated with acoustic source maps. Trailing-edge sound and free-stream turbulence sound can be discriminated using source localization.
Ambient Sound-Based Collaborative Localization of Indeterministic Devices
Kamminga, Jacob; Le, Duc; Havinga, Paul
2016-01-01
Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
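The core geometric step in such time-difference-of-arrival (TDOA) localization can be sketched as a grid search. This is a generic stand-in for the multilateration step, not the CLASS algorithm itself, and the microphone geometry and source position are illustrative:

```python
import numpy as np

C_AIR = 343.0  # speed of sound (m/s)

def locate_by_tdoa(mics, tdoas, grid):
    """Grid-search source localization from TDOAs measured relative to
    microphone 0: pick the candidate point whose predicted delay
    differences best match the observed ones (least squares)."""
    best, best_err = None, np.inf
    for p in grid:
        dists = np.linalg.norm(mics - p, axis=1)
        predicted = (dists[1:] - dists[0]) / C_AIR
        err = np.sum((predicted - tdoas) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# Four microphones at the corners of a 5 m square, sound source at (2, 3).
mics = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
source = np.array([2.0, 3.0])
dists = np.linalg.norm(mics - source, axis=1)
tdoas = (dists[1:] - dists[0]) / C_AIR
xs = np.linspace(0.0, 5.0, 51)
grid = np.array([[x, y] for x in xs for y in xs])
est = locate_by_tdoa(mics, tdoas, grid)
print(est)  # [2. 3.]
```

The CLASS algorithm's contribution lies upstream of this step: splitting indeterministic devices into subsets so that outlier TDOA values caused by uncertain input latencies can be rejected before any such position fit.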
Sound localization in the alligator.
Bierman, Hilary S; Carr, Catherine E
2015-11-01
In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution. Copyright © 2015 Elsevier B.V. All rights reserved.
Personal sound zone reproduction with room reflections
NASA Astrophysics Data System (ADS)
Olik, Marek
Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. 
Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
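The acoustic contrast control (ACC) approach named above has a compact linear-algebra core: choose loudspeaker weights that maximize the ratio of bright-zone to dark-zone energy, which is the dominant eigenvector of a generalized eigenproblem. Below is a minimal sketch with fabricated transfer matrices; the array sizes, regularization, and random "plant" are assumptions for illustration, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_mic = 4, 8  # loudspeakers, microphones per zone (made-up sizes)
# Fabricated complex transfer matrices: rows = zone microphones, cols = sources.
G_bright = rng.standard_normal((n_mic, n_src)) + 1j * rng.standard_normal((n_mic, n_src))
G_dark = rng.standard_normal((n_mic, n_src)) + 1j * rng.standard_normal((n_mic, n_src))

def contrast_db(w, Gb, Gd):
    """Acoustic contrast: mean-square pressure in bright zone over dark zone."""
    eb = np.linalg.norm(Gb @ w) ** 2 / Gb.shape[0]
    ed = np.linalg.norm(Gd @ w) ** 2 / Gd.shape[0]
    return 10.0 * np.log10(eb / ed)

# ACC: maximize w^H Rb w / w^H Rd w -> dominant eigenvector of (Rd + reg I)^-1 Rb.
Rb = G_bright.conj().T @ G_bright
Rd = G_dark.conj().T @ G_dark
reg = 1e-9  # small diagonal loading to keep the solve well conditioned
vals, vecs = np.linalg.eig(np.linalg.solve(Rd + reg * np.eye(n_src), Rb))
w_acc = vecs[:, np.argmax(vals.real)]

w_uniform = np.ones(n_src) / np.sqrt(n_src)  # naive baseline: drive all sources equally
c_acc = contrast_db(w_acc, G_bright, G_dark)
c_uni = contrast_db(w_uniform, G_bright, G_dark)
print(f"ACC contrast {c_acc:.1f} dB vs uniform {c_uni:.1f} dB")
```

By construction the ACC weights achieve the highest possible contrast for the given transfer matrices, so they always match or beat the uniform drive.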
Determination of equivalent sound speed profiles for ray tracing in near-ground sound propagation.
Prospathopoulos, John M; Voutsinas, Spyros G
2007-09-01
The determination of appropriate sound speed profiles for modeling near-ground propagation with ray tracing is investigated using a model capable of performing axisymmetric calculations of the sound field around an isolated source. Eigenrays are traced using an iterative procedure that integrates the trajectory equations for each ray launched from the source in a specific direction. Sound energy losses are calculated by introducing into the equations appropriate coefficients representing the effects of ground and atmospheric absorption and the interaction with atmospheric turbulence. The model is validated against analytical and numerical predictions of other methodologies for simple cases, as well as against measurements for nonrefractive atmospheric environments. A systematic investigation of near-ground propagation in downward- and upward-refracting atmospheres is made using experimental data. Guidelines for suitable simulation of the wind velocity profile are derived by correlating predictions with measurements.
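As a toy illustration of the eigenray trajectory integration (omitting the model's ground, absorption, and turbulence terms), a 2-D ray traced through a linear sound-speed profile can be checked against Snell's invariant cos(theta)/c(z) = const. The profile and step values below are arbitrary, not from the paper.

```python
import numpy as np

c0, g = 340.0, 0.1  # m/s and (m/s)/m: speed increases with height (assumed profile)

def c(z):
    return c0 + g * z

def trace_ray(theta0_deg, z0=1.5, ds=0.1, n_steps=5000):
    """Fixed-step Euler integration of the 2-D ray trajectory equations."""
    theta = np.radians(theta0_deg)
    x, z = 0.0, z0
    for _ in range(n_steps):
        x += ds * np.cos(theta)
        z += ds * np.sin(theta)
        # ray curvature: d(theta)/ds = -(dc/dz) * cos(theta) / c
        theta += ds * (-g * np.cos(theta) / c(z))
    return x, z, theta

x_end, z_end, theta_end = trace_ray(10.0)
snell_start = np.cos(np.radians(10.0)) / c(1.5)
snell_end = np.cos(theta_end) / c(z_end)
```

With c increasing upward the ray bends back toward the ground (theta decreases along the path), which is the downward-refracting case discussed in the abstract.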
NASA Technical Reports Server (NTRS)
Fuller, C. R.; Hansen, C. H.; Snyder, S. D.
1991-01-01
Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force tends not to modify the panel's total response amplitude significantly but rather to restructure the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.
Method and Apparatus for Characterizing Pressure Sensors using Modulated Light Beam Pressure
NASA Technical Reports Server (NTRS)
Youngquist, Robert C. (Inventor)
2003-01-01
Embodiments of apparatuses and methods are provided that use light sources instead of sound sources for characterizing and calibrating sensors for measuring small pressures to mitigate many of the problems with using sound sources. In one embodiment an apparatus has a light source for directing a beam of light on a sensing surface of a pressure sensor for exerting a force on the sensing surface. The pressure sensor generates an electrical signal indicative of the force exerted on the sensing surface. A modulator modulates the beam of light. A signal processor is electrically coupled to the pressure sensor for receiving the electrical signal.
Reciprocity-based experimental determination of dynamic forces and moments: A feasibility study
NASA Technical Reports Server (NTRS)
Ver, Istvan L.; Howe, Michael S.
1994-01-01
BBN Systems and Technologies has been tasked by the Georgia Tech Research Center to carry out Task Assignment No. 7 for the NASA Langley Research Center to explore the feasibility of 'In-Situ Experimental Evaluation of the Source Strength of Complex Vibration Sources Utilizing Reciprocity.' The task was carried out under NASA Contract No. NAS1-19061. In flight it is not feasible to connect the vibration sources to their mounting points on the fuselage through force gauges to measure dynamic forces and moments directly. However, it is possible to measure the interior sound field or vibration response caused by these structureborne sound sources at many locations and invoke the principle of reciprocity to predict the dynamic forces and moments. The work carried out in the framework of Task 7 was directed at exploring the feasibility of reciprocity-based measurements of vibration forces and moments.
The role of reverberation-related binaural cues in the externalization of speech.
Catic, Jasmina; Santurette, Sébastien; Dau, Torsten
2015-08-01
The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
Vector Acoustics, Vector Sensors, and 3D Underwater Imaging
NASA Astrophysics Data System (ADS)
Lindwall, D.
2007-12-01
Vector acoustic data have two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than with hydrophone data. A vector acoustic sensor measures the particle motions due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom, and sides. Without resorting to the usual methods of seismic imaging, which in this case would be only two-dimensional and rely entirely on a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3D seismic surveys may be achieved with vector sensors using the same logistics as a 2D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
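The "simple trigonometric calculation" can be sketched as follows: with a known source S, a sensor at R that measures the arrival direction d of the scattered wave, and the total travel time t, the scatterer P lies on the ray from R along d at the range solving |S - P| + |P - R| = c*t. The geometry and sound speed below are invented for a round-trip check, not the experiment's values.

```python
import numpy as np

c = 1482.0  # speed of sound in water, m/s (illustrative)

def locate_scatterer(S, R, d, t):
    """Scatterer P = R + r*d on the ellipse |S-P| + |P-R| = c*t.

    Squaring |S - (R + r*d)| = c*t - r gives a linear equation in the
    range r along the measured direction d (unit vector).
    """
    S, R, d = map(np.asarray, (S, R, d))
    u = S - R
    r = (c * c * t * t - u @ u) / (2.0 * (c * t - u @ d))
    return R + r * d

# Round-trip check with a made-up tank-scale geometry (metres):
S = np.array([0.0, 0.0, 0.5])        # source
R = np.array([1.0, 0.2, 0.4])        # vector sensor
P_true = np.array([0.6, 0.9, 0.3])   # hidden scatterer
t = (np.linalg.norm(P_true - S) + np.linalg.norm(P_true - R)) / c
d = (P_true - R) / np.linalg.norm(P_true - R)   # measured arrival direction
P_est = locate_scatterer(S, R, d, t)
```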
Caldwell, Michael S.; Bee, Mark A.
2014-01-01
The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182
Theory of acoustic design of opera house and a design proposal
NASA Astrophysics Data System (ADS)
Ando, Yoichi
2004-05-01
First, the theory of subjective preference for sound fields based on the model of the auditory-brain system is briefly described. It consists of temporal factors and spatial factors associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub). The preferred conditions are related to the minimum value of the effective duration of the running autocorrelation function of the source signals, (τe)min. The spatial criteria are the binaural listening level (LL) and the IACC, which may be extracted from the interaural crosscorrelation function. In an opera house there are two different kinds of sound sources: the vocal source on the stage, with relatively short values of (τe)min, and the orchestra music in the pit, with long values of (τe)min. For these sources, a design proposal is made here.
Direct measurement of the speed of sound using a microphone and a speaker
NASA Astrophysics Data System (ADS)
Gómez-Tejedor, José A.; Castro-Palacio, Juan C.; Monsoriu, Juan A.
2014-05-01
We present a simple and accurate experiment to obtain the speed of sound in air using a conventional speaker and a microphone connected to a computer. A free, open-source digital audio editor and recording application is used to determine the time-of-flight of the wave for different distances, from which the speed of sound is calculated. The result is in very good agreement with the value reported in the literature.
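A sketch of the described analysis, assuming the usual straight-line fit of distance against time of flight: the slope of d = v*t + b is the speed of sound. The "measurements" below are fabricated from the true value plus timing jitter, not data from the paper.

```python
import numpy as np

v_true = 343.0                                  # m/s, nominal value near 20 deg C
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])    # speaker-microphone distances, m
rng = np.random.default_rng(1)
t = d / v_true + rng.normal(0.0, 5e-6, d.size)  # times of flight, s, with 5 us jitter

v_est, intercept = np.polyfit(t, d, 1)          # least-squares line: slope = speed
print(f"estimated speed of sound: {v_est:.1f} m/s")
```

Fitting a line rather than dividing single d/t pairs absorbs any constant latency of the sound card into the intercept, which is why such a simple setup can be accurate.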
NASA Astrophysics Data System (ADS)
Li, Xuebao; Cui, Xiang; Lu, Tiebing; Ma, Wenzuo; Bian, Xingming; Wang, Donglai; Hiziroglu, Huseyin
2016-03-01
Corona-generated audible noise (AN) has become one of the decisive factors in the design of high voltage direct current (HVDC) transmission lines. The AN from transmission lines can be attributed to sound pressure pulses generated by the multiple corona sources formed on the conductors. In this paper, the detailed time-domain characteristics of the sound pressure pulses generated by DC corona discharges formed over the surfaces of stranded conductors are investigated systematically in a laboratory setting using a corona cage structure. The amplitudes of the sound pressure pulses and their time intervals are extracted by observing a direct correlation between corona current pulses and corona-generated sound pressure pulses. Based on the statistical characteristics, a stochastic model is presented for simulating the sound pressure pulses due to DC corona discharges occurring on conductors. The proposed stochastic model is validated by comparing the calculated and measured A-weighted sound pressure level (SPL). The proposed model is then used to analyze the influence of the pulse amplitudes and pulse rate on the SPL. Furthermore, a mathematical relationship is found between the SPL and the conductor diameter, electric field, and radial distance.
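A minimal sketch of a stochastic pulse model in this spirit (not the paper's fitted statistics): pulses arrive as a Poisson process, the SPL follows from the mean-square pressure, and doubling the pulse rate raises the level by about 3 dB. The pulse shape, rate, and amplitude are invented for illustration.

```python
import numpy as np

P_REF = 20e-6  # Pa, reference pressure for SPL

def spl_db(rate_hz, amp_pa, duration=10.0, fs=50_000, seed=0):
    """SPL of a Poisson train of decaying pressure pulses (AC-coupled)."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    p = np.zeros(n)
    pulse = amp_pa * np.exp(-np.arange(100) / 10.0)  # ~2 ms decaying pulse (assumed shape)
    n_pulses = rng.poisson(rate_hz * duration)       # Poisson arrival count
    for i in rng.integers(0, n - 100, n_pulses):
        p[i:i + 100] += pulse
    p -= p.mean()  # a measurement microphone is AC-coupled: remove the static offset
    return 10.0 * np.log10(np.mean(p ** 2) / P_REF ** 2)

spl_lo = spl_db(200.0, 0.01)
spl_hi = spl_db(400.0, 0.01, seed=1)  # doubled pulse rate
print(f"rate doubling raises SPL by {spl_hi - spl_lo:.2f} dB")
```

For shot noise of this kind the fluctuation variance is proportional to the pulse rate, which is the kind of rate-to-SPL relationship the proposed model is used to analyze.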
Acoustic centering of sources measured by surrounding spherical microphone arrays.
Hagai, Ilan Ben; Pollow, Martin; Vorländer, Michael; Rafaely, Boaz
2011-10-01
The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods. © 2011 Acoustical Society of America
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.; Leib, Stewart J.
1999-01-01
An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in a transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.
Spherical harmonic analysis of the sound radiation from omnidirectional loudspeaker arrays
NASA Astrophysics Data System (ADS)
Pasqual, A. M.
2014-09-01
Omnidirectional sound sources are widely used in room acoustics. These devices are made up of loudspeakers mounted on a spherical or polyhedral cabinet, among which the dodecahedral shape prevails. Although such electroacoustic sources have been made readily available to acousticians by many manufacturers, an in-depth investigation of their vibroacoustic behavior has not yet been provided. To fill this gap, this paper presents a theoretical study of the sound radiation from omnidirectional loudspeaker arrays, carried out using a mathematical model based on spherical harmonic analysis. Eight different loudspeaker arrangements on the sphere are considered: the five well-known Platonic solid layouts and three extremal-system layouts. The latter possess useful properties for spherical loudspeaker arrays used as directivity-controlled sound sources, so these layouts are included here to investigate whether they could also be of interest as omnidirectional sources. It is shown through a comparative analysis that the dodecahedral array leads to the lowest error in producing an omnidirectional sound field and to the highest acoustic power, which corroborates the prevalence of this layout. In addition, if a source with fewer than 12 loudspeakers is required, it is shown that tetrahedra or hexahedra can be used instead, whereas the extremal-system layouts are not interesting choices for omnidirectional loudspeaker arrays.
Sugi, Miho; Hagimoto, Yutaka; Nambu, Isao; Gonzalez, Alejandro; Takei, Yoshinori; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2018-01-01
Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact that shortening the stimulus onset asynchrony (SOA) has on this auditory BCI. While a very short SOA might improve its performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight different SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participant behavioral responses to target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing the identification accuracies, so improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400 and 500-ms SOA conditions. Overall, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA. PMID:29535602
Generation of Underwater Sound by a Moving High-Power Laser Source.
1985-08-01
(Scanned report; the abstract is not recoverable. Surviving fragments cover numerical predictions of nearfield directivity and laser beamwidth effects, and a historical note on Bell's photophone, an apparatus for the production of sound by light.)
Blue whales respond to simulated mid-frequency military sonar.
Goldbogen, Jeremy A; Southall, Brandon L; DeRuiter, Stacy L; Calambokidis, John; Friedlaender, Ari S; Hazen, Elliott L; Falcone, Erin A; Schorr, Gregory S; Douglas, Annie; Moretti, David J; Kyburg, Chris; McKenna, Megan F; Tyack, Peter L
2013-08-22
Mid-frequency military (1-10 kHz) sonars have been associated with lethal mass strandings of deep-diving toothed whales, but the effects on endangered baleen whale species are virtually unknown. Here, we used controlled exposure experiments with simulated military sonar and other mid-frequency sounds to measure behavioural responses of tagged blue whales (Balaenoptera musculus) in feeding areas within the Southern California Bight. Despite using source levels orders of magnitude below some operational military systems, our results demonstrate that mid-frequency sound can significantly affect blue whale behaviour, especially during deep feeding modes. When a response occurred, behavioural changes varied widely from cessation of deep feeding to increased swimming speed and directed travel away from the sound source. The variability of these behavioural responses was largely influenced by a complex interaction of behavioural state, the type of mid-frequency sound and received sound level. Sonar-induced disruption of feeding and displacement from high-quality prey patches could have significant and previously undocumented impacts on baleen whale foraging ecology, individual fitness and population health.
Azimuthal sound localization in the European starling (Sturnus vulgaris): I. Physical binaural cues.
Klump, G M; Larsen, O N
1992-02-01
The physical measurements reported here test whether the European starling (Sturnus vulgaris) evaluates the azimuth direction of a sound source with a peripheral auditory system composed of two acoustically coupled pressure-difference receivers (hypothesis 1) or of two decoupled pressure receivers (hypothesis 2). A directional pattern of sound intensity in the free field was measured at the entrance of the auditory meatus using a probe microphone, and at the tympanum using laser vibrometry. The maximum differences in the sound-pressure level measured with the microphone between various speaker positions and the frontal speaker position were 2.4 dB at 1 and 2 kHz, 7.3 dB at 4 kHz, 9.2 dB at 6 kHz, and 10.9 dB at 8 kHz. The directional amplitude pattern measured by laser vibrometry did not differ from that measured with the microphone. Neither did the directional pattern of travel times to the ear. Measurements of the amplitude and phase transfer function of the starling's interaural pathway using a closed sound system were in accord with the results of the free-field measurements. In conclusion, although some sound transmission via the interaural canal occurred, the present experiments support hypothesis 2: the starling's peripheral auditory system is best described as consisting of two functionally decoupled pressure receivers.
Characterization of the MEMS Directional Sound Sensor in the High Frequency (15-20 kHz) Range
2011-12-01
(Scanned thesis; the abstract is not recoverable. Surviving fragments describe a sound source, a Selenium DH200E loudspeaker, with a frequency response that is almost flat from 50 Hz to 20 kHz.)
NASA Astrophysics Data System (ADS)
Bolduc, A.; Gauthier, P.-A.; Berry, A.
2017-12-01
While perceptual evaluation and sound quality testing with a jury are now recognized as essential parts of acoustical product development, they are rarely implemented with spatial sound field reproduction. Instead, monophonic, stereophonic or binaural presentations are used. This paper investigates the feasibility and interest of a method that uses complete vibroacoustic engineering models for auralization based on 2.5D Wave Field Synthesis (WFS). The method is proposed so that spatial characteristics such as directivity patterns and direction of arrival are part of the reproduced sound field while preserving the model's complete formulation, which coherently combines frequency and spatial responses. Modifications to the standard 2.5D WFS operators are proposed for extended primary sources, affecting the reference-line definition and compensating for out-of-plane elementary primary sources. Simulations and experiments reproducing two physically accurate vibroacoustic models of thin plates show that the proposed method allows for effective reproduction in the horizontal plane: spatial- and frequency-domain features are recreated. Application of the method to the sound rendering of a virtual transmission-loss measurement setup shows its potential for use in virtual acoustical prototyping for jury testing.
Modeling the utility of binaural cues for underwater sound localization.
Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo
2014-06-01
The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
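The temporal-difference cue itself is commonly estimated by cross-correlating the two receiver signals. A sketch with an assumed hydrophone spacing and sampling rate follows; it also shows why underwater interaural time differences are small, since the sound speed is roughly 4.4 times that of air.

```python
import numpy as np

fs = 192_000        # Hz, sampling rate (assumed)
c_water = 1500.0    # m/s, vs ~343 m/s in air
spacing = 0.2       # m between the paired receivers (illustrative)

rng = np.random.default_rng(2)
azimuth = np.radians(30.0)
itd_true = spacing * np.sin(azimuth) / c_water  # plane-wave timing cue, ~67 us

sig = rng.standard_normal(fs // 4)     # 0.25 s of broadband source noise
lag = int(round(itd_true * fs))        # delay in samples at the far receiver
left = sig
right = np.roll(sig, lag)              # delayed copy (wrap-around is negligible here)

xcorr = np.correlate(left, right, mode="full")
shift = np.argmax(xcorr) - (len(right) - 1)  # numpy 'full' mode index offset
itd_est = -shift / fs                        # recover the delay of 'right' vs 'left'
```

At this spacing the whole cue spans only about a dozen samples even at 192 kHz, illustrating the fine temporal resolution underwater localization would require.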
NASA Astrophysics Data System (ADS)
Sridhara, Basavapatna Sitaramaiah
In an internal combustion engine, the engine is the noise source and the exhaust pipe is the main transmitter of noise. Mufflers are often used to reduce the engine noise level in the exhaust pipe. To optimize a muffler design, a series of experiments could be conducted using various mufflers installed in the exhaust pipe, measuring the radiated sound pressure for each configuration. However, this is not a very efficient method. A second approach is to develop a scheme involving only a few measurements that can predict the radiated sound pressure at a specified distance from the open end of the exhaust pipe. In this work, the engine exhaust system was modelled as a lumped source-muffler-termination system. An expression for the predicted sound pressure level was derived in terms of the source and termination impedances and the muffler geometry. The pressure source and monopole radiation models were used for the source and the open end of the exhaust pipe, respectively. The four-pole parameters were used to relate the acoustic properties at two different cross sections of the muffler and the pipe. The developed formulation was verified through a series of experiments. Two loudspeakers and a reciprocating-type vacuum pump were used as sound sources during the tests. The source impedance was measured using the direct, two-load and four-load methods. A simple expansion chamber and a side-branch resonator were used as mufflers. Sound pressure level measurements for the prediction scheme were made for several source-muffler and source-straight-pipe combinations. In all cases considered, the correlation between the measured sound pressure levels and those predicted by the developed expressions was good, as was the agreement between predicted and measured values of the insertion loss of the mufflers. Also, an error analysis of the four-load method was done.
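The four-pole parameters mentioned above chain muffler elements by mapping acoustic pressure and volume velocity across each section. A sketch for the simplest element, a single expansion chamber with made-up dimensions, checked against the textbook closed-form transmission loss:

```python
import numpy as np

rho, c = 1.21, 343.0  # air density (kg/m^3) and speed of sound (m/s)

def pipe_four_pole(k, L, S):
    """Four-pole matrix of a uniform pipe: maps (p, U) at inlet to outlet."""
    Z = rho * c / S  # plane-wave characteristic impedance of the section
    return np.array([[np.cos(k * L), 1j * Z * np.sin(k * L)],
                     [1j * np.sin(k * L) / Z, np.cos(k * L)]])

def transmission_loss_db(freq, L, S_pipe, S_chamber):
    k = 2 * np.pi * freq / c
    A, B, C, D = pipe_four_pole(k, L, S_chamber).ravel()  # chamber section
    Z0 = rho * c / S_pipe
    # Standard TL expression for a four-pole between equal-area end pipes:
    return 20 * np.log10(abs(A + B / Z0 + C * Z0 + D) / 2)

m, S_pipe, L, f = 4.0, 1e-3, 0.3, 500.0  # area ratio, m^2, m, Hz (made-up values)
tl = transmission_loss_db(f, L, S_pipe, m * S_pipe)
kL = 2 * np.pi * f / c * L
tl_closed_form = 10 * np.log10(1 + 0.25 * (m - 1 / m) ** 2 * np.sin(kL) ** 2)
```

More complex mufflers are handled the same way, by matrix-multiplying the four-pole matrices of the elements in series before applying the TL expression.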
Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C
2006-03-20
In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
Soundscapes and the sense of hearing of fishes.
Fay, Richard
2009-03-01
Underwater soundscapes have probably played an important role in the adaptation of the ears and auditory systems of fishes throughout evolutionary time, and for all species. These sounds probably contain important information about the environment and about most objects and events that confront the receiving fish, so that appropriate behavior is possible. For example, the sounds from reefs appear to be used by at least some fishes for orientation and migration. Such environmental sounds should be considered much like "acoustic daylight" that continuously bathes all environments and contains information all organisms can potentially use to form a sort of image of the environment. At present, however, we are generally ignorant of the nature of the ambient sound fields impinging on fishes, and of the adaptive value of processing these fields to resolve the multiple sources of sound. Our field has focused almost exclusively on the adaptive value of processing species-specific communication sounds, and has not considered the informational value of ambient "noise." Since all fishes can detect and process acoustic particle motion, including the directional characteristics of this motion, underwater sound fields are potentially more complex and information-rich than terrestrial acoustic environments. The capacity of one fish species (goldfish) to receive and make use of such sound source information has been demonstrated (sound source segregation and auditory scene analysis), and it is suggested that all vertebrate species have this capacity. A call is made to better understand underwater soundscapes and the associated behaviors they determine in fishes. © 2009 ISZS, Blackwell Publishing and IOZ/CAS.
Numerical Prediction of Combustion-induced Noise using a hybrid LES/CAA approach
NASA Astrophysics Data System (ADS)
Ihme, Matthias; Pitsch, Heinz; Kaltenbacher, Manfred
2006-11-01
Noise generation in technical devices is an increasingly important problem. Jet engines in particular produce sound levels that not only are a nuisance but may also impair hearing. The noise emitted by such engines is generated by different sources such as jet exhaust, fans or turbines, and combustion. Whereas the former acoustic mechanisms are reasonably well understood, combustion-generated noise is not. A methodology for the prediction of combustion-generated noise is developed. In this hybrid approach, unsteady acoustic source terms are obtained from an LES, and the propagation of pressure perturbations is obtained using acoustic analogies. Lighthill's acoustic analogy and a non-linear wave equation, accounting for variable speed of sound, have been employed. Both models are applied to an open diffusion flame. The effects on the far field pressure and directivity due to the variation of speed of sound are analyzed. Results for the sound pressure level will be compared with experimental data.
Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness
NASA Astrophysics Data System (ADS)
Feng, Albert
2002-05-01
Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
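The FMV scheme is described only at a high level above. As a sketch, a per-band frequency-domain minimum-variance beamformer of the kind it resembles can be written as follows; the two-microphone covariance, steering vector, and MVDR formulation are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def fmv_weights(R, d, diag_load=1e-6):
    """Minimum-variance weights for one frequency band.

    R : (M, M) spatial covariance matrix of the mic signals in this band.
    d : (M,) steering vector toward the known target direction.
    Returns w minimizing w^H R w subject to the distortionless
    constraint w^H d = 1 (the classic MVDR solution).
    """
    R = R + diag_load * np.eye(R.shape[0])   # regularize the inversion
    Ri_d = np.linalg.solve(R, d)
    return Ri_d / (d.conj() @ Ri_d)

# Toy two-microphone band: target at broadside (equal-phase steering),
# partially correlated interference on the off-diagonal of R.
R = np.array([[2.0, 0.5], [0.5, 2.0]], dtype=complex)
d = np.array([1.0, 1.0], dtype=complex)
w = fmv_weights(R, d)
print(round((w.conj() @ d).real, 6))  # -> 1.0 (target passed undistorted)
```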
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xuebao, E-mail: lxb08357x@ncepu.edu.cn; Cui, Xiang, E-mail: x.cui@ncepu.edu.cn; Ma, Wenzuo
The corona-generated audible noise (AN) has become one of the decisive factors in the design of high voltage direct current (HVDC) transmission lines. The AN from transmission lines can be attributed to sound pressure pulses which are generated by the multiple corona sources formed on the conductor, i.e., transmission lines. In this paper, the detailed time-domain characteristics of the sound pressure pulses, which are generated by the DC corona discharges formed over the surfaces of a stranded conductor, are investigated systematically in a laboratory setting using a corona cage structure. The amplitudes of the sound pressure pulses and their time intervals are extracted by observing a direct correlation between corona current pulses and corona-generated sound pressure pulses. Based on the statistical characteristics, a stochastic model is presented for simulating the sound pressure pulses due to DC corona discharges occurring on conductors. The proposed stochastic model is validated by comparing the calculated and measured A-weighted sound pressure level (SPL). The proposed model is then used to analyze the influence of the pulse amplitudes and pulse rate on the SPL. Furthermore, a mathematical relationship is found between the SPL and conductor diameter, electric field, and radial distance.
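The stochastic model outlined above can be sketched as a random pulse train whose equivalent SPL follows from the mean-square pressure. The pulse shape, rate, and amplitude distribution below are illustrative assumptions, not the fitted distributions from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
P_REF = 20e-6          # reference pressure, Pa

def corona_pulse_spl(pulse_rate_hz, amp_mean_pa, duration_s=1.0, fs=50_000):
    """Simulate a train of exponentially decaying corona pressure pulses
    and return the (unweighted) equivalent SPL in dB re 20 uPa.

    Pulse arrival times are Poisson and amplitudes exponential --
    illustrative stand-ins for the measured statistics.
    """
    n = int(duration_s * fs)
    t = np.arange(n) / fs
    p = np.zeros(n)
    n_pulses = rng.poisson(pulse_rate_hz * duration_s)
    starts = rng.uniform(0, duration_s, n_pulses)
    amps = rng.exponential(amp_mean_pa, n_pulses)
    tau = 2e-4                                 # pulse decay constant, s
    for t0, a in zip(starts, amps):
        idx = t >= t0
        p[idx] += a * np.exp(-(t[idx] - t0) / tau)
    return 10 * np.log10(np.mean(p**2) / P_REF**2)

print(round(corona_pulse_spl(200, 0.05), 1), "dB")
```

Raising either the pulse rate or the mean amplitude raises the level, which is the dependence the paper's parametric study examines.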
Long-range sound propagation: A review of some experimental data
NASA Technical Reports Server (NTRS)
Sutherland, Louis C.
1990-01-01
Three experimental studies of long range sound propagation carried out or sponsored in the past by NASA are briefly reviewed to provide a partial perspective for some of the analytical studies presented in this symposium. The three studies reviewed cover (1) a unique test of two large rocket engines conducted in such a way as to provide an indication of possible atmospheric scattering loss from a large low-frequency directive sound source, (2) a year-long measurement of low frequency sound propagation which clearly demonstrated the dominant influence of the vertical gradient in the vector sound velocity towards the receiver in defining excess sound attenuation due to refraction, and (3) a series of excess ground attenuation measurements over grass and asphalt surfaces replicated several times under very similar inversion weather conditions.
Satellite sound broadcasting system, portable reception
NASA Technical Reports Server (NTRS)
Golshan, Nasser; Vaisnys, Arvydas
1990-01-01
Studies are underway at JPL in the emerging area of Satellite Sound Broadcast Service (SSBS) for direct reception by low-cost portable, semi-portable, mobile, and fixed radio receivers. This paper addresses the portable reception of digital broadcasting of monophonic audio with source material band limited to 5 kHz (source audio comparable to commercial AM broadcasting). The proposed system provides transmission robustness, uniformity of performance over the coverage area and excellent frequency reuse. Propagation problems associated with indoor portable reception are considered in detail and innovative antenna concepts are suggested to mitigate these problems. It is shown that, with the marriage of proper technologies, a single medium-power satellite can provide substantial direct satellite audio broadcast capability to CONUS in UHF or L bands, for high quality portable indoor reception by low-cost radio receivers.
An Auditory Illusion of Proximity of the Source Induced by Sonic Crystals
Spiousas, Ignacio; Etchemendy, Pablo E.; Vergara, Ramiro O.; Calcagno, Esteban R.; Eguia, Manuel C.
2015-01-01
In this work we report an illusion of proximity of a sound source created by a sonic crystal placed between the source and a listener. This effect seems, at first, paradoxical to naïve listeners since the sonic crystal is an obstacle formed by almost densely packed cylindrical scatterers. Even when the singular acoustical properties of these periodic composite materials have been studied extensively (including band gaps, deaf bands, negative refraction, and birefringence), the possible perceptual effects remain unexplored. The illusion reported here is studied through acoustical measurements and a psychophysical experiment. The results of the acoustical measurements showed that, for a certain frequency range and region in space where the focusing phenomenon takes place, the sonic crystal induces substantial increases in binaural intensity, direct-to-reverberant energy ratio and interaural cross-correlation values, all cues involved in the auditory perception of distance. Consistently, the results of the psychophysical experiment revealed that the presence of the sonic crystal between the sound source and the listener produces a significant reduction of the perceived relative distance to the sound source. PMID:26222281
Assessment of sound levels in a neonatal intensive care unit in tabriz, iran.
Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak
2013-03-01
High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.
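The Leq reported above is an energy average rather than an arithmetic one, which is why brief loud events dominate it. A minimal sketch of how interval levels combine (the dB values are illustrative, not from the study):

```python
import math

def leq(levels_db):
    """Equivalent continuous level: energy-average of dB readings."""
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Energy averaging weights loud intervals more than an arithmetic mean
# would: the arithmetic mean of these readings is 63.3 dB.
print(round(leq([60, 60, 70]), 1))  # -> 66.0
```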
An experimental investigation of sound radiation from a duct with a circumferentially varying liner
NASA Technical Reports Server (NTRS)
Fuller, C. R.; Silcox, R. J.
1983-01-01
The radiation of sound from an asymmetrically lined duct is experimentally studied for various hard-walled standing mode sources. Measurements were made of the directivity of the radiated field and amplitude reflection coefficients in the hard-walled source section. These measurements are compared with baseline hardwall and uniformly lined duct data. The dependence of these characteristics on mode number and angular location of the source is investigated. A comparison between previous theoretical calculations and the experimentally measured results is made and in general good agreement is obtained. For the several cases presented an asymmetry in the liner impedance distribution was found to produce related asymmetries in the radiated acoustic field.
Interactive physically-based sound simulation
NASA Astrophysics Data System (ADS)
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical, as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously-intractable spaces, such as a cathedral, on a desktop computer.
Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry and sound focusing. This is enabled by a novel compact representation that takes a thousand times less memory than a direct scheme, thus reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real-time on large, complex 3D scenes.
Sound field separation with sound pressure and particle velocity measurements.
Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin
2012-12-01
In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array; thus, it is a requirement that all the sources be confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
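The equivalent-source formulation with separate transfer matrices for outgoing and incoming waves can be sketched as a joint regularized least-squares fit. The random matrices below are stand-ins for the actual propagation kernels, and the simple Tikhonov term stands in for the paper's weighting scheme:

```python
import numpy as np

def separate_fields(G_out, G_in, measured, reg=1e-8):
    """Jointly fit outgoing and incoming equivalent-source strengths.

    measured ~ G_out @ q_out + G_in @ q_in is solved in a regularized
    least-squares sense, then the two partial fields are reconstructed.
    """
    G = np.hstack([G_out, G_in])
    q = np.linalg.solve(G.conj().T @ G + reg * np.eye(G.shape[1]),
                        G.conj().T @ measured)
    n_out = G_out.shape[1]
    return G_out @ q[:n_out], G_in @ q[n_out:]

# Synthetic check: 8 measurement points, 3 equivalent sources per side.
rng = np.random.default_rng(1)
G_out, G_in = rng.normal(size=(8, 3)), rng.normal(size=(8, 3))
true_out = G_out @ rng.normal(size=3)
true_in = G_in @ rng.normal(size=3)
p_out, p_in = separate_fields(G_out, G_in, true_out + true_in)
print(np.allclose(p_out, true_out, atol=1e-4))  # -> True
```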
Possibilities of psychoacoustics to determine sound quality
NASA Astrophysics Data System (ADS)
Genuit, Klaus
For some years, acoustic engineers have increasingly become aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with regard to designing sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation to describe subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratories by means of free-field listening tests with a single sound source and simple signals. Therefore, the results achieved cannot be transferred to complex sound situations with several spatially distributed sound sources without difficulty. Due to the directional hearing and selectivity of human hearing, individual sound events can be selected among many. As early as the late 1970s, a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system was further developed, particularly by the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals regarding physical and psychoacoustic procedures. 
Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domain so that the signal components responsible for noise annoyance can be identified. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine the sound quality, acoustic comfort, or annoyance of sound events.
Phonotactic flight of the parasitoid fly Emblemasoma auditrix (Diptera: Sarcophagidae).
Tron, Nanina; Lakes-Harlan, Reinhard
2017-01-01
The parasitoid fly Emblemasoma auditrix locates its hosts using acoustic cues from sound producing males of the cicada Okanagana rimosa. Here, we experimentally analysed the flight path of the phonotaxis from a landmark to the target, a hidden loudspeaker in the field. During flight, the fly showed only small lateral deviations. The vertical flight direction angles were initially negative (directed downwards relative to starting position), grew positive (directed upwards) in the second half of the flight, and finally flattened (directed horizontally or slightly upwards), typically resulting in a landing above the loudspeaker. This phonotactic flight pattern was largely independent of sound pressure level or target distance, but depended on the elevation of the sound source. The flight velocity was partially influenced by sound pressure level and distance, but also by elevation. The more elevated the target, the lower was the speed. Both flight accuracy and landing precision increased with the elevation of the target. The minimal vertical angle difference eliciting differences in behaviour was 10°. By changing the elevation of the acoustic target after take-off, we showed that the fly is able to orientate acoustically while flying.
Yost, William A; Zhong, Xuan; Najam, Anbar
2015-11-01
In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
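The paper's world-centric hypothesis amounts to combining a head-centric acoustic cue with head-pose information from vision or the vestibular system. A trivial sketch of that coordinate transform (angles illustrative):

```python
def world_azimuth(head_centric_deg, head_pose_deg):
    """World-frame source azimuth = head-frame cue + head orientation."""
    return (head_centric_deg + head_pose_deg) % 360

# A source fixed at 90 degrees in the world: as the listener's head
# turns, the head-centric cue changes but the inferred world position
# stays put -- so a rotating listener perceives a stationary source.
for head in (0, 30, 60):
    cue = (90 - head) % 360
    print(world_azimuth(cue, head))   # -> 90 each time
```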
Near-Field Sound Localization Based on the Small Profile Monaural Structure
Kim, Youngwoong; Kim, Keonwook
2015-01-01
The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by the dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths in a vertical fashion and rectangular wings positioned between the pipes in radial directions. The sound from an individual direction travels through the nearest open pipe, which generates the particular fundamental frequency according to the acoustic resonance. The Cepstral parameter is modified to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA) are derived by the designed model. From an azimuthal distance of 3–15 cm from the outer body of the pipes, the extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. The closer positions to the system demonstrate higher accuracy, and the overall hit rate performance is 78% up to 15 cm away from the structure body. PMID:26580618
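The modified Cepstral parameter is not specified in the abstract. As a sketch, a standard real-cepstrum fundamental-frequency estimate of the kind the system builds on might look like this; the sampling rate, the 440 Hz "pipe resonance," and the quefrency search band are illustrative assumptions:

```python
import numpy as np

def cepstral_f0(signal, fs, fmin=300.0, fmax=1000.0):
    """Estimate the fundamental frequency via the real-cepstrum peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
    q_lo, q_hi = int(fs / fmax), int(fs / fmin)   # quefrency search band
    q_peak = q_lo + np.argmax(cepstrum[q_lo:q_hi])
    return fs / q_peak

# Harmonic-rich tone standing in for the resonance of one open pipe.
fs = 16_000
t = np.arange(int(0.1 * fs)) / fs
f0 = 440.0
tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in (1, 2, 3))
print(round(cepstral_f0(tone, fs)))
```

Mapping the estimated frequency back to a pipe length, and hence to an arrival angle, would then follow from the open-pipe resonance relation.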
Physics of Acoustic Radiation from Jet Engine Inlets
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Parrish, Sarah A.; Envia, Edmane; Chien, Eugene W.
2012-01-01
Numerical simulations of acoustic radiation from a jet engine inlet are performed using advanced computational aeroacoustics (CAA) algorithms and high-quality numerical boundary treatments. As a model of modern commercial jet engine inlets, the inlet geometry of the NASA Source Diagnostic Test (SDT) is used. Fan noise consists of tones and broadband sound. This investigation considers the radiation of tones associated with upstream propagating duct modes. The primary objective is to identify the dominant physical processes that determine the directivity of the radiated sound. Two such processes have been identified. They are acoustic diffraction and refraction. Diffraction is the natural tendency for an acoustic wave to follow a curved solid surface as it propagates. Refraction is the turning of the direction of propagation of sound waves by mean flow gradients. Parametric studies on the changes in the directivity of radiated sound due to variations in forward flight Mach number and duct mode frequency, azimuthal mode number, and radial mode number are carried out. It is found there is a significant difference in directivity for the radiation of the same duct mode from an engine inlet when operating in static condition and in forward flight. It will be shown that the large change in directivity is the result of the combined effects of diffraction and refraction.
Blue whales respond to simulated mid-frequency military sonar
Goldbogen, Jeremy A.; Southall, Brandon L.; DeRuiter, Stacy L.; Calambokidis, John; Friedlaender, Ari S.; Hazen, Elliott L.; Falcone, Erin A.; Schorr, Gregory S.; Douglas, Annie; Moretti, David J.; Kyburg, Chris; McKenna, Megan F.; Tyack, Peter L.
2013-01-01
Mid-frequency military (1–10 kHz) sonars have been associated with lethal mass strandings of deep-diving toothed whales, but the effects on endangered baleen whale species are virtually unknown. Here, we used controlled exposure experiments with simulated military sonar and other mid-frequency sounds to measure behavioural responses of tagged blue whales (Balaenoptera musculus) in feeding areas within the Southern California Bight. Despite using source levels orders of magnitude below some operational military systems, our results demonstrate that mid-frequency sound can significantly affect blue whale behaviour, especially during deep feeding modes. When a response occurred, behavioural changes varied widely from cessation of deep feeding to increased swimming speed and directed travel away from the sound source. The variability of these behavioural responses was largely influenced by a complex interaction of behavioural state, the type of mid-frequency sound and received sound level. Sonar-induced disruption of feeding and displacement from high-quality prey patches could have significant and previously undocumented impacts on baleen whale foraging ecology, individual fitness and population health. PMID:23825206
NASA Astrophysics Data System (ADS)
Rasmussen, Karsten B.; Juhl, Peter
2004-05-01
Boundary element method (BEM) calculations are used for the purpose of predicting the acoustic influence of the human head in two cases. In the first case the sound source is the mouth and in the second case the sound is plane waves arriving from different directions in the horizontal plane. In both cases the sound field is studied in relation to two positions above the right ear being representative of hearing aid microphone positions. Both cases are relevant for hearing aid development. The calculations are based upon a direct BEM implementation in Matlab. The meshing is based on the original geometrical data files describing the B&K Head and Torso Simulator 4128 combined with a 3D scan of the pinna.
Ultrasound Analysis of Slurries
Soong, Yee and Blackwell, Arthur G.
2005-11-01
An autoclave reactor allows for the ultrasonic analysis of slurry concentration and particle size distribution at elevated temperatures and pressures while maintaining the temperature- and pressure-sensitive ultrasonic transducers under ambient conditions. The reactor vessel is a hollow stainless steel cylinder containing the slurry which includes a stirrer and an N2 gas source for directing gas bubbles through the slurry. Input and output transducers are connected to opposed lateral portions of the hollow cylinder for respectively directing sound waves through the slurry and receiving these sound waves after transmission through the slurry, where changes in sound wave velocity and amplitude can be used to measure slurry parameters. Ultrasonic adapters connect the transducers to the reactor vessel in a sealed manner and isolate the transducers from the hostile conditions within the vessel without ultrasonic signal distortion or losses.
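The measurement principle described, inferring slurry properties from changes in transit time and amplitude over a fixed path, reduces to simple kinematics. A sketch with illustrative numbers (the path length and amplitudes are assumptions, not values from the patent):

```python
import math

def sound_speed(path_length_m, transit_time_s):
    """Speed of sound across the vessel from the pulse transit time."""
    return path_length_m / transit_time_s

def attenuation_db(a_in, a_out):
    """Amplitude loss across the slurry, in dB."""
    return 20 * math.log10(a_in / a_out)

# 10 cm path, 66 us transit time -> ~1515 m/s (close to water; suspended
# solids and gas bubbles shift both the velocity and the attenuation,
# which is what makes the measurement informative).
print(round(sound_speed(0.10, 66e-6), 1),
      round(attenuation_db(1.0, 0.5), 2))  # -> 1515.2 6.02
```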
Reproduction of a higher-order circular harmonic field using a linear array of loudspeakers.
Lee, Jung-Min; Choi, Jung-Woo; Kim, Yang-Hann
2015-03-01
This paper presents a direct formula for reproducing a sound field consisting of higher-order circular harmonics with polar phase variation. Sound fields with phase variation can be used for synthesizing various spatial attributes, such as the perceived width or the location of a virtual sound source. To reproduce such a sound field using a linear loudspeaker array, the driving function of the array is derived in the format of an integral formula. The proposed function shows fewer reproduction errors than a conventional formula focused on magnitude variations. In addition, analysis of the sweet spot reveals that its shape can be asymmetric, depending on the order of harmonics.
Mathematically trivial control of sound using a parametric beam focusing source.
Tanaka, Nobuo; Tanaka, Motoki
2011-01-01
By exploiting a case regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). In a dipole-like arrangement, where one source serves as the primary sound source and the other as a control sound source, the control effect in minimizing the total acoustic power depends on the distance between the two. When the distance becomes zero, the total acoustic power becomes null, hence nothing less than a trivial case. Because of practical constraints, there exist difficulties in placing a control source close enough to a primary source. However, by projecting a sound beam of a parametric array loudspeaker onto the target sound source (primary source), a virtual sound source may be created on the target sound source, thereby enabling the collocation of the sources. In order to further ensure feasibility of the trivial case, a PBFS is then introduced in an effort to match the size of the two sources. The reflected sound wave of the PBFS, which is tantamount to the virtual sound source output, aims to suppress the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology.
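The "trivial case" can be quantified with a standard active-noise-control result: for an anti-phase control monopole at distance d from the primary monopole, the total radiated power, relative to that of the two sources radiating independently, scales as 1 - sin(kd)/(kd) and vanishes as d approaches 0. A sketch (the frequency and distances are illustrative assumptions):

```python
import math

def power_ratio(distance_m, freq_hz, c=343.0):
    """Total radiated power of an anti-phase monopole pair, relative to
    the power of the two sources radiating independently: 1 - sinc(kd)."""
    kd = 2 * math.pi * freq_hz / c * distance_m
    return 1.0 - math.sin(kd) / kd if kd else 0.0

# The closer the control source sits to the primary, the stronger the
# global cancellation; at d = 0 the total power vanishes (trivial case).
for d in (0.5, 0.05, 0.005):
    print(f"d = {d:5.3f} m -> {power_ratio(d, 500):.4f}")
```

This is why collocating a virtual source on the primary via the PBFS is attractive: it drives the effective separation, and hence the residual power, toward zero.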
Spatial Hearing with Incongruent Visual or Auditory Room Cues
Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten
2016-01-01
In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
Mellow, Tim; Kärkkäinen, Leo
2014-03-01
An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.
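The starting point solved analytically in the paper is the Kirchhoff-Helmholtz integral; in one common sign convention (conventions vary with the choice of normal direction), with the free-space Green's function $G$:

```latex
p(\mathbf{r}) = \oint_S \left[ p(\mathbf{r}_s)\,\frac{\partial G(\mathbf{r}\,|\,\mathbf{r}_s)}{\partial n}
  - G(\mathbf{r}\,|\,\mathbf{r}_s)\,\frac{\partial p(\mathbf{r}_s)}{\partial n} \right] \mathrm{d}S,
\qquad G(\mathbf{r}\,|\,\mathbf{r}_s) = \frac{e^{\mathrm{i}kR}}{4\pi R},\quad R = |\mathbf{r}-\mathbf{r}_s|
```

The $\partial p/\partial n$ term corresponds to a monopole layer and the $\partial G/\partial n$ term to a dipole layer, which is consistent with the abstract's finding that, for an infinite planar array, either a monopole or a dipole curtain alone reproduces the diverging field on the far side.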
The Robustness of Acoustic Analogies
NASA Technical Reports Server (NTRS)
Freund, J. B.; Lele, S. K.; Wei, M.
2004-01-01
Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
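The uniform-base-flow member of this family is Lighthill's analogy, which can be written in the L(q) = S(q) form as:

```latex
\underbrace{\left(\frac{\partial^2}{\partial t^2} - c_0^2 \nabla^2\right)}_{L}\,\rho'
  = \underbrace{\frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j}}_{S},
\qquad T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij} - \tau_{ij}
```

The rearrangement is exact because every term of the Lighthill stress tensor $T_{ij}$ is built from the full flow variables, so any error in modeling $T_{ij}$ maps directly into the predicted sound, which is precisely what the robustness comparison above probes.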
Reid, Andrew; Marin-Cudraz, Thibaut
2016-01-01
Small animals typically localize sound sources by means of complex internal connections and baffles that effectively increase time or intensity differences between the two ears. However, some miniature acoustic species achieve directional hearing without such devices, indicating that other mechanisms have evolved. Using 3D laser vibrometry to measure tympanum deflection, we show that female lesser waxmoths (Achroia grisella) can orient toward the 100-kHz male song, because each ear functions independently as an asymmetric pressure gradient receiver that responds sharply to high-frequency sound arriving from an azimuth angle 30° contralateral to the animal's midline. We found that females presented with a song stimulus while running on a locomotion compensation sphere follow a trajectory 20°–40° to the left or right of the stimulus heading but not directly toward it, movement consistent with the tympanum deflections and suggestive of a monaural mechanism of auditory tracking. Moreover, females losing their track typically regain it by auditory scanning—sudden, wide deviations in their heading—and females initially facing away from the stimulus quickly change their general heading toward it, orientation indicating superior ability to resolve the front–rear ambiguity in source location. X-ray computer-aided tomography (CT) scans of the moths did not reveal any internal coupling between the two ears, confirming that an acoustic insect can localize a sound source based solely on the distinct features of each ear. PMID:27849607
Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O
2018-06-01
In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker, which were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same response methods as Experiment 1, but with the methods interleaved, and showed a weak, though complex, mutual influence. Even so, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation of distances beyond 2 m.
A numerical study of fundamental shock noise mechanisms. Ph.D. Thesis - Cornell Univ.
NASA Technical Reports Server (NTRS)
Meadows, Kristine R.
1995-01-01
The results of this thesis demonstrate that direct numerical simulation can predict sound generation in unsteady aerodynamic flows containing shock waves. Shock waves can be significant sources of sound in high speed jet flows, on helicopter blades, and in supersonic combustion inlets. Direct computation of sound permits the prediction of noise levels in the preliminary design stage and can be used as a tool to focus experimental studies, thereby reducing cost and increasing the probability of a successful, quiet product in less time. This thesis reveals and investigates two mechanisms fundamental to sound generation by shocked flows: shock motion and shock deformation. Shock motion is modeled by the interaction of a sound wave with a shock. During the interaction, the shock wave begins to move and the sound pressure is amplified as the wave passes through the shock. The numerical approach presented in this thesis is validated by comparing results obtained in a quasi-one-dimensional simulation with linear theory. Analysis of the perturbation energy demonstrated for the first time that acoustic energy is generated by the interaction. Shock deformation is investigated by the numerical simulation of a ring vortex interacting with a shock, an interaction that models the passage of turbulent structures through the shock wave. The simulation demonstrates that both acoustic waves and contact surfaces are generated downstream during the interaction. Analysis demonstrates that the acoustic wave spreads cylindrically, that the sound intensity is highly directional, and that the sound pressure level increases significantly with increasing shock strength. The effect of shock strength on sound pressure level is consistent with experimental observations of shock noise, indicating that the interaction of a ring vortex with a shock wave correctly models a dominant mechanism of shock noise generation.
Fluid dynamic aspects of jet noise generation
NASA Technical Reports Server (NTRS)
1974-01-01
The location of the noise sources within jet flows, their relative importance to the overall radiated field, and the mechanisms by which noise generation occurs, are studied by detailed measurements of the level and spectral composition of the radiated sound in the far field. Directional microphones are used to isolate the contribution to the radiated sound of small regions of the flow, and for cross-correlation between the radiated acoustic field and either the velocity fluctuations or the pressure fluctuations in the source field. Acquired data demonstrate the supersonic convection of the acoustic field and the resulting limited upstream influence of the signal source, as well as a possible increase of signal strength as it propagates toward the centerline of the flow.
NASA Technical Reports Server (NTRS)
Swift, G.; Mungur, P.
1979-01-01
General procedures for the prediction of component noise levels incident upon airframe surfaces during cruise are developed. Contributing noise sources are those associated with the propulsion system, the airframe and the laminar flow control (LFC) system. Transformation procedures from the best prediction base of each noise source to the transonic cruise condition are established. Two approaches to LFC/acoustic criteria are developed. The first is a semi-empirical extension of the X-21 LFC/acoustic criteria to include sensitivity to the spectrum and directionality of the sound field. In the second, the more fundamental problem of how sound excites boundary layer disturbances is analyzed by deriving and solving an inhomogeneous Orr-Sommerfeld equation in which the source terms are proportional to the production and dissipation of sound induced fluctuating vorticity. Numerical solutions are obtained and compared with corresponding measurements. Recommendations are made to improve and validate both the cruise noise prediction methods and the LFC/acoustic criteria.
On some nonlinear effects in ultrasonic fields
Tjotta
2000-03-01
Nonlinear effects associated with intense sound fields in fluids are considered theoretically. Special attention is directed to higher-order effects that cannot be described within the standard propagation models of nonlinear acoustics (the KZK and Burgers equations). The analysis is based on the fundamental equations of motion for a thermoviscous fluid for which thermal equations of state exist. Model equations are derived and used to analyze nonlinear sources for the generation of flow and heat, and other changes in the ambient state of the fluid. Fluctuations in the coefficients of viscosity and thermal conductivity caused by the sound field are accounted for. Also considered are nonlinear effects induced in the fluid by flexural vibrations. The intensity and absorption of finite-amplitude sound waves are calculated and related to the sources for the generation of higher-order effects.
Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources
Holzrichter, John F [Berkeley, CA]; Ng, Lawrence C [Danville, CA]
2007-03-13
A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as animate sound sources such as the human voice, or from machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Lo, Kam W
2017-03-01
When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line, at constant altitude and at a constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from source to sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations of the flight parameter estimates can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described, and their performances are evaluated using both simulated and real data.
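The Cramer-Rao argument rests on the additivity of Fisher information across independent, unbiased measurement sets; a minimal numerical sketch with made-up information values (not the paper's flight-parameter model):

```python
import math

def crlb_std(*fisher_infos: float) -> float:
    """CRLB standard deviation of a scalar parameter estimate from the
    summed Fisher information of independent measurement sets."""
    return 1.0 / math.sqrt(sum(fisher_infos))

I_doa = 4.0   # hypothetical Fisher information from DOA data alone
I_if  = 12.0  # hypothetical additional information from IF data

print(crlb_std(I_doa))        # 0.5  : DOA measurements only
print(crlb_std(I_doa, I_if))  # 0.25 : DOA + IF, strictly smaller bound
```

Because Fisher information can only grow when independent IF measurements are added, the CRLB standard deviation can only shrink, which is the qualitative content of the bound analysis described above.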
On the sound field radiated by a tuning fork
NASA Astrophysics Data System (ADS)
Russell, Daniel A.
2000-12-01
When a sounding tuning fork is brought close to the ear, and rotated about its long axis, four distinct maxima and minima are heard. However, when the same tuning fork is rotated while being held at arm's length from the ear only two maxima and minima are heard. Misconceptions concerning this phenomenon are addressed and the fundamental mode of the fork is described in terms of a linear quadrupole source. Measured directivity patterns in the near field and far field of several forks agree very well with theoretical predictions for a linear quadrupole. Other modes of vibration are shown to radiate as dipole and lateral quadrupole sources.
Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea
NASA Astrophysics Data System (ADS)
Oshinsky, Michael Lee
A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ: the two ears are contained in one air sac, and a cuticular bridge with a flexible spring-like structure at its center connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents and present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons.
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic walking behavior in Ormia ochracea. I also quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.
Modeling of Turbulence Generated Noise in Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2004-01-01
A numerically calculated Green's function is used to predict the jet noise spectrum and its far-field directivity. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the nonlinear terms on the right-hand side identified as the source. In this paper, contributions from the so-called self- and shear-noise source terms are discussed. A Reynolds-averaged Navier-Stokes solution yields the required mean flow as well as the time and length scales of a noise-generating turbulent eddy. A non-compact source, with exponential temporal and spatial functions, is used to describe the turbulence velocity correlation tensors. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle, as well as the angularity of sound at moderate Mach numbers, at high subsonic and supersonic acoustic Mach numbers the polar directivity of radiated sound is not entirely captured by this Green's function. Results presented for Mach 0.5 and 0.9 isothermal jets, as well as a Mach 0.8 hot jet, indicate that near the peak radiation angle a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise.
Sound localization by echolocating bats
NASA Astrophysics Data System (ADS)
Aytekin, Murat
Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. 
The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.
Sound-diffracting flap in the ear of a bat generates spatial information.
Müller, Rolf; Lu, Hongwang; Buck, John R
2008-03-14
Sound diffraction by the mammalian ear generates source-direction information. We have obtained an immediate quantification of this information from numerical predictions. We demonstrate the power of our approach by showing that a small flap in a bat's pinna generates useful information over a large set of directions in a central band of frequencies: presence of the flap more than doubled the solid angle with direction information above a given threshold. From the workings of the employed information measure, the Cramér-Rao lower bound, we can explain how physical shape is linked to sensory information via a strong sidelobe with frequency-dependent orientation in the directivity pattern. This method could be applied to any other mammal species with pinnae to quantify the relative importance of pinna structures' contributions to directional information and to facilitate interspecific comparisons of pinna directivity patterns.
An evaluation of differences due to changing source directivity in room acoustic computer modeling
NASA Astrophysics Data System (ADS)
Vigeant, Michelle C.; Wang, Lily M.
2004-05-01
This project examines the effects of changing source directivity in room acoustic computer models on objective parameters and subjective perception. Acoustic parameters and auralizations calculated from omnidirectional versus directional sources were compared. Three realistic directional sources were used, measured in a limited number of octave bands from a piano, singing voice, and violin. A highly directional source that beams only within one sixteenth of a sphere was also tested. Objectively, there were differences of 5% or more in reverberation time (RT) between the realistic directional and omnidirectional sources. Between the beaming directional and omnidirectional sources, differences in clarity were close to the just-noticeable-difference (jnd) criterion of 1 dB. Subjectively, participants had great difficulty distinguishing between the realistic and omnidirectional sources; very few could discern the differences in RTs. However, a larger percentage (32% vs 20%) could differentiate between the beaming and omnidirectional sources, as well as the respective differences in clarity. Further studies of the objective results from different beaming sources have been pursued. The direction of the beaming source in the room is changed, as well as the beamwidth. The objective results are analyzed to determine if differences fall within the jnd of sound-pressure level, RT, and clarity.
NASA Technical Reports Server (NTRS)
Cook, R. K.
1969-01-01
The propagation of sound waves at infrasonic frequencies (oscillation periods of 1.0 to 1000 seconds) in the atmosphere is being studied by a network of seven stations separated geographically by distances of the order of thousands of kilometers. The stations measure the following characteristics of infrasonic waves: (1) the amplitude and waveform of the incident sound pressure, (2) the direction of propagation of the wave, (3) the horizontal phase velocity, and (4) the distribution of sound wave energy at various frequencies of oscillation. Infrasonic sources that were identified and studied include the aurora borealis, tornadoes, volcanoes, gravity waves on the oceans, earthquakes, and atmospheric instability waves caused by winds at the tropopause. Waves of unknown origin seem to radiate from several geographical locations, including one in Argentina.
Mode detection in turbofan inlets from near field sensor arrays.
Castres, Fabrice O; Joseph, Phillip F
2007-02-01
Knowledge of the modal content of the sound field radiated from a turbofan inlet is important for source characterization and for helping to determine noise generation mechanisms in the engine. An inverse technique for determining the mode amplitudes at the duct outlet is proposed using pressure measurements made in the near field. The radiated sound pressure from a duct is modeled by directivity patterns of cut-on modes in the near field using a model based on the Kirchhoff approximation for flanged ducts with no flow. The resulting system of equations is ill posed and it is shown that the presence of modes with eigenvalues close to a cutoff frequency results in a poorly conditioned directivity matrix. An analysis of the conditioning of this directivity matrix is carried out to assess the inversion robustness and accuracy. A physical interpretation of the singular value decomposition is given and allows us to understand the issues of ill conditioning as well as the detection performance of the radiated sound field by a given sensor array.
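The ill-conditioning argument can be illustrated with a 2x2 toy example (illustrative numbers, not the paper's duct modes): when two columns of the directivity matrix become nearly parallel, the ratio of its singular values blows up and the inversion amplifies measurement noise.

```python
import math

def cond_2x2(a: float, b: float, c: float, d: float) -> float:
    """Condition number (ratio of singular values) of [[a, b], [c, d]],
    computed from the eigenvalues of the 2x2 Gram matrix A^T A."""
    tr = a * a + c * c + b * b + d * d      # trace of A^T A
    det = (a * d - b * c) ** 2              # det(A^T A) = det(A)^2
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    lam_max, lam_min = (tr + disc) / 2.0, (tr - disc) / 2.0
    return math.sqrt(lam_max / lam_min)

# Two well-separated mode directivities -> benign inversion.
print(cond_2x2(1.0, 0.0, 0.0, 1.0))     # 1.0
# Two nearly identical directivities (as for a mode close to cutoff
# shadowing its neighbor) -> severely ill-conditioned inversion.
print(cond_2x2(1.0, 1.0, 1.0, 1.001))   # >> 1
```

A singular value decomposition of the full directivity matrix generalizes this diagnosis: small singular values mark mode combinations that the sensor array can barely distinguish, which is the physical content of the conditioning analysis described above.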
Experimental verification of enhanced sound transmission from water to air at low frequencies.
Calvo, David C; Nicholas, Michael; Orris, Gregory J
2013-11-01
Laboratory measurements of enhanced sound transmission from water to air at low frequencies are presented. The pressure at a monitoring hydrophone is found to decrease for shallow source depths in agreement with the classical theory of a monopole source in proximity to a pressure release interface. On the other hand, for source depths below 1/10 of an acoustic wavelength in water, the radiation pattern in the air measured by two microphones becomes progressively omnidirectional in contrast to the classical geometrical acoustics picture in which sound is contained within a cone of 13.4° half angle. The measured directivities agree with wavenumber integration results for a point source over a range of frequencies and source depths. The wider radiation pattern owes itself to the conversion of evanescent waves in the water into propagating waves in the air that fill the angular space outside the cone. A ratio of pressure measurements made using an on-axis microphone and a near-axis hydrophone are also reported and compared with theory. Collectively, these pressure measurements are consistent with the theory of anomalous transparency of the water-air interface in which a large fraction of acoustic power emitted by a shallow source is radiated into the air.
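The 13.4-degree cone quoted above is the water-to-air critical angle from geometrical acoustics; a quick check, assuming nominal sound speeds of 343 m/s in air and 1482 m/s in water:

```python
import math

# Geometrical-acoustics cone half-angle for water-to-air transmission:
# asin(c_air / c_water), with nominal sound speeds assumed here.
c_air, c_water = 343.0, 1482.0
half_angle = math.degrees(math.asin(c_air / c_water))
print(round(half_angle, 1))  # -> 13.4
```

Anomalous transparency refers to sound escaping outside this cone: for source depths shallower than about a tenth of a wavelength, evanescent components in the water convert to propagating waves in the air and fill the angular space beyond the critical angle.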
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiations in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method on the source identification and sound radiation modeling.
NASA Astrophysics Data System (ADS)
Breitzke, Monika; Bohlen, Thomas
2010-05-01
Modelling sound propagation in the ocean is an essential tool to assess the potential risk of air-gun shots on marine mammals. Based on a 2.5-D finite-difference code a full waveform modelling approach is presented, which determines both sound exposure levels of single shots and cumulative sound exposure levels of multiple shots fired along a seismic line. Band-limited point source approximations of compact air-gun clusters deployed by R/V Polarstern in polar regions are used as sound sources. Marine mammals are simulated as static receivers. Applications to deep and shallow water models including constant and depth-dependent sound velocity profiles of the Southern Ocean show dipole-like directivities in case of single shots and tubular cumulative sound exposure level fields beneath the seismic line in case of multiple shots. Compared to a semi-infinite model an incorporation of seafloor reflections enhances the seismically induced noise levels close to the sea surface. Refraction due to sound velocity gradients and sound channelling in near-surface ducts are evident, but affect only low to moderate levels. Hence, exposure zone radii derived for different hearing thresholds are almost independent of the sound velocity structure. With decreasing thresholds radii increase according to a spherical 20 log10 r law in case of single shots and according to a cylindrical 10 log10 r law in case of multiple shots. A doubling of the shot interval reduces the cumulative sound exposure levels by 3 dB and halves the radii. The ocean bottom properties only slightly affect the radii in shallow waters, if the normal incidence reflection coefficient exceeds 0.2.
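The radius scalings quoted above follow directly from the spreading laws; a small sketch of the arithmetic:

```python
def radius_scale(delta_db: float, n: float) -> float:
    """Factor by which an exposure-zone radius changes when the level
    criterion changes by delta_db under an n*log10(r) spreading law
    (n = 20: spherical / single shot; n = 10: cylindrical / cumulative)."""
    return 10.0 ** (delta_db / n)

# A 3 dB reduction in cumulative exposure level (e.g. from doubling the
# shot interval) roughly halves the exposure radius under 10*log10(r):
print(radius_scale(-3.0, 10.0))  # ~0.5
# Under the spherical 20*log10(r) law the same 3 dB change shrinks the
# radius by only a factor of ~0.71:
print(radius_scale(-3.0, 20.0))  # ~0.71
```

The same arithmetic explains the threshold dependence: each 10 dB drop in the hearing threshold multiplies the single-shot radius by about 3.2 (spherical law) but the cumulative radius by 10 (cylindrical law).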
NASA Astrophysics Data System (ADS)
Lee, Yang-Sub
A time-domain numerical algorithm for solving the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear parabolic wave equation is developed for pulsed, axisymmetric, finite amplitude sound beams in thermoviscous fluids. The KZK equation accounts for the combined effects of diffraction, absorption, and nonlinearity at the same order of approximation. The accuracy of the algorithm is established via comparison with analytical solutions for several limiting cases, and with numerical results obtained from a widely used algorithm for solving the KZK equation in the frequency domain. The time domain algorithm is used to investigate waveform distortion and shock formation in directive sound beams radiated by pulsed circular piston sources. New results include predictions for the entire process of self-demodulation, and for the effect of frequency modulation on pulse envelope distortion. Numerical results are compared with measurements, and focused sources are investigated briefly.
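For reference, the KZK equation solved by the algorithm is usually written in retarded time $\tau = t - z/c_0$, with the diffraction, absorption, and nonlinearity terms appearing at the same order of approximation:

```latex
\frac{\partial^2 p}{\partial z\,\partial \tau}
  = \frac{c_0}{2}\,\nabla_{\!\perp}^2 p
  + \frac{\delta}{2c_0^3}\,\frac{\partial^3 p}{\partial \tau^3}
  + \frac{\beta}{2\rho_0 c_0^3}\,\frac{\partial^2 p^2}{\partial \tau^2}
```

where $\delta$ is the sound diffusivity and $\beta$ the coefficient of nonlinearity; a time-domain scheme marches this equation forward in $z$, applying each operator directly to the sampled waveform $p(\tau)$ rather than to its harmonic components.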
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.; Envia, E.
2002-01-01
In many cases of technological interest solid boundaries play a direct role in the aerodynamic sound generation process and their presence often results in a large increase in the acoustic radiation. A generalized treatment of the emission of sound from moving boundaries is presented. The approach is similar to that of Ffowcs Williams and Hawkings (1969) but the effect of the surrounding mean flow is explicitly accounted for. The results are used to develop a rational framework for the prediction of internally generated aero-engine noise. The final formulas suggest some new noise sources that may be of practical significance.
Monaural Sound Localization Revisited
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Kistler, Doris J.
1997-01-01
Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.
NASA Astrophysics Data System (ADS)
Sugimoto, Tsuneyoshi; Nakagawa, Yutaka; Shirakawa, Takashi; Sano, Motoaki; Ohaba, Motoyoshi; Shibusawa, Sakae
2013-07-01
We propose a method for monitoring and imaging the water distribution in the rooting zone of plants using sound vibration. In this study, water distribution measurements in the horizontal and vertical directions in the soil layer were examined to confirm whether temporal changes in the volume water content of the soil could be estimated from temporal changes in propagation velocity. A scanning laser Doppler vibrometer (SLDV) is used to measure the vibration velocity of the soil surface, because highly precise vibration velocity measurements at many points can be carried out automatically. Sand with a uniform particle size distribution is used for the soil, as it has high plasticity; that is, the sand can return to a dry state easily even if it is soaked with water. A giant magnetostriction vibrator or a flat speaker is used as the sound source. A soil moisture sensor, which measures the water content of the soil via its electric permittivity, is also installed in the sand. From the results of the vibration measurements and the soil moisture sensors, we confirm that temporal changes of the water distribution in the sand under the negative-pressure irrigation system can be estimated in both the horizontal and vertical directions from the propagation velocity of sound. In the future, we therefore plan to develop an insertion-type sound source and a receiver using acceleration sensors, and we intend to examine whether our method can be applied even in commercial soil with growing plants.
NASA Astrophysics Data System (ADS)
Kim, Daesung; Kim, Kihyun; Wang, Semyung; Lee, Sung Q.; Crocker, Malcolm J.
2011-11-01
This paper mainly addresses design methods for near-field loudspeaker arrays. These methods have been studied recently since they can be used to realize a personal audio space without the use of headphones. From a practical viewpoint, they can also be used to form a directional sound beam within a short distance from the sources, especially using a linear loudspeaker array. In this regard, we re-analyzed previous near-field beamforming methods in order to obtain a comprehensive near-field beamforming formulation. Broadband directivity control is proposed as a multi-objective optimization that maximizes the directivity at a desired gain, where both directivity and gain are commonly used array performance measures. This control method aims to form a directive sound beam within a short distance while widening the frequency range of the beamforming. Simulation and experimental results demonstrate that broadband directivity control achieves higher directivity and gain over the whole frequency range of interest compared with previous beamforming methods.
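As a rough illustration of the directivity measure such beamformers trade against gain, here is a minimal far-field delay-and-sum sketch for an unweighted linear array. The element count, spacing, and frequency are hypothetical; this is not the paper's near-field formulation, which additionally accounts for spherical spreading at short range.

```python
import cmath
import math

def array_response(n_elems, spacing, freq, theta, c=343.0):
    """Normalized far-field response of an unweighted linear array steered
    to broadside. theta is measured from the broadside direction (radians),
    spacing in metres, freq in Hz, c = speed of sound in air (m/s)."""
    k = 2.0 * math.pi * freq / c
    # Delay-and-sum with zero steering delays: sum the phase-shifted
    # element contributions and normalize by the element count.
    s = sum(cmath.exp(1j * k * m * spacing * math.sin(theta))
            for m in range(n_elems))
    return abs(s) / n_elems

# On axis (theta = 0) the elements add coherently, so the normalized
# response is 1; off axis the phasors partially cancel, and the main
# lobe narrows (directivity rises) as the array grows.
on_axis = array_response(8, 0.05, 1000.0, 0.0)
off_axis = array_response(8, 0.05, 1000.0, 0.5)
```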
Interaural intensity difference limen.
DOT National Transportation Integrated Search
1967-05-01
The ability to judge the direction (the azimuth) of a sound source and to discriminate it from others is often essential to flyers. A major factor in the judgment process is the interaural intensity difference that the pilot can perceive. Three kinds...
Response of multi-panel assembly to noise from a jet in forward motion
NASA Technical Reports Server (NTRS)
Bayliss, A.; Maestrello, L.; Mcgreevy, J. L.; Fenno, C. C., Jr.
1995-01-01
A model of the interaction of the noise from a spreading subsonic jet with a four-panel assembly is studied numerically in two dimensions. The effect of forward motion of the jet is accounted for by considering a uniform flow field superimposed on a mean jet exit profile. The jet is initially excited by a pulse-like source inserted into the flow field. The pulse triggers instabilities associated with the inviscid instability of the jet shear layer. These instabilities generate sound, which in turn excites the panels. We compare the sound from the jet, the responses of the panels, and the resulting acoustic radiation for the static jet and the jet in forward motion. The far-field acoustic radiation, the panel response, and the sound radiated from the panels are all computed and compared to computations for a static jet. The results demonstrate that for a jet in forward motion there is a reduction in sound in downstream directions and an increase in sound in upstream directions, in agreement with experiments. Furthermore, the panel response and radiation for a jet in forward motion exhibit a downstream attenuation compared with the static case.
Development of aerial ultrasonic source using cylinder typed vibrating plate with axial nodal mode
NASA Astrophysics Data System (ADS)
Asami, Takuya; Miura, Hikaru
2018-07-01
We developed a high-power aerial ultrasonic source with a cylinder-type vibrating plate combined with two rigid walls that can be directly connected to a pipe, in order to solve the difficulty of connecting an ultrasonic source to a pipe containing particles while preventing the particles from leaking. The structure of the vibrating plate combined with two rigid walls, which do not vibrate and which yield a high sound pressure in the space inside the vibrating plate, was designed using the finite element method (FEM). We found that the aerial ultrasonic source using the designed vibrating plate vibrates only slightly at the rigid walls, as designed with FEM, and can be connected to other devices. In addition, the obtained sound pressure was around 8.0 kPa (172 dB) at an input electrical power of 7 W.
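The quoted pressure and dB figures can be cross-checked with the standard sound pressure level definition. A one-line conversion, assuming the 8.0 kPa value is an rms pressure referenced to 20 µPa (the usual reference in air; the abstract does not state this explicitly):

```python
import math

def spl_db(p_rms, p_ref=20e-6):
    """Sound pressure level in dB re 20 uPa (the standard reference in air)."""
    return 20.0 * math.log10(p_rms / p_ref)

# 8.0 kPa -> 20*log10(8000 / 2e-5) ~ 172 dB, consistent with the quoted figure.
level = spl_db(8000.0)
```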
Radiated BPF sound measurement of centrifugal compressor
NASA Astrophysics Data System (ADS)
Ohuchida, S.; Tanaka, K.
2013-12-01
A technique to measure radiated BPF (blade passing frequency) sound from an automotive turbocharger compressor impeller is proposed in this paper. When there is high-level background noise in the measurement environment, it is difficult to discriminate the target component from the background. Since the BPF sound measurements in this study were made in a room with such conditions, no discrete BPF peak was initially found in the sound spectrum. Taking its directionality into consideration, a microphone covered with a parabolic cone was selected, and with this technique the discrete BPF peak was clearly observed. Since the level of the measured sound was amplified by the area-integration effect of the cone, a correction was needed to obtain the real level. To this end, sound measurements with and without the parabolic cone were conducted for a fixed source, and their level differences were used as correction factors. Consideration is given to the sound propagation mechanism based on the measured BPF as well as the result of a simple model experiment. The present method is generally applicable to sound measurements conducted with a high level of background noise.
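The correction procedure described amounts to simple level arithmetic: calibrate the cone's amplification on a fixed source, then subtract it from subsequent readings. A minimal sketch, with hypothetical dB values:

```python
def cone_correction(with_cone_db, without_cone_db):
    """Calibration on a fixed source: the cone's area-integration gain in dB."""
    return with_cone_db - without_cone_db

def corrected_level(measured_db, correction_db):
    """Remove the parabolic cone's amplification from a field measurement."""
    return measured_db - correction_db

# Hypothetical calibration: 90 dB with the cone vs 78 dB without -> 12 dB gain,
# so a 95 dB field reading corresponds to a true radiated level of 83 dB.
corr = cone_correction(90.0, 78.0)
true_level = corrected_level(95.0, corr)
```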
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and signals of the virtual sources, the spatial sound at the selected point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field produced by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm and of the virtual source representation is confirmed.
Noise Source Visualization Using a Digital Voice Recorder and Low-Cost Sensors
Cho, Yong Thung
2018-01-01
Accurate sound visualization of noise sources is required for optimal noise control. Typically, noise measurement systems require microphones, an analog-digital converter, cables, a data acquisition system, etc., which may not be affordable for potential users. Also, many such systems are not highly portable and may not be convenient for travel. Handheld personal electronic devices such as smartphones and digital voice recorders with relatively lower costs and higher performance have become widely available recently. Even though such devices are highly portable, directly implementing them for noise measurement may lead to erroneous results since such equipment was originally designed for voice recording. In this study, external microphones were connected to a digital voice recorder to conduct measurements and the input received was processed for noise visualization. In this way, a low cost, compact sound visualization system was designed and introduced to visualize two actual noise sources for verification with different characteristics: an enclosed loud speaker and a small air compressor. Reasonable accuracy of noise visualization for these two sources was shown over a relatively wide frequency range. This very affordable and compact sound visualization system can be used for many actual noise visualization applications in addition to educational purposes. PMID:29614038
A method for evaluating the relation between sound source segregation and masking
Lutfi, Robert A.; Liu, Ching-Ju
2011-01-01
Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979
Jet crackle: skewness transport budget and a mechanistic source model
NASA Astrophysics Data System (ADS)
Buchta, David; Freund, Jonathan
2016-11-01
The sound from high-speed (supersonic) jets, such as on military aircraft, is distinctly different from that of lower-speed jets, such as on commercial airliners. Atop the already loud noise, higher speed adds an intense, fricative, and intermittent character. The observed pressure wave patterns have strong peaks followed by relatively long shallows; notably, their pressure skewness is Sk ≥ 0.4. Direct numerical simulations of free-shear-flow turbulence show that these skewed pressure waves occur immediately adjacent to the turbulence source for M ≥ 2.5. Additionally, the near-field waves are seen to intersect and nonlinearly merge with other waves. Statistical analysis of the terms in a pressure-skewness transport equation shows that, starting just beyond δ99, the nonlinear wave mechanics that add to Sk are balanced by damping molecular effects, consistent with this aspect of the sound arising in the source region. A gas-dynamics description is developed that neglects rotational turbulence dynamics and yet reproduces the key crackle features. At its core, this mechanism shows simply that nonlinear compressive effects lead directly to stronger compressions than expansions and thus Sk > 0.
Numerical Simulation of Noise from Supersonic Jets Passing Through a Rigid Duct
NASA Technical Reports Server (NTRS)
Kandula, Max
2012-01-01
The generation, propagation, and radiation of sound from a perfectly expanded Mach 2.5 cold supersonic jet flowing through an enclosed rigid-walled duct with an upstream J-deflector have been numerically simulated with the aid of the OVERFLOW Navier-Stokes CFD code. A one-equation turbulence model is considered. While the near-field sound sources are computed by the CFD code, the far-field sound is evaluated by a Kirchhoff surface integral formulation. Predictions of the far-field directivity of the OASPL (overall sound pressure level) agree satisfactorily with experimental data previously reported by the author. Calculations also suggest that there is significant entrainment of air into the duct, with the mass flow rate of entrained air being about three times the jet exit mass flow rate.
Sound source localization identification accuracy: Envelope dependencies.
Yost, William A
2017-07-01
Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
NASA Astrophysics Data System (ADS)
Oviatt, Eric; Patsiaouris, Konstantinos; Denardo, Bruce
2009-11-01
A sound source of finite size produces a diverging traveling wave in an unbounded fluid. A rigid body that is small compared to the wavelength experiences an attractive radiation force (toward the source). An attractive force is also exerted on the fluid itself. The effect can be demonstrated with a styrofoam ball suspended near a loudspeaker that is producing sound of high amplitude and low frequency (for example, 100 Hz). The behavior can be understood and roughly calculated as a time-averaged Bernoulli effect. A rigorous scattering calculation yields a radiation force that is within a factor of two of the Bernoulli result. For a spherical wave, the force decreases as the inverse fifth power of the distance from the source. Applications of the phenomenon include ultrasonic filtration of liquids and the growth of supermassive black holes that emit sound waves in a surrounding plasma. An experiment is being conducted in an anechoic chamber with a 1-inch diameter aluminum ball that is suspended from an analytical balance. Directly below the ball is a baffled loudspeaker that exerts an attractive force that is measured by the balance.
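The inverse-fifth-power distance dependence stated in this abstract is easy to illustrate numerically. A minimal sketch (the reference force and radius are made-up numbers, not the experiment's measured values):

```python
def radiation_force(f_ref, r_ref, r):
    """Attractive radiation force on a small rigid sphere in a spherical wave,
    scaling as the inverse fifth power of distance from the source:
    F(r) = F_ref * (r_ref / r)**5."""
    return f_ref * (r_ref / r) ** 5

# Doubling the distance reduces the force by a factor of 2**5 = 32, which is
# why the effect is only demonstrable very close to a loud, low-frequency source.
f_near = radiation_force(32.0, 1.0, 1.0)   # 32 units at the reference radius
f_far = radiation_force(32.0, 1.0, 2.0)    # 1 unit at twice the radius
```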
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2003-01-01
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Diversity of acoustic tracheal system and its role for directional hearing in crickets
2013-01-01
Background Sound localization in small insects can be a challenging task due to physical constraints in deriving sufficiently large interaural intensity differences (IIDs) between both ears. In crickets, sound source localization is achieved by a complex type of pressure difference receiver consisting of four potential sound inputs. Sound acts on the external side of two tympana but additionally reaches the internal tympanal surface via two external sound entrances. Conduction of internal sound is realized by the anatomical arrangement of the connecting trachea. A key structure is a trachea coupling both ears, which is characterized by an enlarged part at its midline (i.e., the acoustic vesicle) accompanied by a thin membrane (septum). This facilitates directional sensitivity despite an unfavorable relationship between the wavelength of sound and body size. Here we studied the morphological differences of the acoustic tracheal system in 40 cricket species (Gryllidae, Mogoplistidae) and in species of outgroup taxa (Gryllotalpidae, Rhaphidophoridae, Gryllacrididae) of the suborder Ensifera, comprising hearing and non-hearing species. Results We found a surprisingly high variation of acoustic tracheal systems, and almost all investigated species using intraspecific acoustic communication were characterized by an acoustic vesicle associated with a medial septum. The relative size of the acoustic vesicle - a structure most crucial for deriving high IIDs - implies an important role for sound localization. Most remarkable in this respect was the size difference of the acoustic vesicle between species; those with a more unfavorable ratio of body size to sound wavelength tend to exhibit a larger acoustic vesicle. On the other hand, secondary loss of acoustic signaling was nearly exclusively associated with the absence of both acoustic vesicle and septum.
Conclusion The high diversity of acoustic tracheal morphology observed between species might reflect different steps in the evolution of the pressure difference receiver; with a precursor structure already present in ancestral non-hearing species. In addition, morphological transitions of the acoustic vesicle suggest a possible adaptive role for the generation of binaural directional cues. PMID:24131512
Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor.
Dillier, Norbert; Lai, Wai Kong
2015-06-11
The Nucleus(®) 5 System Sound Processor (CP810, Cochlear™, Macquarie University, NSW, Australia) contains two omnidirectional microphones. They can be configured as a fixed directional microphone combination (called Zoom) or as an adaptive beamformer (called Beam), which adjusts the directivity continuously to maximally reduce the interfering noise. Initial evaluation studies with the CP810 had compared performance and usability of the new processor in comparison with the Freedom™ Sound Processor (Cochlear™) for speech in quiet and noise for a subset of the processing options. This study compares the two processing options suggested to be used in noisy environments, Zoom and Beam, for various sound field conditions using a standardized speech in noise matrix test (Oldenburg sentences test). Nine German-speaking subjects who previously had been using the Freedom speech processor and subsequently were upgraded to the CP810 device participated in this series of additional evaluation tests. The speech reception threshold (SRT for 50% speech intelligibility in noise) was determined using sentences presented via loudspeaker at 65 dB SPL in front of the listener and noise presented either via the same loudspeaker (S0N0) or at 90 degrees at either the ear with the sound processor (S0NCI+) or the opposite unaided ear (S0NCI-). The fourth noise condition consisted of three uncorrelated noise sources placed at 90, 180 and 270 degrees. The noise level was adjusted through an adaptive procedure to yield a signal to noise ratio where 50% of the words in the sentences were correctly understood. In spatially separated speech and noise conditions both Zoom and Beam could improve the SRT significantly. For single noise sources, either ipsilateral or contralateral to the cochlear implant sound processor, average improvements with Beam of 12.9 and 7.9 dB in SRT were found. 
The average SRT of -8 dB for Beam in the diffuse noise condition (uncorrelated noise from both sides and back) is truly remarkable and comparable to the performance of normal hearing listeners in the same test environment. The static directivity (Zoom) option in the diffuse noise condition still provides a significant benefit of 5.9 dB in comparison with the standard omnidirectional microphone setting. These results indicate that CI recipients may improve their speech recognition in noisy environments significantly using these directional microphone-processing options.
Behavioral response of manatees to variations in environmental sound levels
Miksis-Olds, Jennifer L.; Wagner, Tyler
2011-01-01
Florida manatees (Trichechus manatus latirostris) inhabit coastal regions because they feed on the aquatic vegetation that grows in shallow waters, which are the same areas where human activities are greatest. Noise produced from anthropogenic and natural sources has the potential to affect these animals by eliciting responses ranging from mild behavioral changes to extreme aversion. Sound levels were calculated from recordings made throughout behavioral observation periods. An information theoretic approach was used to investigate the relationship between behavior patterns and sound level. Results indicated that elevated sound levels affect manatee activity and are a function of behavioral state. The proportion of time manatees spent feeding and milling changed in response to sound level. When ambient sound levels were highest, more time was spent in the directed, goal-oriented behavior of feeding, whereas less time was spent engaged in undirected behavior such as milling. This work illustrates how shifts in activity of individual manatees may be useful parameters for identifying impacts of noise on manatees and might inform population level effects.
NASA Astrophysics Data System (ADS)
McInerny, S. A.
1990-10-01
This paper reviews what is known about far-field rocket noise from the controlled studies of the late 1950s and 1960s and from launch data. The peak dimensionless frequency, the dependence of overall sound power on exhaust parameters, and the directivity of the overall sound power of rockets are compared to those of subsonic jets and turbo-jets. The location of the dominant sound source in the rocket exhaust plume and the mean flow velocity in this region are discussed and shown to provide a qualitative explanation for the low peak Strouhal number, f De/Ve, and large angle of maximum directivity. Lastly, two empirical prediction methods are compared with data from launches of a Titan family vehicle (two solid rocket motors of 5.7 × 10^6 N thrust each) and the Saturn V (five liquid oxygen/rocket propellant engines of 6.7 × 10^6 N thrust each). The agreement is favorable. In contrast, these methods appear to overpredict the far-field sound pressure levels generated by the Space Shuttle.
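The peak-frequency argument follows directly from the definition of the Strouhal number St = f De/Ve. A small sketch with illustrative numbers (the Strouhal value, nozzle diameter, and exhaust velocity below are hypothetical, not figures from this paper):

```python
def peak_frequency(strouhal, v_exit, d_exit):
    """Peak frequency implied by the Strouhal number St = f * De / Ve,
    where De is the nozzle exit diameter (m) and Ve the exhaust velocity (m/s)."""
    return strouhal * v_exit / d_exit

# Illustrative only: a low peak Strouhal number of 0.02 with a 3 m nozzle
# and 3000 m/s exhaust puts the spectral peak near 20 Hz, i.e. far lower
# than for a subsonic jet of comparable scale at its higher peak Strouhal number.
f_peak = peak_frequency(0.02, 3000.0, 3.0)
```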
The effect of spatial distribution on the annoyance caused by simultaneous sounds
NASA Astrophysics Data System (ADS)
Vos, Joos; Bronkhorst, Adelbert W.; Fedtke, Thomas
2004-05-01
A considerable part of the population is exposed to simultaneous and/or successive environmental sounds from different sources. In many cases, these sources also differ with respect to their locations. In a laboratory study, it was investigated whether the annoyance caused by the multiple sounds is affected by the spatial distribution of the sources. There were four independent variables: (1) sound category (stationary or moving), (2) sound type (stationary: lawn-mower, leaf-blower, and chain saw; moving: road traffic, railway, and motorbike), (3) spatial location (left, right, and combinations), and (4) A-weighted sound exposure level (ASEL of single sources equal to 50, 60, or 70 dB). In addition to the individual sounds in isolation, various combinations of two or three different sources within each sound category and sound level were presented for rating. The annoyance was mainly determined by sound level and sound source type. In most cases there were neither significant main effects of spatial distribution nor significant interaction effects between spatial distribution and the other variables. It was concluded that for rating the spatially distributed sounds investigated, the noise dose can simply be determined by a summation of the levels for the left and right channels. [Work supported by CEU.]
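The concluding rule, summing the levels of the channels, is the standard energetic (power) summation of decibel levels. A minimal sketch:

```python
import math

def combined_level(levels_db):
    """Energetic sum of simultaneous sound levels: convert each dB level to
    power, add, and convert back to dB."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Two equal 60 dB sources combine to ~63 dB (a 3 dB increase), not 120 dB;
# a 70 dB source dominates a 50 dB one almost completely (~70.04 dB total).
two_equal = combined_level([60.0, 60.0])
unequal = combined_level([70.0, 50.0])
```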
Detection of a Novel Mechanism of Acousto-Optic Modulation of Incoherent Light
Jarrett, Christopher W.; Caskey, Charles F.; Gore, John C.
2014-01-01
A novel form of acoustic modulation of light from an incoherent source has been detected in water as well as in turbid media. We demonstrate that patterns of modulated light intensity appear to propagate as the optical shadow of the density variations caused by ultrasound within an illuminated ultrasonic focal zone. This pattern differs from previous reports of acousto-optical interactions that produce diffraction effects that rely on phase shifts and changes in light directions caused by the acoustic modulation. Moreover, previous studies of acousto-optic interactions have mainly reported the effects of sound on coherent light sources via photon tagging, and/or the production of diffraction phenomena from phase effects that give rise to discrete sidebands. We aimed to assess whether the effects of ultrasound modulation of the intensity of light from an incoherent light source could be detected directly, and how the acoustically modulated (AOM) light signal depended on experimental parameters. Our observations suggest that ultrasound at moderate intensities can induce sufficiently large density variations within a uniform medium to cause measurable modulation of the intensity of an incoherent light source by absorption. Light passing through a region of high intensity ultrasound then produces a pattern that is the projection of the density variations within the region of their interaction. The patterns exhibit distinct maxima and minima that are observed at locations much different from those predicted by Raman-Nath, Bragg, or other diffraction theory. The observed patterns scaled appropriately with the geometrical magnification and sound wavelength. We conclude that these observed patterns are simple projections of the ultrasound induced density changes which cause spatial and temporal variations of the optical absorption within the illuminated sound field. 
These effects potentially provide a novel method for visualizing sound fields and may assist the interpretation of other hybrid imaging methods. PMID:25105880
NASA Technical Reports Server (NTRS)
El-Sum, H. M. A.; Mawardi, O. K.
1973-01-01
Techniques for studying aerodynamic noise-generating mechanisms without disturbing the flow, both in a free field and in the reverberant environment of the ARC wind tunnel, were investigated, along with the design and testing of an acoustic antenna with electronic steering control. Topics covered include the acoustic characteristics of a turbojet as a noise source, the detection of direct sound from a source in a reverberant background, optical diagnostic methods, and the design characteristics of a high-directivity acoustic antenna. Recommendations for further studies are included.
Spherical loudspeaker array for local active control of sound.
Rafaely, Boaz
2009-05-01
Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
Localizing the sources of two independent noises: Role of time varying amplitude differences
Yost, William A.; Brown, Christopher A.
2013-01-01
Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
Sound Fields in Complex Listening Environments
2011-01-01
The conditions of sound fields used in research, especially in the testing and fitting of hearing aids, are usually simplified or reduced to fundamental physical fields, such as the free or the diffuse sound field. The concepts of such ideal conditions are easily introduced in theoretical and experimental investigations and in models for directional microphones, for example. When it comes to real-world application of hearing aids, however, the field conditions are more complex with regard to specific stationary and transient properties in room transfer functions and the corresponding impulse responses and binaural parameters. Sound fields can be categorized into outdoor (rural and urban) and indoor environments. Furthermore, sound fields in closed spaces of various sizes and shapes, and in situations of transport in vehicles, trains, and aircraft, are compared with regard to the binaural signals. In laboratory tests, sources of uncertainty are individual differences in binaural cues and insufficiently controlled sound field conditions. Furthermore, laboratory sound fields do not cover the variety of complex sound environments. Spatial audio formats such as higher-order ambisonics are candidates for sound field references not only in room acoustics and audio engineering but also in audiology. PMID:21676999
A generalized sound extrapolation method for turbulent flows
NASA Astrophysics Data System (ADS)
Zhong, Siyang; Zhang, Xin
2018-02-01
Sound extrapolation methods are often used to compute acoustic far-field directivities from near-field flow data in aeroacoustics applications. The results may be erroneous if the volume integrals are neglected (to save computational cost) while non-acoustic fluctuations are collected on the integration surfaces. In this work, we develop a new sound extrapolation method based on an acoustic analogy using Taylor's hypothesis (Taylor 1938 Proc. R. Soc. Lond. A 164, 476-490. (doi:10.1098/rspa.1938.0032)). A convection operator is used to filter out the acoustically inefficient components in the turbulent flows, and an acoustics-dominant indirect variable D_c p′ is solved. The sound pressure p′ at the far field is computed from D_c p′ based on the asymptotic properties of the Green's function. Validation results for benchmark problems with well-defined sources match well with the exact solutions. For aeroacoustic applications, the sound predictions for aerofoil-gust interaction are close to those of an earlier method specially developed to remove the effect of vortical fluctuations (Zhong & Zhang 2017 J. Fluid Mech. 820, 424-450. (doi:10.1017/jfm.2017.219)); for the case of vortex shedding noise from a cylinder, the off-body predictions by the proposed method match well with the on-body Ffowcs Williams and Hawkings result; and different integration surfaces yield close predictions (of both spectra and far-field directivities) for a co-flowing jet case using an established direct numerical simulation database. The results suggest that the method is a potential candidate for sound projection in aeroacoustics applications.
Large Eddy Simulation of Sound Generation by Turbulent Reacting and Nonreacting Shear Flows
NASA Astrophysics Data System (ADS)
Najafi-Yazdi, Alireza
The objective of the present study was to investigate the mechanisms of sound generation by subsonic jets. Large eddy simulations were performed along with bandpass filtering of the flow and sound in order to gain further insight into the role of coherent structures in subsonic jet noise generation. A sixth-order compact scheme was used for spatial discretization of the fully compressible Navier-Stokes equations. Time integration was performed with the standard fourth-order explicit Runge-Kutta scheme. An implicit low-dispersion, low-dissipation Runge-Kutta (ILDDRK) method was developed and implemented for simulations involving sources of stiffness, such as flows near solid boundaries or combustion. A surface integral acoustic analogy formulation, called Formulation 1C, was developed for farfield sound pressure calculations. Formulation 1C was derived from the convective wave equation in order to take into account the presence of a mean flow, and was designed to be easy to implement as a numerical post-processing tool for CFD codes. Sound radiation from an unheated, Mach 0.9 jet at Reynolds number 400,000 was considered. The effect of mesh size on the accuracy of the nearfield flow and farfield sound results was studied. It was observed that insufficient grid resolution in the shear layer results in unphysical laminar vortex pairing and increased sound pressure levels in the farfield. Careful examination of the bandpass-filtered pressure field suggested that there are two mechanisms of sound radiation in unheated subsonic jets that can occur at all scales of turbulence. The first mechanism is the stretching and distortion of coherent vortical structures, especially close to the termination of the potential core. As eddies are bent or stretched, a portion of their kinetic energy is radiated. This mechanism is quadrupolar in nature and is responsible for strong sound radiation at aft angles.
The second sound generation mechanism appears to be associated with the transverse vibration of the shear-layer interface with the ambient quiescent flow, and has dipolar characteristics. This mechanism is believed to be responsible for sound radiation along the sideline directions. Jet noise suppression through the use of microjets was studied. The microjet injection induced secondary instabilities in the shear layer, which triggered the transition to turbulence and suppressed laminar vortex pairing. This in turn resulted in a reduction of OASPL at almost all observer locations. In all cases, bandpass filtering of the nearfield flow and the associated sound provides revealing details of the sound radiation process. The results suggest that circumferential modes are significant and need to be included in future wavepacket models for jet noise prediction. Numerical simulations of sound radiation from nonpremixed flames were also performed. The simulations featured the solution of the fully compressible Navier-Stokes equations; therefore, sound generation and radiation were directly captured in the simulations. A thickened flamelet model was proposed for nonpremixed flames. The model yields artificially thickened flames that can be better resolved on the computational grid, while retaining the physically correct values of the total heat released into the flow. Combustion noise has monopolar characteristics at low frequencies; at high frequencies, the sound field is no longer omni-directional. Major sources of sound appear to be located in the jet shear layer within one potential core length of the jet nozzle.
NASA Technical Reports Server (NTRS)
Roskam, J.; Muirhead, V. U.; Smith, H. W.; Peschier, T. D.
1977-01-01
The construction, calibration, and properties of a facility for measuring sound transmission through aircraft type panels are described along with the theoretical and empirical methods used. Topics discussed include typical noise source, sound transmission path, and acoustic cabin properties and their effect on interior noise. Experimental results show an average sound transmission loss in the mass controlled frequency region comparable to theoretical predictions. The results also verify that transmission losses in the stiffness controlled region directly depend on the fundamental frequency of the panel. Experimental and theoretical results indicate that increases in this frequency, and consequently in transmission loss, can be achieved by applying pressure differentials across the specimen.
Linear multivariate evaluation models for spatial perception of soundscape.
Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu
2015-11-01
Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. Spatial perception is significant to soundscape, yet previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments on subjective spatial perception (SSP), an analysis among semantic parameters, the inter-aural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamic (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the more noisiness listeners perceived, the worse their spatial awareness, whereas closer and more directional sound-source image variations, greater dynamics, and larger numbers of sound sources in the soundscape yielded better spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamic, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also seems to slightly increase listener preference. Finally, setting SSPs as functions of the semantic parameters and Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
Sensor system for heart sound biomonitor
NASA Astrophysics Data System (ADS)
Maple, Jarrad L.; Hall, Leonard T.; Agzarian, John; Abbott, Derek
1999-09-01
Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually, rather than heard through a conventional stethoscope. A system whereby a digital stethoscope interfaces directly to a PC is described, along with the signal processing algorithms adopted. The sensor is based on a noise-cancellation microphone with a 450 Hz bandwidth and is sampled at 2250 samples/sec with 12-bit resolution. Further to this, we discuss for comparison a piezo-based sensor with a 1 kHz bandwidth. A major problem is that the recording of the heart sound into these devices is subject to unwanted background noise, which can override the heart sound and result in a poor visual representation. This noise originates from various sources such as skin contact with the stethoscope diaphragm, lung sounds, and other surrounding sounds such as speech. Furthermore, we demonstrate a solution using 'wavelet denoising'. The wavelet transform is used because of the similarity between the shape of wavelets and the time-domain shape of a heartbeat sound. Thus coding of the waveform into the wavelet domain is achieved with relatively few wavelet coefficients, in contrast to the many Fourier components that would result from conventional decomposition. We show that the background noise can be dramatically reduced by a thresholding operation in the wavelet domain. The principle is that the background noise codes into many small broadband wavelet coefficients that can be removed without significant degradation of the signal of interest.
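The wavelet-thresholding principle described in this abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: a single-level Haar transform stands in for whatever wavelet family and decomposition depth they actually used, and the threshold value is arbitrary.

```python
import math

def haar_decompose(signal):
    """One level of the Haar wavelet transform: returns (approximation, detail)."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one level of the Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def denoise(signal, threshold):
    """Zero the small (noise-like) detail coefficients, then invert the transform."""
    approx, detail = haar_decompose(signal)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_reconstruct(approx, detail)
```

A practical heart-sound denoiser would use several decomposition levels and a data-driven threshold, but the mechanism is the same: broadband noise spreads into many small detail coefficients that can be zeroed with little damage to the heartbeat waveform.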
Issues in Humanoid Audition and Sound Source Localization by Active Audition
NASA Astrophysics Data System (ADS)
Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki
In this paper, we present an active audition system implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition as reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1982-01-01
A transition from the antenna noise temperature formulation for extended noise sources in the far field, or Fraunhofer region, of an antenna to one for the intermediate near field, or Fresnel region, is discussed. The effort is directed toward microwave antenna simulations and high-speed digital computer analysis of radiometric sounding units used to obtain water vapor and temperature profiles of the atmosphere. Fresnel-region fields are compared at various distances from the aperture. The antenna noise temperature contribution of an annular noise source is computed in the Fresnel region (D squared/16 lambda) for a 13.2 cm diameter offset-paraboloid aperture at 60 GHz. The time-average Poynting vector is used to effect the computation.
The Relative Contribution of Interaural Time and Magnitude Cues to Dynamic Sound Localization
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)
1995-01-01
This paper presents preliminary data from a study examining the relative contribution of interaural time differences (ITDs) and interaural level differences (ILDs) to the localization of virtual sound sources both with and without head motion. The listeners' task was to estimate the apparent direction and distance of virtual sources (broadband noise) presented over headphones. Stimuli were synthesized from minimum phase representations of nonindividualized directional transfer functions; binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; the position of the listener's head was tracked and the stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. ILDs and ITDs were either correctly or incorrectly correlated with head motion: (1) both ILDs and ITDs correctly correlated, (2) ILDs correct, ITD fixed at 0 deg azimuth and 0 deg elevation, (3) ITDs correct, ILDs fixed at 0 deg, 0 deg. Similar conditions were run for static conditions except that none of the cues changed with head motion. The data indicated that, compared to static conditions, head movements helped listeners to resolve confusions primarily when ILDs were correctly correlated, although a smaller effect was also seen for correct ITDs. Together with the results for static conditions, the data suggest that localization tends to be dominated by the cue that is most reliable or consistent, when reliability is defined by consistency over time as well as across frequency bands.
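The ITD cue manipulated in the study above is, in analysis, commonly estimated as the lag that maximizes the interaural cross-correlation. The following is an illustrative sketch only (a brute-force correlator, not the synthesis or analysis method used in the study):

```python
def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds as the lag
    (in samples) maximizing the cross-correlation of the two ear signals.
    Positive result: the right-ear signal lags the left-ear signal."""
    n = len(left)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-(n - 1), n):
        # Correlate left[i] against right[i + lag] over the overlap region.
        corr = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                corr += left[i] * right[j]
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag / fs
```

In practice one would use an FFT-based correlation over short frequency bands, but the lag-of-maximum principle is the same.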
Sound source localization method in an environment with flow based on Amiet-IMACS
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin
2017-05-01
A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, which is a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
Brinkløv, Signe; Jakobsen, Lasse; Ratcliffe, John M; Kalko, Elisabeth K V; Surlykke, Annemarie
2011-01-01
The directionality of bat echolocation calls defines the width of bats' sonar "view," while call intensity directly influences detection range since adequate sound energy must impinge upon objects to return audible echoes. Both are thus crucial parameters for understanding biosonar signal design. Phyllostomid bats have been classified as low intensity or "whispering bats," but recent data indicate that this designation may be inaccurate. Echolocation beam directionality in phyllostomids has only been measured through electrode brain-stimulation of restrained bats, presumably excluding active beam control via the noseleaf. Here, a 12-microphone array was used to measure echolocation call intensity and beam directionality in the frugivorous phyllostomid, Carollia perspicillata, echolocating in flight. The results showed a considerably narrower beam shape (half-amplitude beam angles of approximately 16° horizontally and 14° vertically) and louder echolocation calls [source levels averaging 99 dB sound pressure level (SPL) root mean square] for C. perspicillata than was found for this species when stationary. This suggests that naturally behaving phyllostomids shape their sound beam to achieve a longer and narrower sonar range than previously thought. C. perspicillata orient and forage in the forest interior and the narrow beam might be adaptive in clutter, by reducing the number and intensity of off-axis echoes.
NASA Astrophysics Data System (ADS)
Zhang, Qi; Bodony, Daniel
2014-11-01
Commercial jet aircraft generate undesirable noise from several sources, with the engines being the most dominant sources at take-off and major contributors at all other stages of flight. Acoustic liners, which are perforated sheets of metal or composite mounted within the engine, have been an effective means of reducing internal engine noise from the fan, compressor, combustor, and turbine but their performance suffers when subjected to a turbulent grazing flow or to high-amplitude incident sound due to poorly understood interactions between the liner orifices and the exterior flow. Through the use of direct numerical simulations, the flow-orifice interaction is examined numerically, quantified, and modeled over a range of conditions that includes current and envisioned uses of acoustic liners and with detail that exceeds experimental capabilities. A new time-domain model of acoustic liners is developed that extends currently-available reduced-order models to more complex flow conditions but is still efficient for use at the design stage.
Bakker, R H; Pedersen, E; van den Berg, G P; Stewart, R E; Lok, W; Bouma, J
2012-05-15
The present government in the Netherlands intends to realize a substantial growth of wind energy before 2020, both onshore and offshore. Wind turbines, when positioned in the neighborhood of residents, may cause visual annoyance and noise annoyance. Studies on other environmental sound sources, such as railway, road traffic, industry, and aircraft noise, show that (long-term) exposure to sound can have negative effects other than annoyance from noise. This study aims to elucidate the relation between exposure to the sound of wind turbines and annoyance, self-reported sleep disturbance, and psychological distress of people who live in their vicinity. Data were gathered by a questionnaire sent by mail to a representative sample of residents of the Netherlands living in the vicinity of wind turbines. A dose-response relationship was found between immission levels of wind turbine sound and self-reported noise annoyance. Sound exposure was also related to sleep disturbance and psychological distress among those who reported that they could hear the sound; not directly, however, but with noise annoyance acting as a mediator. Respondents living in areas with other background sounds were less affected than respondents in quiet areas. People living in the vicinity of wind turbines are at risk of being annoyed by the noise, an adverse effect in itself. Noise annoyance in turn could lead to sleep disturbance and psychological distress. No direct effects of wind turbine noise on sleep disturbance or psychological distress have been demonstrated, which means that residents who do not hear the sound, or do not feel disturbed, are not adversely affected. Copyright © 2012 Elsevier B.V. All rights reserved.
Marking emergency exits and evacuation routes with sound beacons utilizing the precedence effect
NASA Astrophysics Data System (ADS)
van Wijngaarden, Sander J.; Bronkhorst, Adelbert W.; Boer, Louis C.
2004-05-01
Sound beacons can be extremely useful during emergency evacuations, especially when vision is obscured by smoke. When exits are marked with suitable sound sources, people can find these using only their capacity for directional hearing. Unfortunately, unless very explicit instructions were given, sound beacons currently commercially available (based on modulated noise) led to disappointing results during an evacuation experiment in a traffic tunnel. Only 19% out of 65 subjects were able to find an exit by ear. A signal designed to be more self-explanatory and less hostile-sounding (alternating chime signal and spoken message ``exit here'') increased the success rate to 86%. In a more complex environment-a mock-up of a ship's interior-routes to the exit were marked using multiple beacons. By applying carefully designed time delays between successive beacons, the direction of the route was marked, utilizing the precedence effect. Out of 34 subjects, 71% correctly followed the evacuation route by ear (compared to 24% for a noise signal as used in commercially available beacons). Even when subjects were forced to make a worst-case left-right decision at a T-junction, between two beacons differing only in arrival of the first wave front, 77% made the right decision.
Decadal trends in Indian Ocean ambient sound.
Miksis-Olds, Jennifer L; Bradley, David L; Niu, Xiaoyue Maggie
2013-11-01
The increase of ocean noise documented in the North Pacific has sparked concern on whether the observed increases are a global or regional phenomenon. This work provides evidence of low frequency sound increases in the Indian Ocean. A decade (2002-2012) of recordings made off the island of Diego Garcia, UK in the Indian Ocean was parsed into time series according to frequency band and sound level. Quarterly sound level comparisons between the first and last years were also performed. The combination of time series and temporal comparison analyses over multiple measurement parameters produced results beyond those obtainable from a single parameter analysis. The ocean sound floor has increased over the past decade in the Indian Ocean. Increases were most prominent in recordings made south of Diego Garcia in the 85-105 Hz band. The highest sound level trends differed between the two sides of the island; the highest sound levels decreased in the north and increased in the south. Rate, direction, and magnitude of changes among the multiple parameters supported interpretation of source functions driving the trends. The observed sound floor increases are consistent with concurrent increases in shipping, wind speed, wave height, and blue whale abundance in the Indian Ocean.
Horizontal sound localization in cochlear implant users with a contralateral hearing aid.
Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A
2016-06-01
Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
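The ILD cue this study argues bimodal listeners rely on is simply the level ratio between the two ear signals. A hedged sketch of how a broadband ILD could be computed (illustrative only; not the study's analysis pipeline, which works per frequency band on aided signals):

```python
import math

def ild_db(left, right):
    """Broadband interaural level difference in dB.
    Positive values mean the left-ear signal is more intense."""
    def rms(samples):
        return math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms(left) / rms(right))
```

For example, a left-ear signal with twice the amplitude of the right-ear signal yields an ILD of about +6 dB.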
Ear Deformations Give Bats a Physical Mechanism for Fast Adaptation of Ultrasonic Beam Patterns
NASA Astrophysics Data System (ADS)
Gao, Li; Balakrishnan, Sreenath; He, Weikai; Yan, Zhen; Müller, Rolf
2011-11-01
A large number of mammals, including humans, have intricate outer ear shapes that diffract incoming sound in a direction- and frequency-specific manner. Through this physical process, the outer ear shapes encode sound-source information into the sensory signals from each ear. Our results show that horseshoe bats could dynamically control these diffraction processes through fast nonrigid ear deformations. The bats’ ear shapes can alter between extreme configurations in about 100 ms and thereby change their acoustic properties in ways that would suit different acoustic sensing tasks.
Active room compensation for sound reinforcement using sound field separation techniques.
Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena
2018-03-01
This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
Application of acoustic radiosity methods to noise propagation within buildings
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter
2005-09-01
The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.
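The radiosity balance can be sketched as a fixed-point iteration: each patch's outgoing energy is its own emission plus reflected energy gathered from the other patches. The form factors and reflectances below are made-up toy values, not derived from any room geometry:

```python
import numpy as np

# Schematic radiosity iteration: B = E + R @ (F @ B), where E is emitted
# energy, F holds patch-to-patch form factors, and R is a diagonal matrix
# of reflection coefficients.  The iteration converges because the
# spectral radius of R @ F is below 1 (row sums of F are <= 1 and all
# reflectances are < 1).
F = np.array([[0.0, 0.6, 0.4],
              [0.6, 0.0, 0.4],
              [0.3, 0.3, 0.4]])      # toy form factors
R = np.diag([0.8, 0.5, 0.7])        # toy reflection coefficients
E = np.array([1.0, 0.0, 0.0])       # only patch 0 emits

B = E.copy()
for _ in range(200):                # fixed-point (gathering) iteration
    B = E + R @ (F @ B)

B_direct = np.linalg.solve(np.eye(3) - R @ F, E)  # closed-form check
```

For sound transmission problems, the emitting patch would be the common wall driven by the source room.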
Ejectable underwater sound source recovery assembly
NASA Technical Reports Server (NTRS)
Irick, S. C. (Inventor)
1974-01-01
An underwater sound source is described that may be ejectably mounted on any mobile device that travels over water, to facilitate the location and recovery of the device when submerged. A length of flexible line maintains a connection between the mobile device and the sound source. During recovery, the sound source is used to locate the submerged device. The assembly is considered to be particularly useful in the recovery of spent rocket motors that bury in the ocean floor upon impact.

Bjørgesaeter, Anders; Ugland, Karl Inne; Bjørge, Arne
2004-10-01
The male harbor seal (Phoca vitulina) produces broadband nonharmonic vocalizations underwater during the breeding season. In total, 120 vocalizations from six colonies were analyzed to describe the acoustic structure and to assess the presence of geographic variation. The complex harbor seal vocalizations may be described by how the frequency bandwidth varies over time. An algorithm that identifies the boundaries between noise and signal from digital spectrograms was developed in order to extract a frequency bandwidth contour. The contours were used as inputs for multivariate analysis. The vocalizations' sound types (e.g., pulsed sound, whistle, and broadband nonharmonic sound) were determined by comparing the vocalizations' spectrographic representations with sound waves produced by known sound sources. Comparison between colonies revealed differences in the frequency contours, as well as some geographical variation in use of sound types. The vocal differences may reflect a limited exchange of individuals between the six colonies due to long distances and strong site fidelity. Geographically different vocal repertoires have potential for identifying discrete breeding colonies of harbor seals, but more information is needed on the nature and extent of early movements of young, the degree of learning, and the stability of the vocal repertoire. A characteristic feature of many vocalizations in this study was the presence of tonal-like introductory phrases that fit into the categories pulsed sound and whistles. The functions of these phrases are unknown but may be important in distance perception and localization of the sound source. The potential behavioral consequences of the observed variability may be indicative of adaptations to different environmental properties influencing the determination of distance and direction, and possibly of different male mating tactics.
Landrau-Giovannetti, Nelmarie; Mignucci-Giannoni, Antonio A; Reidenberg, Joy S
2014-10-01
West Indian (Trichechus manatus) and Amazonian (T. inunguis) manatees are vocal mammals, with most sounds produced for communication between mothers and calves. While their hearing and vocalizations have been well studied, the actual mechanism of sound production is unknown. Acoustical recordings and anatomical examination were used to determine the source of sound generation. Recordings were performed on live captive manatees from Puerto Rico, Cuba and Colombia (T. manatus) and from Peru (T. inunguis) to determine focal points of sound production. The manatees were recorded using two directional hydrophones placed on the throat and nasal region and an Edirol-R44 digital recorder. The average sound intensity level was analyzed to evaluate the sound source with a T test: paired two sample for means. Anatomical examinations were conducted on six T. manatus carcasses from Florida and Puerto Rico. During necropsies, the larynx, trachea, and nasal areas were dissected, with particular focus on identifying musculature and soft tissues capable of vibrating or constricting the airway. From the recordings we found that the acoustical intensity was significant (P < 0.0001) for both the individuals and the pooled manatees in the ventral throat region compared to the nasal region. From the dissection we found two raised areas of tissue in the lateral walls of the manatee's laryngeal lumen that are consistent with mammalian vocal folds. They oppose each other and may be able to regulate airflow between them when they are adducted or abducted by muscular control of arytenoid cartilages. Acoustic and anatomical evidence taken together suggest vocal folds as the mechanism for sound production in manatees. © 2014 Wiley Periodicals, Inc.
Huber, Rainer; Meis, Markus; Klink, Karin; Bartsch, Christian; Bitzer, Joerg
2014-01-01
Within the Lower Saxony Research Network Design of Environments for Ageing (GAL), a personal activity and household assistant (PAHA), an ambient reminder system, has been developed. One of its central output modalities for interacting with the user is sound. The study presented here evaluated three different system technologies for sound reproduction using up to five loudspeakers, including the "phantom source" concept. Moreover, a technology for hearing loss compensation for the mostly older users of the PAHA was implemented and evaluated. Evaluation experiments were carried out with 21 normal-hearing and hearing-impaired test subjects. The results show that in direct comparison of the sound presentation concepts, presentation by the single TV speaker was most preferred, whereas the phantom source concept received the highest acceptance ratings as far as the general concept is concerned. The localization accuracy of the phantom source concept was good as long as the exact listening position was known to the algorithm and speech stimuli were used. Most subjects preferred the original signals over the pre-processed, dynamically compressed signals, although the processed speech was often described as being clearer.
Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).
Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M
2013-07-01
The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide baseline five hydrophone array and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels compared to other group compositions.
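The back-calculation of a source level from a received level is, at its simplest, SL = RL + TL. A minimal sketch assuming idealized spherical spreading (the study above instead measured TL for the site, which is what makes its estimates more accurate):

```python
import math

# SL = RL + TL, with TL = 20*log10(r / 1 m) for spherical spreading.
# An optional linear absorption term (dB/km) can be added for long ranges.
def source_level(received_db, range_m, alpha_db_per_km=0.0):
    tl = 20.0 * math.log10(range_m) + alpha_db_per_km * range_m / 1000.0
    return received_db + tl

# A sound received at 120 dB re 1 uPa from a source 1 km away:
sl = source_level(120.0, 1000.0)   # 120 + 60 = 180 dB re 1 uPa @ 1 m
```

Numbers are illustrative; real transmission loss in shallow water deviates substantially from spherical spreading, which is why estimated (rather than measured) TL limits the accuracy of many published source levels.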
Acoustic Modeling for Aqua Ventus I off Monhegan Island, ME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiting, Jonathan M.; Hanna, Luke A.; DeChello, Nicole L.
2013-10-31
The DeepCwind consortium, led by the University of Maine, was awarded funding under the US Department of Energy’s Offshore Wind Advanced Technology Demonstration Program to develop two floating offshore wind turbines in the Gulf of Maine equipped with Goldwind 6 MW direct-drive turbines, as the Aqua Ventus I project. The Goldwind turbines have a hub height of 100 m. The turbines will be deployed in Maine State waters, approximately 2.9 miles off Monhegan Island; Monhegan Island is located roughly 10 miles off the coast of Maine. In order to site and permit the offshore turbines, the acoustic output must be evaluated to ensure that the sound will not disturb residents on Monhegan Island, nor introduce sound levels into the nearby ocean sufficient to disturb marine mammals. This initial assessment of the acoustic output focuses on the sound of the turbines in air by modeling the assumed sound source level, applying a sound propagation model, and taking into account the distance from shore.
Shao, Wei; Mechefske, Chris K
2005-04-01
This paper describes an analytical model of finite cylindrical ducts with infinite flanges. This model is used to investigate the sound radiation characteristics of the gradient coil system of a magnetic resonance imaging (MRI) scanner. The sound field in the duct satisfies the boundary conditions both at the wall and at the open ends. The vibrating cylindrical wall of the duct is assumed to be the only sound source. Different acoustic conditions for the wall (rigid and absorptive) are used in the simulations. The wave reflection phenomenon at the open ends of the finite duct is described by a general radiation impedance. The analytical model is validated by comparison with its counterpart in a commercial code based on the boundary element method (BEM). The analytical model shows significant advantages over the BEM model, offering better numerical efficiency and a direct relation between the design parameters and the sound field inside the duct.
THE SYMBOLS OF CREATIVE ENERGY IN THE LITERATURE ON MYSTICISM AND ON ALCHEMY
Mahdihassan, S.
1989-01-01
Alchemy as art tries to imitate creation, such as spontaneous generation. The magic wands of creation, of Chinese origin, would be a compass and a triangular carpenter's square. Creation is represented by the dual-natured soul, comprising the spirit (Ruh) and the soul (Nafs). The ultimate source is creative energy, which emanates from the Divine word of command. Creative energy, in its non-manifest form, would be ultrasonic energy, which can be represented by a humming sound. This would be symbolized by the humming sound of bees and, in Fig. 3, by the fiddle, as direct producers of a humming sound. PMID:22557649
Malinina, E S; Andreeva, I G
2013-01-01
The perceptual peculiarities of sound source withdrawing and approaching and their influence on auditory aftereffects were studied in the free field. The radial movement of the auditory adapting stimuli was imitated by two methods: (1) by oppositely directed simultaneous amplitude change of the wideband signals at two loudspeakers placed at 1.1 and 4.5 m from a listener; (2) by an increase or a decrease of the wideband noise amplitude of the impulses at one of the loudspeakers--whether close or distant. The radial auditory movement of test stimuli was imitated by using the first method of imitation of adapting stimuli movement. Nine listeners estimated the direction of test stimuli movement without adaptation (control) and after adaptation. Adapting stimuli were stationary, slowly moving with sound level variation of 2 dB and rapidly moving with variation of 12 dB. The percentage of "withdrawing" responses was used for psychometric curve construction. Three perceptual phenomena were found. The growing louder effect was shown in control series without adaptation. The effect was characterized by a decrease of the number of "withdrawing" responses and overestimation of test stimuli as approaching. The position-dependent aftereffects were noticed after adaptation to the stationary and slowly moving sound stimuli. The aftereffect was manifested as an increase of the number of "withdrawing" responses and overestimation of test stimuli as withdrawal. The effect was reduced with increase of the distance between the listener and the loudspeaker. Movement aftereffects were revealed after adaptation to the rapidly moving stimuli. Aftereffects were direction-dependent: the number of "withdrawal" responses after adaptation to approach increased, whereas after adaptation to withdrawal it decreased relative to control. The movement aftereffects were more pronounced at imitation of movement of adapting stimuli by the first method. 
In this case the listener could determine the starting and finishing points of the movement trajectory. Interaction of the movement aftereffects with the growing-louder effect was absent in all modes of presentation of the adapting stimuli. With increasing distance to the source of the adapting stimuli, the approach aftereffect tended to decrease and the withdrawal aftereffect to increase.
Dynamic Spatial Hearing by Human and Robot Listeners
NASA Astrophysics Data System (ADS)
Zhong, Xuan
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an extended Kalman filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
Wave field synthesis of moving virtual sound sources with complex radiation properties.
Ahrens, Jens; Spors, Sascha
2011-11-01
An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.
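One consequence of uniform source motion that any driving-signal derivation must capture is the Doppler shift of the received wavefronts. A minimal reminder sketch with illustrative numbers (not the paper's driving-function derivation):

```python
import math

# Received frequency from a uniformly moving source in still air:
# f_obs = f_src / (1 - (v/c) * cos(theta)), where theta is the angle
# between the source velocity and the source-to-listener direction.
def doppler_freq(f_src, v, theta_deg, c=343.0):
    return f_src / (1.0 - (v / c) * math.cos(math.radians(theta_deg)))

f_approach = doppler_freq(1000.0, 34.3, 0.0)    # heading toward the listener
f_recede = doppler_freq(1000.0, 34.3, 180.0)    # heading away
```

At v/c = 0.1 the shift is roughly ±10%, which is why moving virtual sources cannot simply reuse the stationary-source driving functions.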
Refraction of Sound Emitted Near Solid Boundaries from a Sheared Jet
NASA Technical Reports Server (NTRS)
Dill, Loren H.; Oyedrian, Ayo A.; Krejsa, Eugene A.
1998-01-01
A mathematical model is developed to describe the sound emitted from an arbitrary point within a turbulent flow near solid boundaries. A unidirectional, transversely sheared mean flow is assumed, and the cross-section of the cold jet is of arbitrary shape. The analysis begins with Lilley's formulation of aerodynamic noise and, depending upon the specific model of turbulence used, leads via Fourier analysis to an expression for the spectral density of the intensity of the far-field sound emitted from a unit volume of turbulence. The expressions require solution of a reduced Green's function of Lilley's equation as well as certain moving axis velocity correlations of the turbulence. Integration over the entire flow field is required in order to predict the sound emitted by the complete flow. Calculations are presented for sound emitted from a plug-flow jet exiting a semi-infinite flat duct. Polar plots of the far-field directivity show the dependence upon frequency and source position within the duct. Certain model problems are suggested to investigate the effect of duct termination, duct geometry, and mean flow shear upon the far-field sound.
Project Report of Virtual Experiments in Marine Bioacoustics: Model Validation
2010-08-01
are hypothesized to be the biosonar sound source in the bottlenose dolphin (Cranford, 2000; Cranford et al., 1996). The phonic lips consist of...generation apparatus can produce small changes or adjustments in bottlenose dolphin biosonar beam direction. There are likely more discoveries...Beam Direction Biosonar beam formation in dolphins has been the subject of considerable research (Au, 1980; Au, 1993; Au et al., 1978; Au et al
Monaural Sound Localization Based on Structure-Induced Acoustic Resonance
Kim, Keonwook; Kim, Youngwoong
2015-01-01
A physical structure such as a cylindrical pipe controls the propagated sound spectrum in a predictable way that can be used to localize the sound source. This paper designs a monaural sound localization system based on multiple pyramidal horns around a single microphone. The acoustic resonance within the horn provides a periodicity in the spectral domain known as the fundamental frequency which is inversely proportional to the radial horn length. Once the system accurately estimates the fundamental frequency, the horn length and corresponding angle can be derived by the relationship. The modified Cepstrum algorithm is employed to evaluate the fundamental frequency. In an anechoic chamber, localization experiments over azimuthal configuration show that up to 61% of the proper signal is recognized correctly with 30% misfire. With a speculated detection threshold, the system estimates direction 52% in positive-to-positive and 34% in negative-to-positive decision rate, on average. PMID:25668214
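The cepstrum step can be illustrated with a synthetic spectrum whose ridges repeat every f0 Hz: the real cepstrum of the log-magnitude spectrum peaks at the quefrency 1/f0. All signal parameters below are invented for illustration; this is not the paper's modified-Cepstrum implementation:

```python
import numpy as np

fs = 16000
f0 = 400.0                                   # spectral periodicity to recover
t = np.arange(fs) / fs
# A harmonic signal gives a magnitude spectrum with ridges every f0 Hz
x = sum(np.sin(2 * np.pi * f0 * (m + 1) * t) for m in range(8))

spec = np.abs(np.fft.rfft(x))
log_spec = np.log(spec + 1e-3)               # small floor stabilizes the log
ceps = np.fft.irfft(log_spec)                # real cepstrum

# Search the quefrency range corresponding to 300-600 Hz
q_lo, q_hi = fs // 600, fs // 300
q = np.argmax(ceps[q_lo:q_hi]) + q_lo
f0_est = fs / q                              # estimated fundamental, Hz
```

In the localization system, each horn length maps to a distinct fundamental, so recovering f0 from the received spectrum identifies the horn, and hence the direction, of arrival.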
Psychophysics and Neuronal Bases of Sound Localization in Humans
Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.
2013-01-01
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698
Head angle and elevation in classroom environments: implications for amplification.
Ricketts, Todd Andrew; Galster, Jason
2008-04-01
The purpose of this study was to examine children's head orientation relative to the arrival angle of competing signals and the sound source of interest in actual school settings. These data were gathered to provide information relative to the potential for directional benefit. Forty children, 4-17 years of age, with and without hearing loss, completed the study. Deviation in head angle and elevation relative to the direction of sound sources of interest were measured in 40 school environments. Measurements were made on the basis of physical data and videotapes from 3 cameras placed within each classroom. The results revealed similarly accurate head orientation across children with and without hearing loss when focusing on the 33% proportion of time in which children were most accurate. Orientation accuracy was not affected by age. The data also revealed that children with hearing loss were significantly more likely to orient toward brief utterances made by secondary talkers than were children with normal hearing. These data are consistent with the hypothesized association between hearing loss and increased visual monitoring. In addition, these results suggest that age does not limit the potential for signal-to-noise improvements from directivity-based interventions in noisy environments.
Wang, Chong
2018-03-01
In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite sized panel. The forced sound transmission performance that predominates in the frequency range below the coincidence frequency is the focus. Given the point source located along the centerline of the panel, forced sound transmission coefficient is derived through introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel as determined by the radiation impedance. Unlike the case of plane incident waves, sound transmission performance of a finite sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal incidence sound transmission loss expression of plane incident waves can be used if the distance between the source and panel d and the panel surface area S satisfy d/S>0.5. When d/S ≈0.1, the diffuse field sound transmission loss expression may be a good approximation. An empirical expression for d/S=0 is also given.
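For orientation, the classic plane-wave (normal-incidence) mass law against which such results are compared can be written down directly. A sketch with illustrative panel values; the paper's own expression additionally depends on the source-to-panel distance:

```python
import math

# Normal-incidence mass law for an unbounded limp panel:
# TL = 10*log10(1 + (omega*m / (2*rho*c))^2)
def normal_incidence_tl(f_hz, mass_kg_m2, rho=1.21, c=343.0):
    omega = 2.0 * math.pi * f_hz
    return 10.0 * math.log10(1.0 + (omega * mass_kg_m2 / (2.0 * rho * c)) ** 2)

tl = normal_incidence_tl(500.0, 10.0)   # a 10 kg/m^2 panel at 500 Hz
```

Well above the ratio's unity point, doubling either frequency or surface mass adds about 6 dB of transmission loss.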
NASA Technical Reports Server (NTRS)
Fuller, C. R.
1984-01-01
Sound propagation in infinite, semi-infinite, and finite circular ducts with circumferentially varying wall admittances is investigated analytically. The infinite case is considered, and an example demonstrates the effects of wall-admittance distribution on dispersion characteristics and mode shapes. An exact solution is obtained for the semi-infinite case, a circular duct with a flanged opening: sidelobe suppression and circumferential-mode energy scattering leading to radiated-field asymmetry are found. A finite duct system with specified hard-walled pressure sources is examined in detail, evaluating reflection coefficients, transmission losses, and radiated-field directivity. Graphs and diagrams are provided, and the implications of the results obtained for the design of aircraft-turbofan inlet liners are discussed.
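A quantity that recurs in such duct analyses is the cutoff frequency of each circumferential mode, set by the zeros of the Bessel-function derivative J_m'. A small sketch using the standard value j'_{11} ≈ 1.8412 and an illustrative duct radius:

```python
import math

# Cutoff frequency of the (m, n) mode in a hard-walled circular duct:
# f_c = j'_mn * c / (2 * pi * a), with j'_mn the n-th zero of J_m'.
def cutoff_freq(jprime_zero, radius_m, c=343.0):
    return jprime_zero * c / (2.0 * math.pi * radius_m)

fc_11 = cutoff_freq(1.8412, 0.5)   # first spinning mode, 0.5 m radius duct
```

Below its cutoff a mode is evanescent and decays exponentially along the duct, which is why liner design concentrates on the few modes that propagate.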
Functional relevance of acoustic tracheal design in directional hearing in crickets.
Schmidt, Arne K D; Römer, Heiner
2016-10-15
Internally coupled ears (ICEs) allow small animals to reliably determine the direction of a sound source. ICEs are found in a variety of taxa, but crickets have evolved the most complex arrangement of coupled ears: an acoustic tracheal system composed of a large cross-body trachea that connects two entry points for sound in the thorax with the leg trachea of both ears. The key structure that allows for the tuned directionality of the ear is a tracheal inflation (acoustic vesicle) in the midline of the cross-body trachea holding a thin membrane (septum). Crickets are known to display a wide variety of acoustic tracheal morphologies, most importantly with respect to the presence of a single or double acoustic vesicle. However, the functional relevance of this variation is still not known. In this study, we investigated the peripheral directionality of three co-occurring, closely related cricket species of the subfamily Gryllinae. No support could be found for the hypothesis that a double vesicle should be regarded as an evolutionary innovation to (1) increase interaural directional cues, (2) increase the selectivity of the directional filter or (3) provide a better match between directional and sensitivity tuning. Nonetheless, by manipulating the double acoustic vesicle in the rainforest cricket Paroecanthus podagrosus, selectively eliminating the sound-transmitting pathways, we revealed that these pathways contribute almost equally to the total amount of interaural intensity differences, emphasizing their functional relevance in the system. © 2016. Published by The Company of Biologists Ltd.
Noise radiation directivity from a wind-tunnel inlet with inlet vanes and duct wall linings
NASA Technical Reports Server (NTRS)
Soderman, P. T.; Phillips, J. D.
1986-01-01
The acoustic radiation patterns from a 1/15th scale model of the Ames 80- by 120-Ft Wind Tunnel test section and inlet have been measured with a noise source installed in the test section. Data were acquired without airflow in the duct. Sound-absorbent inlet vanes oriented parallel to each other, or splayed with a variable incidence relative to the duct long axis, were evaluated along with duct wall linings. Results show that splayed vanes tend to spread the sound to greater angles than those measured with the open inlet. Parallel vanes narrowed the high-frequency radiation pattern. Duct wall linings had a strong effect on acoustic directivity by attenuating wall reflections. Vane insertion loss was measured. Directivity results are compared with existing data from square ducts. Two prediction methods for duct radiation directivity are described: one is an empirical method based on the test data, and the other is an analytical method based on ray acoustics.
Grating lobe elimination in steerable parametric loudspeaker.
Shi, Chuang; Gan, Woon-Seng
2011-02-01
In the past two decades, the majority of research on the parametric loudspeaker has concentrated on the nonlinear modeling of acoustic propagation and pre-processing techniques to reduce nonlinear distortion in sound reproduction. There are, however, very few studies on directivity control of the parametric loudspeaker. In this paper, we propose an equivalent circular Gaussian source array that approximates the directivity characteristics of the linear ultrasonic transducer array. By using this approximation, the directivity of the sound beam from the parametric loudspeaker can be predicted by the product directivity principle. New theoretical results, which are verified through measurements, are presented to show the effectiveness of the delay-and-sum beamsteering structure for the parametric loudspeaker. Unlike the conventional loudspeaker array, where the spacing between array elements must be less than half the wavelength to avoid spatial aliasing, the parametric loudspeaker can take advantage of grating lobe elimination to extend the spacing of ultrasonic transducer array to more than 1.5 wavelengths in a typical application.
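The delay-and-sum structure referred to above can be sketched for a generic linear array: each element is delayed so that wavefronts add coherently in the steering direction, and the far-field array factor peaks there. Parameters below are illustrative, not the paper's ultrasonic design:

```python
import numpy as np

# Steering delays for a linear array: element n fires later by
# n * d * sin(theta) / c so the wavefronts align toward angle theta.
def steering_delays(n_elems, spacing_m, theta_deg, c=343.0):
    d = np.arange(n_elems) * spacing_m * np.sin(np.radians(theta_deg)) / c
    return d - d.min()                 # shift so all delays are >= 0

# Normalized far-field array factor of the delay-and-sum beamformer.
def array_factor(n_elems, spacing_m, steer_deg, look_deg, freq, c=343.0):
    k = 2.0 * np.pi * freq / c
    n = np.arange(n_elems)
    phase = k * n * spacing_m * (np.sin(np.radians(look_deg))
                                 - np.sin(np.radians(steer_deg)))
    return abs(np.exp(1j * phase).sum()) / n_elems

af_on = array_factor(8, 0.05, 20.0, 20.0, 2000.0)   # looking where we steer
af_off = array_factor(8, 0.05, 20.0, 60.0, 2000.0)  # looking elsewhere
delays = steering_delays(8, 0.05, 20.0)
```

In a conventional array this geometry produces grating lobes once spacing exceeds half a wavelength; the paper's point is that the parametric loudspeaker's product directivity suppresses them, allowing much wider transducer spacing.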
Near-field noise of a single-rotation propfan at an angle of attack
NASA Technical Reports Server (NTRS)
Nallasamy, M.; Envia, E.; Clark, B. J.; Groeneweg, J. F.
1990-01-01
The near field noise characteristics of a propfan operating at an angle of attack are examined utilizing the unsteady pressure field obtained from a 3-D Euler simulation of the propfan flowfield. The near field noise is calculated employing three different procedures: a direct computation method in which the noise field is extracted directly from the Euler solution, and two acoustic-analogy-based frequency domain methods which utilize the computed unsteady pressure distribution on the propfan blades as the source term. The inflow angles considered are -0.4, 1.6, and 4.6 degrees. The results of the direct computation method and one of the frequency domain methods show qualitative agreement with measurements. They show that an increase in the inflow angle is accompanied by an increase in the sound pressure level at the outboard wing boom locations and a decrease in the sound pressure level at the (inboard) fuselage locations. The trends in the computed azimuthal directivities of the noise field also conform to the measured and expected results.
2007-12-01
except for the dive zero time which needed to be programmed during the cruise when the deployment schedule dates were confirmed. _ ACM - Aanderaa ACM...guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally...battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November
Achieving perceptually-accurate aural telepresence
NASA Astrophysics Data System (ADS)
Henderson, Paul D.
Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. 
A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8 degrees for speech and less than 4 degrees with a pink noise burst. The results allow for the density of WFS systems to be selected from the required localization accuracy. Also, by exploiting the ventriloquist effect, the angular resolution of an audio rendering may be reduced when combined with spatially-accurate video.
Gassmann, Martin; Wiggins, Sean M; Hildebrand, John A
2015-10-01
Cuvier's beaked whales (Ziphius cavirostris) were tracked using two volumetric small-aperture (∼1 m element spacing) hydrophone arrays, embedded in a large-aperture (∼1 km element spacing) seafloor hydrophone array of five nodes. This array design can reduce the minimum number of nodes that are needed to record the arrival of a strongly directional echolocation sound from 5 to 2, while providing enough time differences of arrival for a three-dimensional localization without depending on any additional information such as multipath arrivals. To illustrate the capabilities of this technique, six encounters of up to three Cuvier's beaked whales were tracked over a two-month recording period within an area of 20 km² in the Southern California Bight. Encounter periods ranged from 11 min to 33 min. Cuvier's beaked whales were found to reduce the time interval between echolocation clicks while alternating between two inter-click-interval regimes during their descent towards the seafloor. Maximum peak-to-peak source levels of 179 and 224 dB re 1 μPa @ 1 m were estimated for buzz sounds and on-axis echolocation clicks (directivity index = 30 dB), respectively. Source energy spectra of the on-axis clicks show significant frequency components between 70 and 90 kHz, in addition to their typically noted FM upsweep at 40-60 kHz.
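The three-dimensional localization from time differences of arrival (TDOA) described above can be sketched as a small Gauss-Newton solver. This is an illustrative reconstruction, not the authors' code; the node geometry, sound speed, and starting guess are assumptions.

```python
import numpy as np

C = 1500.0  # m/s, nominal speed of sound in seawater (assumption)

def tdoa_localize(receivers, tdoas, x0, iters=100):
    """Gauss-Newton solve for a source position from time differences of
    arrival measured relative to the first receiver."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(receivers - x, axis=1)      # range to each receiver
        resid = (d[1:] - d[0]) / C - tdoas             # modeled minus measured TDOA
        # Jacobian of (d_i - d_0)/C with respect to the source position
        J = ((x - receivers[1:]) / d[1:, None] - (x - receivers[0]) / d[0]) / C
        x -= np.linalg.lstsq(J, resid, rcond=None)[0]
    return x

# Synthetic check: five hypothetical nodes, one source, exact TDOAs
nodes = np.array([[0, 0, 0], [500, 0, 0], [0, 500, 0],
                  [0, 0, 400], [500, 500, 0]], dtype=float)
src = np.array([120.0, 340.0, 250.0])
ranges = np.linalg.norm(nodes - src, axis=1)
tdoas = (ranges[1:] - ranges[0]) / C
est = tdoa_localize(nodes, tdoas, x0=[150.0, 300.0, 200.0])
```

With exact delays and a reasonable starting guess the iteration recovers the source position; real deployments must also contend with timing noise and sound-speed profiles.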
Effects of high combustion chamber pressure on rocket noise environment
NASA Technical Reports Server (NTRS)
Pao, S. P.
1972-01-01
The acoustical environment for a high combustion chamber pressure engine was examined in detail, using both conventional and advanced theoretical analysis. The influence of elevated chamber pressure on the rocket noise environment was established, based on increases in exit velocity and flame temperature, and on changes in basic engine dimensions. Compared to large rocket engines of the same thrust, the overall sound power level is found to be 1.5 dB higher. The peak Strouhal number shifted about one octave lower, to a value near 0.01. Data on apparent sound source location and directivity patterns are also presented.
Detection and tracking of drones using advanced acoustic cameras
NASA Astrophysics Data System (ADS)
Busset, Joël.; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas
2015-10-01
Recent events of drones flying over city centers, official buildings and nuclear installations have stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking them can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor is completely passive and is composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). They are a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may represent only a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal coming back to the sensor. The detection distance depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both laboratory environments and outdoor conditions. It was determined that drones can be tracked up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person 80 to 100 meters away can be captured with acceptable speech intelligibility.
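The acoustic imaging step described above (sound power per direction from signal phases) can be sketched as narrowband delay-and-sum beamforming. The array geometry, frequency, and plane-wave model below are illustrative assumptions, not the system's actual design.

```python
import numpy as np

def steered_power(X, mic_pos, freq, directions, c=343.0):
    """Delay-and-sum power at one frequency. X: complex mic spectra (M,);
    directions: (D, 3) unit vectors pointing toward candidate sources."""
    k = 2.0 * np.pi * freq / c
    steer = np.exp(1j * k * directions @ mic_pos.T)   # phase compensation per mic
    return np.abs(steer @ X) ** 2 / len(X) ** 2

# Synthetic check: a 1 kHz plane wave arriving from the +x direction
mics = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0]])
f = 1000.0
k = 2.0 * np.pi * f / 343.0
X = np.exp(-1j * k * mics @ np.array([1.0, 0.0, 0.0]))   # measured phases
looks = np.array([[1.0, 0, 0], [0, 1.0, 0], [-1.0, 0, 0]])
P = steered_power(X, mics, f, looks)
# power is maximal (exactly 1 for unit-amplitude input) toward +x
```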
Advanced Acoustic Model Technical Reference and User Manual
2009-05-01
…the source directed from the source to the receiver. Aspread = geometrical spherical spreading loss (point source). Aatm = ANSI/ISO atmospheric absorption… Attenuation of sound by molecular relaxation processes in the atmosphere is computed according to the current ANSI/ISO standard. Examples of the weather effects… (A table of altitude-dependent atmospheric data is garbled in extraction and omitted.)
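The Aspread term named in the fragment above is standard acoustics: a point source loses 20·log10(r/r_ref) dB to geometrical spherical spreading. A minimal sketch (the reference distance is an assumption):

```python
import math

def a_spread(r_m, r_ref=1.0):
    """Geometrical spherical spreading loss (dB) for a point source at
    distance r_m, relative to the level at r_ref."""
    return 20.0 * math.log10(r_m / r_ref)

# doubling the distance costs about 6 dB
loss_2m = a_spread(2.0)
```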
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
NASA Astrophysics Data System (ADS)
Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.
2016-08-01
Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, the need for invisible sound sources and a very specific acoustical environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) less than ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics, as well as sound field extrapolation and sound field characterization.
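The multichannel least-squares step can be sketched as regularized pressure matching: solve for source strengths q minimizing ||Hq − p||² + λ||q||². The toy dimensions and random transfer matrix below are assumptions standing in for the 3180-path laboratory setup.

```python
import numpy as np

def pressure_match(H, p, lam=1e-3):
    """Tikhonov-regularized least squares: q = (H^H H + lam I)^-1 H^H p."""
    G = H.conj().T @ H
    return np.linalg.solve(G + lam * np.eye(G.shape[0]), H.conj().T @ p)

# Synthetic check with a small random transfer matrix (8 mics x 4 sources)
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
q_true = rng.standard_normal(4)
p = H @ q_true                       # target field produced by known strengths
q = pressure_match(H, p, lam=1e-8)
```

With noise-free data and a tiny λ the known strengths are recovered; in practice λ trades reproduction accuracy against loudspeaker effort, which is why the record stresses the choice of regularization.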
36 CFR Appendix B to Part 1191 - Americans With Disabilities Act: Scoping
Code of Federal Regulations, 2014 CFR
2014-07-01
... of Sport Activity. That portion of a room or space where the play or practice of a sport occurs..., receivers, and coupling devices to bypass the acoustical space between a sound source and a listener by means of induction loop, radio frequency, infrared, or direct-wired equipment. Boarding Pier. A portion...
A High-Resolution Stopwatch for Cents
ERIC Educational Resources Information Center
Gingl, Z.; Kopasz, K.
2011-01-01
A very low-cost, easy-to-make stopwatch is presented to support various experiments in mechanics. The high-resolution stopwatch is based on two photodetectors connected directly to the microphone input of a sound card. Dedicated free open-source software has been developed and made available to download. The efficiency is demonstrated by a free…
Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao
2017-10-01
A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, in which a minimum power distortionless response (MPDR) beamformer localizes the sources and Tikhonov regularization extracts the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model required in the pressure-matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
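The localization stage of the two-stage algorithm can be sketched with the MPDR spatial spectrum, P(a) = 1/(aᴴR⁻¹a). A small uniform linear array, single plane wave, and noise level below are illustrative assumptions standing in for the paper's 24-element circular array.

```python
import numpy as np

def mpdr_spectrum(R, steering_vectors):
    """MPDR spatial spectrum: P(a) = 1 / (a^H R^-1 a) per steering vector."""
    Rinv = np.linalg.inv(R)
    return np.array([1.0 / np.real(a.conj() @ Rinv @ a) for a in steering_vectors])

c, f, M, dx = 343.0, 2000.0, 8, 0.04
k = 2.0 * np.pi * f / c
x = np.arange(M) * dx                        # uniform linear array positions

def steer(theta_deg):
    """Plane-wave steering vector for arrival angle theta (broadside = 0)."""
    return np.exp(-1j * k * x * np.sin(np.radians(theta_deg)))

# covariance of one plane wave from 20 degrees plus white noise
R = np.outer(steer(20.0), steer(20.0).conj()) + 0.01 * np.eye(M)
angles = np.arange(-90, 91, 5)
P = mpdr_spectrum(R, [steer(t) for t in angles])
peak_deg = int(angles[np.argmax(P)])         # spectrum peaks at the true angle
```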
Gover, Bradford N; Ryan, James G; Stinson, Michael R
2002-11-01
A measurement system has been developed that is capable of analyzing the directional and spatial variations in a reverberant sound field. A spherical, 32-element array of microphones is used to generate a narrow beam that is steered in 60 directions. Using an omnidirectional loudspeaker as excitation, the sound pressure arriving from each steering direction is measured as a function of time, in the form of pressure impulse responses. By subsequent analysis of these responses, the variation of arriving energy with direction is studied. The directional diffusion and directivity index of the arriving sound can be computed, as can the energy decay rate in each direction. An analysis of the 32 microphone responses themselves allows computation of the point-to-point variation of reverberation time and of sound pressure level, as well as the spatial cross-correlation coefficient, over the extent of the array. The system has been validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively from the measures and qualitatively from plots of the arriving energy versus direction. It is anticipated that the system will be of value in evaluating the directional distribution of arriving energy and the degree of diffuseness of sound fields in rooms.
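The per-direction energy decay rate mentioned above is conventionally obtained by Schroeder backward integration of a pressure impulse response, followed by a line fit on the dB decay curve. The sketch below uses a synthetic exponential response as an assumption; it is not the authors' processing chain.

```python
import numpy as np

def decay_curve_db(h):
    """Schroeder backward integration: energy remaining after each sample, in dB."""
    energy = np.cumsum((h ** 2)[::-1])[::-1]
    return 10.0 * np.log10(energy / energy[0])

def decay_rate_db_per_s(h, fs):
    """Slope (dB/s) of a line fitted to the early part of the decay curve."""
    curve = decay_curve_db(h)
    t = np.arange(len(h)) / fs
    n = int(0.8 * len(h))                    # avoid the truncated tail
    return np.polyfit(t[:n], curve[:n], 1)[0]

# Synthetic exponential decay with RT60 = 0.5 s, i.e. about -120 dB/s
fs = 8000
t = np.arange(int(0.5 * fs)) / fs
h = np.exp(-6.91 * t / 0.5)                  # amplitude envelope for RT60 = 0.5 s
rate = decay_rate_db_per_s(h, fs)
```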
Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)
Lakes-Harlan, Reinhard; Scherberich, Jan
2015-01-01
A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574
NASA Astrophysics Data System (ADS)
Zuo, Zhifeng; Maekawa, Hiroshi
2014-02-01
The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The generation mechanism of the newly discovered third sound, due to the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.
Spectral analysis methods for vehicle interior vibro-acoustics identification
NASA Astrophysics Data System (ADS)
Hosseini Fouladi, Mohammad; Nor, Mohd. Jailani Mohd.; Ariffin, Ahmad Kamal
2009-02-01
Noise has various effects on the comfort, performance and health of humans. Sounds are analysed by the human brain based on their frequencies and amplitudes. In a dynamic system, the transmission of sound and vibration depends on the frequency and direction of the input motion and the characteristics of the output. Automotive manufacturers invest considerable effort and money to improve and enhance the vibro-acoustic performance of their products. The enhancement effort may be very difficult and time-consuming if one relies only on the 'trial and error' method without prior knowledge about the sources themselves. Complex noise inside a vehicle cabin originates from various sources and travels through many pathways. The first stage of sound quality refinement is to find the source. It is vital for automotive engineers to identify the dominant noise sources, such as engine noise, exhaust noise and noise due to vibration transmission inside the vehicle. The purpose of this paper is to find the vibro-acoustical sources of noise in a passenger vehicle compartment. The implementation of the spectral analysis method is much faster than 'trial and error' methods, in which parts must be separated to measure the transfer functions. Also, with the spectral analysis method, signals can be recorded in real operational conditions, which leads to more consistent results. A multi-channel analyser is utilised to measure and record the vibro-acoustical signals. Computational algorithms are also employed to identify the contribution of various sources to the measured interior signal. These achievements can be utilised to detect, control and optimise the interior noise performance of road transport vehicles.
Reconstruction of sound source signal by analytical passive TR in the environment with airflow
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu
2017-03-01
In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then supplied to the AP-TR, which reconstructs more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers an alternative to numerical TR for reconstructing sound source signals in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers were conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. A theoretical and experimental comparison between AP-TR and time-domain beamforming for reconstructing sound source signals is also discussed.
Localization of sound sources in a room with one microphone
NASA Astrophysics Data System (ADS)
Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre
2017-08-01
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where the differences between the received signals at different microphones, in terms of phase or attenuation, are known, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. Design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity: only a few voxels in the room are occupied by sources. What is especially interesting about our solution is that it localizes sound sources not only in the horizontal plane, but in full 3D coordinates inside the room.
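The sparse-recovery idea above can be sketched with a single matching-pursuit step: each column of a sensing matrix A holds the simulated response for one candidate voxel, and with one active source the occupied voxel is the column best correlated with the one-microphone observation. The random A below is a stand-in assumption for a dictionary derived from the known room geometry.

```python
import numpy as np

def locate_voxel(A, y):
    """Index of the candidate voxel whose response best matches observation y
    (one matching-pursuit step, adequate when a single source is active)."""
    A = A / np.linalg.norm(A, axis=0)        # normalize candidate responses
    return int(np.argmax(np.abs(A.T @ y)))

rng = np.random.default_rng(1)
A = rng.standard_normal((256, 40))           # 256 samples x 40 candidate voxels
y = 3.0 * A[:, 17]                           # the source occupies voxel 17
```

Multiple simultaneous sources would require iterating (orthogonal matching pursuit) or a convex sparse solver, as the record's compressive-sensing framing suggests.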
On the diffusion of sound in an auditorium
NASA Astrophysics Data System (ADS)
Harris, Cyril M.
2005-09-01
A condition of perfect diffusion of sound is said to exist in an auditorium if, at any point within it, the reverberant sound travels in all directions with equal probability, and if the level of the reflected sound is everywhere equal. In deriving the reverberation time formula, which predicts how long sound will bounce around an enclosed space after the source has stopped, W.C. Sabine assumed perfect diffusion within it. When this is not the case, his formula may predict inaccurate results. For example, the Sabine equation will not give correct results in an auditorium with poor diffusion, as when there is a large overhanging balcony, or if one of the dimensions of the enclosed space is very much greater than the other dimensions, or if the auditorium is divided into spaces having different acoustical properties. An auditorium with excellent diffusion beneficially affects the uniformity of decay of sound within the space and pleases the listener's ear. Among techniques that contribute to good diffusion are the surface irregularities found in the elaborate styles of architecture of the past. Illustrations will be presented showing some approaches within the modern architectural idiom that have yielded successful results.
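Sabine's reverberation time formula, referenced above, in code form (SI units; the example room is hypothetical):

```python
def sabine_t60(volume_m3, absorption_sabins):
    """Sabine reverberation time T60 = 0.161 * V / A, valid under the
    perfect-diffusion assumption discussed above."""
    return 0.161 * volume_m3 / absorption_sabins

# e.g. a 5000 m^3 hall with 400 metric sabins of absorption: about 2.0 s
t60 = sabine_t60(5000.0, 400.0)
```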
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation
Oliva, Aude
2017-01-01
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630
Refraction and Shielding of Noise in Non-Axisymmetric Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
1996-01-01
This paper examines the shielding effect of the mean flow and refraction of sound in non-axisymmetric jets. A general three-dimensional ray-acoustic approach is applied. The methodology is independent of the exit geometry and may account for jet spreading and transverse as well as streamwise flow gradients. We assume that noise is dominated by small-scale turbulence. The source correlation terms, as described by the acoustic analogy approach, are simplified and a model is proposed that relates the source strength to the 7/2 power of the turbulence kinetic energy. Local characteristics of the source such as its strength, time- or length-scale, convection velocity and characteristic frequency are inferred from the mean flow considerations. Compressible Navier-Stokes equations are solved with a k-ε turbulence model. Numerical predictions are presented for a Mach 1.5, aspect ratio 2:1 elliptic jet. The predicted sound pressure level directivity demonstrates favorable agreement with reported data, indicating a relative quiet zone on the side of the major axis of the elliptic jet.
The sound of moving bodies. Ph.D. Thesis - Cambridge Univ.
NASA Technical Reports Server (NTRS)
Brentner, Kenneth Steven
1990-01-01
The importance of the quadrupole source term in the Ffowcs Williams and Hawkings (FWH) equation was addressed. The quadrupole source contains fundamental components of the complete fluid mechanics problem, which are ignored only at the risk of error. The results made it clear that any application of the acoustic analogy should begin with all of the source terms in the FWH theory. The direct calculation of the acoustic field as part of the complete unsteady fluid mechanics problem using CFD is considered. It was shown that aeroacoustic calculations can indeed be made with CFD codes. The results indicate that the acoustic field is the most susceptible component of the computation to numerical error. Therefore, the ability to measure the damping of acoustic waves is absolutely essential to the development of acoustic computations. Essential groundwork for a new approach to the problem of sound generation by moving bodies is presented. This new computational acoustic approach holds the promise of solving many problems hitherto pushed aside.
Analysis of masking effects on speech intelligibility with respect to moving sound stimulus
NASA Astrophysics Data System (ADS)
Chen, Chiung Yao
2004-05-01
The purpose of this study is to compare the degree to which speech is disturbed by a stationary noise source and an apparently moving one (AMN). In the study of sound localization, we found that source-directional sensitivity (SDS) is closely associated with the magnitude of the interaural cross correlation (IACC). Ando et al. [Y. Ando, S. H. Kang, and H. Nagamatsu, J. Acoust. Soc. Jpn. (E) 8, 183-190 (1987)] reported that the correlation of potentials between the left and right inferior colliculus in the auditory pathway of the brain is consistent with the correlation function of the amplitudes input to the two ear-canal entrances. We assume that the degree of disturbance from an apparently moving noise source may differ from that of a source fixed in front of the listener at a constant distance in a free field (no reflections). We found that moving and fixed sources of 1/3-octave narrow-band noise centered at 2 kHz influence speech intelligibility differently. However, the effect of the moving speed on the masking of speech intelligibility remained uncertain.
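The IACC magnitude that the entry relates to source-directional sensitivity can be sketched as the maximum normalized cross-correlation of the two ear signals within ±1 ms of lag. The circular shifting and synthetic signals below are simplifying assumptions.

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Maximum normalized interaural cross-correlation within +/- max_lag_ms.
    Uses circular shifts for brevity (a simplifying assumption)."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return max(abs(np.sum(left * np.roll(right, lag))) / norm
               for lag in range(-max_lag, max_lag + 1))

rng = np.random.default_rng(0)
same = rng.standard_normal(8000)   # identical signal at both ears -> IACC = 1
```

Identical ear signals give an IACC of 1 (high directional sensitivity in the entry's framing), while uncorrelated noise at the two ears gives a value near 0.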
NASA Astrophysics Data System (ADS)
Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.
2012-07-01
Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from the results of interferometric measurements, the total sound pressure level of white noise or the maximal sound pressure level of a pure tone (at every oscillation frequency) needs to be identical to the total sound pressure level of the actual source at the site of acoustic lining on the channel wall.
Acoustic effects of the ATOC signal (75 Hz, 195 dB) on dolphins and whales.
Au, W W; Nachtigall, P E; Pawloski, J L
1997-05-01
The Acoustic Thermometry of Ocean Climate (ATOC) program of Scripps Institution of Oceanography and the Applied Physics Laboratory, University of Washington, will broadcast a low-frequency 75-Hz phase modulated acoustic signal over ocean basins in order to study ocean temperatures on a global scale and examine the effects of global warming. One of the major concerns is the possible effect of the ATOC signal on marine life, especially on dolphins and whales. In order to address this issue, the hearing sensitivity of a false killer whale (Pseudorca crassidens) and a Risso's dolphin (Grampus griseus) to the ATOC sound was measured behaviorally. A staircase procedure with the signal levels being changed in 1-dB steps was used to measure the animals' threshold to the actual ATOC coded signal. The results indicate that small odontocetes such as the Pseudorca and Grampus swimming directly above the ATOC source will not hear the signal unless they dive to a depth of approximately 400 m. A sound propagation analysis suggests that the sound-pressure level at ranges greater than 0.5 km will be less than 130 dB for depths down to about 500 m. Several species of baleen whales produce sounds much greater than 170-180 dB. With the ATOC source on the axis of the deep sound channel (greater than 800 m), the ATOC signal will probably have minimal physical and physiological effects on cetaceans.
A Computational and Experimental Study of Resonators in Three Dimensions
NASA Technical Reports Server (NTRS)
Tam, C. K. W.; Ju, H.; Jones, Michael G.; Watson, Willie R.; Parrott, Tony L.
2009-01-01
In a previous work by the present authors, a computational and experimental investigation of the acoustic properties of two-dimensional slit resonators was carried out. The present paper reports the results of a study extending the previous work to three dimensions. This investigation has two basic objectives. The first is to validate the computed results from direct numerical simulations of the flow and acoustic fields of slit resonators in three dimensions by comparing with experimental measurements in a normal incidence impedance tube. The second objective is to study the flow physics of resonant liners responsible for sound wave dissipation. Extensive comparisons are provided between computed and measured acoustic liner properties with both discrete frequency and broadband sound sources. Good agreements are found over a wide range of frequencies and sound pressure levels. Direct numerical simulation confirms the previous finding in two dimensions that vortex shedding is the dominant dissipation mechanism at high sound pressure intensity. However, it is observed that the behavior of the shed vortices in three dimensions is quite different from those of two dimensions. In three dimensions, the shed vortices tend to evolve into ring (circular in plan form) vortices, even though the slit resonator opening from which the vortices are shed has an aspect ratio of 2.5. Under the excitation of discrete frequency sound, the shed vortices align themselves into two regularly spaced vortex trains moving away from the resonator opening in opposite directions. This is different from the chaotic shedding of vortices found in two-dimensional simulations. The effect of slit aspect ratio at a fixed porosity is briefly studied. For the range of liners considered in this investigation, it is found that the absorption coefficient of a liner increases when the open area of the single slit is subdivided into multiple, smaller slits.
Perceptual constancy in auditory perception of distance to railway tracks.
De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L
2013-07-01
Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative, because prior information or experience of the sound source (its source power, its spectrum, and the typical speed at which it moves) is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners, viz. whether they are indoors or outdoors. When asked to estimate the distance to the track of a railway, listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.
Recent paleoseismicity record in Prince William Sound, Alaska, USA
NASA Astrophysics Data System (ADS)
Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.
2017-12-01
Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, is used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year-1), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinarily high-resolution record of paleoseismicity in the region.
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James; Georgiadis, Nicholas
2005-01-01
The model-based approach, used by the JeNo code to predict jet noise spectral directivity, is described. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right-hand side identified as the source. A Reynolds-averaged Navier-Stokes (RANS) solution yields the required mean flow for the solution of the propagation Green's function in a locally parallel flow. The RANS solution also produces time- and length-scales needed to model the non-compact source, the turbulent velocity correlation tensor, with exponential temporal and spatial functions. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle as well as the angularity of sound at low to moderate Mach numbers, the polar directivity of radiated sound is not entirely captured by this Green's function at high subsonic and supersonic acoustic Mach numbers. Results presented for unheated jets in the Mach number range of 0.51 to 1.8 suggest that near the peak radiation angle of high-speed jets, a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise. A sample Mach 0.90 heated jet is also discussed that highlights the requirements for a comprehensive jet noise prediction model.
Interior sound field control using generalized singular value decomposition in the frequency domain.
Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane
2017-01-01
The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and avoids modifying the external sound field by approximating the control sources as monopole and radial dipole transducers. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effect of control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems through simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
Series expansions of rotating two and three dimensional sound fields.
Poletti, M A
2010-12-01
The cylindrical and spherical harmonic expansions of oscillating sound fields rotating at a constant rate are derived. These expansions are a generalized form of the stationary sound field expansions. The derivations are based on the representation of interior and exterior sound fields using the simple source approach and determination of the simple source solutions with uniform rotation. Numerical simulations of rotating sound fields are presented to verify the theory.
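The stationary interior expansion that these rotating-field series generalize can be illustrated numerically: by the Jacobi-Anger identity, a 2-D plane wave decomposes into cylindrical harmonics J_n(kr)e^{inφ}. A brief sketch of that decomposition (illustrative only, not from the paper; the function name is mine):

```python
import numpy as np
from scipy.special import jv  # cylindrical Bessel function of the first kind

def plane_wave_expansion(k, r, phi, order):
    """Approximate exp(i*k*r*cos(phi)) by a truncated cylindrical
    harmonic series (Jacobi-Anger): sum_n i^n J_n(k*r) exp(i*n*phi)."""
    n = np.arange(-order, order + 1)
    return np.sum((1j ** n) * jv(n, k * r) * np.exp(1j * n * phi))

k, r, phi = 2.0, 1.5, 0.7                       # wavenumber, radius, azimuth
exact = np.exp(1j * k * r * np.cos(phi))        # plane wave at (r, phi)
approx = plane_wave_expansion(k, r, phi, order=20)
print(abs(exact - approx))                      # truncation error, tiny for order >> k*r
```

Because J_n(kr) decays rapidly once |n| exceeds kr, a modest truncation order already reproduces the field to machine precision.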
Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.
2012-01-01
The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances, and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin
2016-01-01
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.
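The spherical and cylindrical attenuation functions referred to above are the standard spreading-loss formulas, 20·log10(r/r0) for spherical and 10·log10(r/r0) for cylindrical spreading; a minimal sketch (my notation, not the study's code):

```python
import numpy as np

def spreading_loss_db(r, r0=1.0, model="spherical"):
    """Transmission loss in dB relative to reference range r0.
    Spherical spreading: 20*log10(r/r0); cylindrical: 10*log10(r/r0)."""
    factor = 20.0 if model == "spherical" else 10.0
    return factor * np.log10(r / r0)

# A source measured at 10 m and again at 100 m drops ~20 dB under
# spherical spreading but only ~10 dB under cylindrical spreading.
print(spreading_loss_db(100, 10, "spherical"))    # 20.0
print(spreading_loss_db(100, 10, "cylindrical"))  # 10.0
```

Fitting both curves to levels of the same source recorded at several ranges, as the study did, shows which geometry better matches the observed decay.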
Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie
2017-01-01
Spherical microphone arrays have been paid increasing attention for their ability to locate a sound source with arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. The reconstruction surface and holography surface are conformal surfaces in the conventional sound field transformation based on generalized Fourier transform. When the sound source is on the cylindrical surface, it is difficult to locate by using spherical surface conformal transform. The non-conformal sound field transformation by making a transfer matrix based on spherical harmonic wave decomposition is proposed in this paper, which can achieve the transformation of a spherical surface into a cylindrical surface by using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, the experiment of sound source localization by using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by using the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065
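The spherical harmonic wave decomposition that the proposed transfer matrix builds on relies on the orthonormality of the Y_n^m basis over the sphere. A small numerical check of that property (an illustrative sketch, not the paper's code; `sph_harm_mn` and `inner` are my own names, and only m ≥ 0 is handled for brevity):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import lpmv, factorial

def sph_harm_mn(m, n, az, pol):
    """Orthonormal spherical harmonic Y_n^m (m >= 0), built from the
    associated Legendre function lpmv (Condon-Shortley phase included)."""
    norm = np.sqrt((2 * n + 1) / (4 * np.pi)
                   * factorial(n - m) / factorial(n + m))
    return norm * lpmv(m, n, np.cos(pol)) * np.exp(1j * m * az)

# Discretize the sphere (azimuth x polar grid) for quadrature.
az = np.linspace(0, 2 * np.pi, 200)
pol = np.linspace(0, np.pi, 200)
AZ, POL = np.meshgrid(az, pol)

def inner(m1, n1, m2, n2):
    """Inner product <Y_{n1}^{m1}, Y_{n2}^{m2}> over the sphere."""
    f = (sph_harm_mn(m1, n1, AZ, POL)
         * np.conj(sph_harm_mn(m2, n2, AZ, POL)) * np.sin(POL))
    return trapezoid(trapezoid(f, az, axis=1), pol)

print(abs(inner(1, 2, 1, 2)))  # ~1: normalization
print(abs(inner(1, 2, 0, 3)))  # ~0: orthogonality
```

This orthogonality is what lets spherical-array pressure samples be projected onto SH coefficients, which a transfer matrix can then propagate to a non-conformal (e.g., cylindrical) surface.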
Calibration of Seismic Sources during a Test Cruise with the new RV SONNE
NASA Astrophysics Data System (ADS)
Engels, M.; Schnabel, M.; Damm, V.
2015-12-01
During autumn 2014, several test cruises of the brand-new German research vessel SONNE were carried out before the first official scientific cruise started in December. In September 2014, BGR conducted a seismic test cruise in the British North Sea. RV SONNE is a multipurpose research vessel and was also designed for the mobile BGR 3D seismic equipment, which was tested successfully during the cruise. We spent two days calibrating the following seismic sources of BGR: a G-gun array (50 l @ 150 bar), a G-gun array (50 l @ 207 bar), and a single GI-gun (3.4 l @ 150 bar). For this experiment, two hydrophones (TC4042 from Reson Teledyne) sampling up to 48 kHz were fixed below a drifting buoy at 20 m and 60 m water depth; the sea bottom was at 80 m depth. The vessel with the seismic sources sailed several profiles, each up to 7 km long, around the buoy in order to cover many different azimuths and distances. We aimed to measure sound pressure level (SPL) and sound exposure level (SEL) under the conditions of the shallow North Sea. Total reflections and refracted waves dominate the recorded wave field, enhance the noise level, and partly screen the direct wave, in contrast to 'true' deep-water calibration based solely on the direct wave. Presented are SPL and RMS power results in the time domain, the decay with distance along profiles, and the somewhat complicated 2D sound radiation pattern modulated by topography. The shading effect of the vessel's hull is significant. In the frequency domain we consider 1/3-octave levels and estimate the amount of energy in frequency ranges not used for reflection seismic processing. Results are presented as a comparison of the three different sources listed above. We compare the measured SPL decay with distance during this experiment with deep-water modeling of seismic sources (Gundalf software) and with published results from calibrations of other marine seismic sources under different conditions, e.g., Breitzke et al. (2008, 2010) with RV Polarstern, Tolstoy et al. (2004) with RV Ewing, Tolstoy et al. (2009) with RV Langseth, and Crone et al. (2014) with RV Langseth.
2014-01-01
This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772
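The core of such a phase-based spatial filter can be sketched per frequency bin: the interchannel phase difference implies a time delay, and bins whose implied delay falls outside what the target direction allows are attenuated. A toy single-frame version (my simplification with hypothetical numbers, not the study's algorithm; with microphones 1 cm apart the largest physical delay is about 29 μs, so a 15 μs threshold keeps roughly frontal sources):

```python
import numpy as np

def phase_mask(X1, X2, freqs, tau_max):
    """Per-bin gain: keep bins whose implied inter-mic delay |tau| is
    within tau_max seconds; attenuate the rest. X1, X2 are spectra of
    one analysis frame from the two microphones."""
    dphi = np.angle(X1 * np.conj(X2))            # interchannel phase difference
    with np.errstate(divide="ignore", invalid="ignore"):
        tau = dphi / (2 * np.pi * freqs)         # implied delay per bin
    tau[freqs == 0] = 0.0                        # DC bin carries no delay cue
    return np.where(np.abs(tau) <= tau_max, 1.0, 0.1)

# Toy spectra: a frontal target (zero phase difference) at bin 3 and an
# off-axis interferer (large phase difference) at bin 7.
freqs = np.arange(9) * 1000.0                    # Hz
X1 = np.zeros(9, complex); X2 = np.zeros(9, complex)
X1[3] = X2[3] = 1.0                              # in-phase target
X1[7] = 1.0; X2[7] = np.exp(-1j * 2.0)           # interferer, ~45 us delay at 7 kHz
g = phase_mask(X1, X2, freqs, tau_max=15e-6)
print(g[3], g[7])                                # 1.0 0.1
```

A real implementation would apply this mask frame by frame over an STFT and smooth the gains; the hard 0.1 floor here only marks the attenuated bins.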
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandenberger, Jill M.; Louchouarn, Patrick; Kuo, Li-Jung
2010-07-05
The results of the Phase 1 Toxics Loading study suggested that runoff from the land surface and atmospheric deposition directly to marine waters have resulted in considerable loads of contaminants to Puget Sound (Hart Crowser et al. 2007). The limited data available for atmospheric deposition fluxes throughout Puget Sound was recognized as a significant data gap. Therefore, this study provided more recent or first reported atmospheric deposition fluxes of PAHs, PBDEs, and select trace elements for Puget Sound. Samples representing bulk atmospheric deposition were collected during 2008 and 2009 at seven stations around Puget Sound spanning from Padilla Bay south to Nisqually River, including Hood Canal and the Straits of Juan de Fuca. Revised annual loading estimates for atmospheric deposition to the waters of Puget Sound were calculated for each of the toxics and demonstrated an overall decrease in the atmospheric loading estimates except for polybrominated diphenyl ethers (PBDEs) and total mercury (THg). The median atmospheric deposition flux of total PBDE (7.0 ng/m2/d) was higher than that of the Hart Crowser (2007) Phase 1 estimate (2.0 ng/m2/d). The THg was not significantly different from the original estimates. The median atmospheric deposition flux for pyrogenic PAHs (34.2 ng/m2/d; without TCB) shows a relatively narrow range across all stations (interquartile range: 21.2-61.1 ng/m2/d) and shows no influence of season. The highest median fluxes for all parameters were measured at the industrial location in Tacoma and the lowest were recorded at the rural sites in Hood Canal and Sequim Bay. Finally, a semi-quantitative apportionment study permitted a first-order characterization of source inputs to the atmosphere of the Puget Sound.
Both biomarker ratios and a principal component analysis confirmed regional data from the Puget Sound and Straits of Georgia region and pointed to the predominance of biomass and fossil fuel (mostly liquid petroleum products such as gasoline and/or diesel) combustion as source inputs of combustion by-products to the atmosphere of the region and subsequently to the waters of Puget Sound.
Sound reduction by metamaterial-based acoustic enclosure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Shanshan; Li, Pei; Zhou, Xiaoming
In many practical systems, acoustic radiation control on noise sources contained within a finite volume by an acoustic enclosure is of great importance, but difficult to accomplish at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties by either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for blockage of sound at low frequencies.
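A common lumped model for such negative-mass behavior (a textbook mass-in-mass sketch, not the paper's simulation) is an internal mass m2 sprung inside a shell mass m1, giving an effective mass m_eff = m1 + m2/(1 - (f/f2)^2) that turns negative in a band just above the internal resonance f2, where transmission through the barrier is strongly suppressed:

```python
import numpy as np

def effective_mass(f, m1, m2, f2):
    """Mass-in-mass resonator: m_eff = m1 + m2 / (1 - (f/f2)^2).
    Positive well below the internal resonance f2, strongly negative
    in a band just above it."""
    return m1 + m2 / (1.0 - (f / f2) ** 2)

# Hypothetical parameters: shell mass 1 kg, internal mass 2 kg, f2 = 100 Hz.
f = np.array([50.0, 110.0])          # Hz; below and just above f2
m_eff = effective_mass(f, m1=1.0, m2=2.0, f2=100.0)
print(m_eff)                         # positive below f2, negative just above it
```

In the negative-mass band the barrier responds out of phase with the incident pressure, which is the mechanism the enclosure exploits.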
Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L
2015-01-01
A seven element, bi-linear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
36 CFR Appendix C to Part 1191 - Architectural Barriers Act: Scoping
Code of Federal Regulations, 2014 CFR
2014-07-01
.... That portion of a room or space where the play or practice of a sport occurs. Assembly Area. A building... devices to bypass the acoustical space between a sound source and a listener by means of induction loop, radio frequency, infrared, or direct-wired equipment. Boarding Pier. A portion of a pier where a boat is...
ERIC Educational Resources Information Center
Huang, Ying; Huang, Qiang; Chen, Xun; Wu, Xihong; Li, Liang
2009-01-01
Perceptual integration of the sound directly emanating from the source with reflections needs both temporal storage and correlation computation of acoustic details. We examined whether the temporal storage is frequency dependent and associated with speech unmasking. In Experiment 1, a break in correlation (BIC) between interaurally correlated…
Head Angle and Elevation in Classroom Environments: Implications for Amplification
ERIC Educational Resources Information Center
Ricketts, Todd Andrew; Galster, Jason
2008-01-01
Purpose: The purpose of this study was to examine children's head orientation relative to the arrival angle of competing signals and the sound source of interest in actual school settings. These data were gathered to provide information relative to the potential for directional benefit. Method: Forty children, 4-17 years of age, with and without…
Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers: Comparative study.
Cambi, Jacopo; Livi, Ludovica; Livi, Walter
2017-05-01
Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions.
Measurement of sound emitted by flying projectiles with aeroacoustic sources
NASA Technical Reports Server (NTRS)
Cho, Y. I.; Shakkottai, P.; Harstad, K. G.; Back, L. H.
1988-01-01
Training projectiles with axisymmetric ring cavities that produce intense tones in an airstream were shot in a straight-line trajectory. A ground-based microphone was used to obtain the angular distribution of sound intensity produced from the flying projectile. Data reduction required calculation of Doppler and attenuation factors. Also, the directional sensitivity of the ground-mounted microphone was measured and used in the data reduction. A rapid angular variation of sound intensity produced from the projectile was found that can be used to plot an intensity contour map on the ground. A full-scale field test confirmed the validity of the aeroacoustic concept of producing a relatively intense whistle from the projectile, and the usefulness of short-range flight tests that yield acoustic data free of uncertainties associated with diffraction, reflection, and refraction at jet boundaries in free-jet tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatano, H.; Watanabe, T.
A new system was developed for the reciprocity calibration of acoustic emission transducers in Rayleigh-wave and longitudinal-wave sound fields. In order to reduce interference from spurious waves due to reflections and mode conversions, a large cylindrical block of forged steel was prepared for the transfer medium, and direct and spurious waves were discriminated between on the basis of their arrival times. Frequency characteristics of velocity sensitivity to both the Rayleigh wave and longitudinal wave were determined in the range of 50 kHz–1 MHz by means of electrical measurements without the use of mechanical sound sources or reference transducers. © 1997 Acoustical Society of America.
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.
Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T
2013-02-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
The detection of differences in the cues to distance by elderly hearing-impaired listeners
Akeroyd, Michael A.; Blaschke, Julia; Gatehouse, Stuart
2013-01-01
This experiment measured the capability of hearing-impaired individuals to discriminate differences in the cues to the distance of spoken sentences. The stimuli were generated synthetically, using a room-image procedure to calculate the direct sound and first 74 reflections for a source placed in a 7 × 9 m room, and then presenting each of those sounds individually through a circular array of 24 loudspeakers. Seventy-seven listeners participated, aged 22-83 years and with hearing levels from −5 to 59 dB HL. In conditions where a substantial change in overall level due to the inverse-square law was available as a cue, the elderly-hearing-impaired listeners did not perform any different from control groups. In other conditions where that cue was unavailable (so leaving the direct-to-reverberant relationship as a cue), either because the reverberant field dominated the direct sound or because the overall level had been artificially equalized, hearing-impaired listeners performed worse than controls. There were significant correlations with listeners’ self-reported distance capabilities as measured by the “SSQ” questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)]. The results demonstrate that hearing-impaired listeners show deficits in the ability to use some of the cues which signal auditory distance. PMID:17348530
Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias
2017-01-01
In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088
33 CFR 86.07 - Directional properties.
Code of Federal Regulations, 2011 CFR
2011-07-01
... properties. The sound pressure level of a directional whistle shall be not more than 4 dB below the sound... forward axis. The sound pressure level of the whistle in any other direction in the horizontal plane shall not be more than 10 dB less than the sound pressure level specified for the forward axis, so that the...
Rationale for the tinnitus retraining therapy trial.
Formby, Craig; Scherer, Roberta
2013-01-01
The Tinnitus Retraining Therapy Trial (TRTT) is a National Institutes of Health-sponsored, multi-centered, placebo-controlled, randomized trial evaluating the efficacy of tinnitus retraining therapy (TRT) and its component parts, directive counseling and sound therapy, as treatments for subjective debilitating tinnitus in the military. The TRTT will enroll 228 individuals at an allocation ratio of 1:1:1 to: (1) directive counseling and sound therapy using conventional sound generators; (2) directive counseling and placebo sound generators; or (3) standard of care as administered in the military. Study centers include a Study Chair's Office, a Data Coordinating Center, and six Military Clinical Centers with treatment and data collection standardized across all clinics. The primary outcome is change in Tinnitus Questionnaire (TQ) score assessed longitudinally at 3, 6, 12, and 18-month follow-up visits. Secondary outcomes include: Change in TQ sub-scales, Tinnitus Handicap Inventory, Tinnitus Functional Index, and TRT interview visual analog scale; audiometric and psychoacoustic measures; and change in quality of life. The TRTT will evaluate TRT efficacy by comparing TRT (directive counseling and conventional sound generators) with standard of care; directive counseling by comparing directive counseling plus placebo sound generators versus standard of care; and sound therapy by comparing conventional versus placebo sound generators. We hypothesize that full TRT will be more efficacious than standard of care, directive counseling and placebo sound generators more efficacious than standard of care, and conventional more efficacious than placebo sound generators in habituating the tinnitus awareness, annoyance, and impact on the study participant's life.
Optimum sensor placement for microphone arrays
NASA Astrophysics Data System (ADS)
Rabinkin, Daniel V.
Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. 
The shape and gain advantage of the capture region for these techniques is described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
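The TDOA estimation step described in this record builds on the cross-power spectrum phase. As a minimal illustration (not the modified Omologo-Svaizer expression used in the thesis), the standard PHAT-weighted generalized cross-correlation can be sketched as:

```python
import numpy as np

def gcc_phat_delay(x, y, fs):
    """Estimate the time delay of signal y relative to signal x using the
    cross-power spectrum phase (PHAT-weighted generalized cross-correlation).
    A positive result means y lags x."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = np.conj(X) * Y
    R /= np.maximum(np.abs(R), 1e-12)   # phase transform: discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lag = int(np.argmax(np.abs(cc))) - max_shift
    return lag / fs
```

The PHAT weighting whitens the cross-spectrum, which sharpens the correlation peak under reverberation; the delay estimates from several microphone pairs would then feed the least-squares position search described above.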
Reaching nearby sources: comparison between real and virtual sound and visual targets
Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.
2014-01-01
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855
Directional acoustic measurements by laser Doppler velocimeters. [for jet aircraft noise
NASA Technical Reports Server (NTRS)
Mazumder, M. K.; Overbey, R. L.; Testerman, M. K.
1976-01-01
Laser Doppler velocimeters (LDVs) were used as velocity microphones to measure sound pressure levels in the range of 90-130 dB, spectral components, and two-point cross-correlation functions for acoustic noise source identification. Close agreement between LDV and microphone data is observed. It was concluded that directional sensitivity and the ability to measure remotely make LDVs useful tools for acoustic measurement where placement of any physical probe is difficult or undesirable, as in the diagnosis of jet aircraft noise.
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Received time-series at a short distance from the source allow the identification of distinct paths; four of these are the direct path, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water-column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.
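The density-propagation step described above can be sketched generically: draw samples from the arrival-time densities and solve the linearized inverse for each sample, yielding samples (and hence a density) of the geometric parameters. The forward matrix G and parameter vector m below are placeholders, not the paper's actual model:

```python
import numpy as np

def propagate_arrival_densities(G, t_samples):
    """Push samples of arrival-time densities through a linearized forward
    model t = G @ m by solving a least-squares problem per sample; the
    returned rows are samples of the model-parameter density."""
    return np.array([np.linalg.lstsq(G, t, rcond=None)[0]
                     for t in t_samples])
```

A histogram of the returned parameter samples approximates the posterior density from which maximum a posteriori estimates can be read off.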
Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin C.
2016-01-06
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with a 60 hp outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that the spherical model more closely approximates the observed sound attenuation.
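The spherical-versus-cylindrical comparison above reduces to two textbook geometric spreading laws; the sketch below uses the standard 1 m reference convention rather than any site-specific values from the study:

```python
import math

def transmission_loss_db(r, model="spherical"):
    """Geometric transmission loss in dB relative to 1 m: spherical
    spreading loses 20*log10(r) (point source in open water), cylindrical
    loses 10*log10(r) (sound trapped between surface and bottom)."""
    coeff = {"spherical": 20.0, "cylindrical": 10.0}[model]
    return coeff * math.log10(r)

def received_level_db(source_level_db, r, model="spherical"):
    """Received level = source level (dB re 1 uPa @ 1 m) minus the
    geometric transmission loss at range r (metres)."""
    return source_level_db - transmission_loss_db(r, model)
```

Fitting measured levels at several ranges against both curves, as the study did, indicates which spreading regime better matches the site.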
Underwater auditory localization by a swimming harbor seal (Phoca vitulina).
Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido
2006-09-01
The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half-circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound the seal localized the sound sources with a mean deviation of 2.8 degrees and in trials with the single sound with a mean deviation of 4.5 degrees. In a second experiment minimum audible angles of the stationary animal were found to be 9.8 degrees in front and 9.7 degrees in the back of the seal's head.
Marine mammal audibility of selected shallow-water survey sources.
MacGillivray, Alexander O; Racca, Roberto; Li, Zizheng
2014-01-01
Most attention about the acoustic effects of marine survey sound sources on marine mammals has focused on airgun arrays, with other common sources receiving less scrutiny. Sound levels above hearing threshold (sensation levels) were modeled for six marine mammal species and seven different survey sources in shallow water. The model indicated that odontocetes were most likely to hear sounds from mid-frequency sources (fishery, communication, and hydrographic systems), mysticetes from low-frequency sources (sub-bottom profiler and airguns), and pinnipeds from both mid- and low-frequency sources. High-frequency sources (side-scan and multibeam) generated the lowest estimated sensation levels for all marine mammal species groups.
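The sensation-level metric used in this record is the received band level minus the species' hearing threshold at that frequency. A toy sketch (the band levels and thresholds below are placeholders, not the modeled audiograms from the study):

```python
def sensation_levels(received_db, thresholds_db):
    """Sensation level per frequency band: received sound level (dB)
    minus the hearing threshold (dB) at the same band; values <= 0
    mean the band is below the animal's hearing threshold."""
    return {f: received_db[f] - thresholds_db[f] for f in received_db}
```

Comparing these per-band values across species groups is what lets the model rank which survey sources each group is most likely to hear.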
Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.
Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael
2014-04-01
The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
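For reference, the spatial coherence of an ideal three-dimensional diffuse field between two points a distance d apart follows sinc(kd); deviations from this curve are one way to quantify how "non-ideal" a synthesized diffuse field is. A minimal sketch:

```python
import math

def diffuse_field_coherence(f, d, c=343.0):
    """Spatial coherence of an ideal 3-D diffuse sound field between two
    points separated by d metres at frequency f (Hz): sin(kd)/(kd),
    with k = 2*pi*f/c and c the speed of sound in air."""
    kd = 2.0 * math.pi * f * d / c
    return 1.0 if kd == 0.0 else math.sin(kd) / kd
```

Because coherence between reference sensors and the disturbance falls off with frequency and spacing, a feedforward controller needs coherent reference information at exactly the frequencies it is asked to cancel.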
Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task
Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.
2012-01-01
To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030
Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V
2006-02-01
To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 microPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 microPa) of the sounds were calculated, using a guideline sound propagation model for shallow water. 
The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.
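The discomfort-zone calculation above amounts to inverting a propagation law for the range at which the received level falls to the 107 dB re 1 microPa threshold. As a rough sketch only: the study used a guideline shallow-water propagation model, whereas the function below assumes a simple 15*log10(r) "practical spreading" law, which is a common shallow-water approximation, not the paper's model:

```python
import math

def discomfort_radius_m(source_level_db, threshold_db=107.0, spreading=15.0):
    """Range (m) at which the received level drops to the behavioural
    discomfort threshold, assuming RL = SL - spreading*log10(r).
    spreading=15 is an assumed 'practical spreading' coefficient;
    site-specific propagation models will give different radii."""
    return 10.0 ** ((source_level_db - threshold_db) / spreading)
```

Under this assumption, each 15 dB added to the source level multiplies the discomfort radius by ten, which is why source levels must be adapted per area.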
Cortical Reorganisation during a 30-Week Tinnitus Treatment Program
McMahon, Catherine M.; Ibrahim, Ronny K.; Mathur, Ankit
2016-01-01
Subjective tinnitus is characterised by the conscious perception of a phantom sound. Previous studies have shown that individuals with chronic tinnitus have disrupted sound-evoked cortical tonotopic maps, time-shifted evoked auditory responses, and altered oscillatory cortical activity. The main objectives of this study were to: (i) compare sound-evoked brain responses and cortical tonotopic maps in individuals with bilateral tinnitus and those without tinnitus; and (ii) investigate whether changes in these sound-evoked responses occur with amelioration of the tinnitus percept during a 30-week tinnitus treatment program. Magnetoencephalography (MEG) recordings of 12 bilateral tinnitus participants and 10 control normal-hearing subjects reporting no tinnitus were recorded at baseline, using 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz tones presented monaurally at 70 dB SPL through insert tube phones. For the tinnitus participants, MEG recordings were obtained at 5-, 10-, 20- and 30- week time points during tinnitus treatment. Results for the 500 Hz and 1000 Hz sources (where hearing thresholds were within normal limits for all participants) showed that the tinnitus participants had significantly larger source strengths and more anteriorly located sources when compared to the non-tinnitus participants. During the 30-week tinnitus treatment, the participants’ 500 Hz and 1000 Hz source strengths remained higher than the non-tinnitus participants; however, the source locations shifted towards the direction recorded from the non-tinnitus control group. Further, in the left hemisphere, there was a time-shifted association between the trajectory of change of the individual’s objective (source strength and anterior-posterior source location) and subjective measures (using tinnitus reaction questionnaire, TRQ). 
The differences in source strength between the two groups suggest that individuals with tinnitus have enhanced central gain which is not significantly influenced by the tinnitus treatment, and may result from the hearing loss per se. On the other hand, the shifts in the tonotopic map towards the non-tinnitus participants’ source location suggests that the tinnitus treatment might reduce the disruptions in the map, presumably produced by the tinnitus percept directly or indirectly. Further, the similarity in the trajectory of change across the objective and subjective parameters after time-shifting the perceptual changes by 5 weeks suggests that during or following treatment, perceptual changes in the tinnitus percept may precede neurophysiological changes. Subgroup analyses conducted by magnitude of hearing loss suggest that there were no differences in the 500 Hz and 1000 Hz source strength amplitudes for the mild-moderate compared with the mild-severe hearing loss subgroup, although the mean source strength was consistently higher for the mild-severe subgroup. Further, the mild-severe subgroup had 500 Hz and 1000 Hz source locations located more anteriorly (i.e., more disrupted compared to the control group) compared to the mild-moderate group, although this was trending towards significance only for the 500 Hz left hemisphere source. While the small numbers of participants within the subgroup analyses reduce the statistical power, this study suggests that those with greater magnitudes of hearing loss show greater cortical disruptions with tinnitus and that tinnitus treatment appears to reduce the tonotopic map disruptions but not the source strength (or central gain). PMID:26901425
Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian
2016-03-22
Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.
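The compressive sensing step exploits the sparsity of the noise map in the source domain. One standard greedy solver for such sparse recovery problems is orthogonal matching pursuit; the sketch below is generic (the propagation matrix A, the sparsity k, and the test dimensions are illustrative assumptions, not the algorithm or geometry used in the paper):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: estimate a k-sparse vector x of
    source levels from sensor measurements y = A @ x, where column j
    of A models propagation from candidate source location j to the
    sensors. Assumes the columns of A have unit norm."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the candidate location best correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected sources jointly, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

With far fewer sensors than candidate locations the individual sources may not be pinpointed exactly, which mirrors the paper's observation that aggregated measures of the noise map are more reliable than per-source estimates.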
Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers
Cambi, Jacopo; Livi, Ludovica; Livi, Walter
2017-01-01
Objectives Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. Methods This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. Results The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Conclusion Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions. PMID:28690888
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl
Baxter, Caitlin S.; Takahashi, Terry T.
2013-01-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801
Wind-instrument reflection function measurements in the time domain.
Keefe, D H
1996-04-01
Theoretical and computational analyses of wind-instrument sound production in the time domain have emerged as useful tools for understanding musical instrument acoustics, yet there exist few experimental measurements of the air-column response directly in the time domain. A new experimental, time-domain technique is proposed to measure the reflection function response of woodwind and brass-instrument air columns. This response is defined at the location of sound regeneration in the mouthpiece or double reed. A probe assembly comprising an acoustic source and a microphone is inserted directly into the air-column entryway using a foam plug to ensure a leak-free fit. An initial calibration phase involves measurements on a single cylindrical tube of known dimensions. Measurements are presented on an alto saxophone and euphonium. The technique has promise for testing any musical-instrument air column using a single probe assembly and foam plugs over a range of diameters typical of air-column entryways.
Structure of supersonic jet flow and its radiated sound
NASA Technical Reports Server (NTRS)
Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.
1994-01-01
The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.
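For reference, Lighthill's theory as invoked above recasts the flow equations as a wave equation driven by a quadrupole source built from the computed near field; in its standard form (viscous stresses neglected):

```latex
% Lighthill's acoustic analogy (standard form, viscous stresses neglected)
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
  = \frac{\partial^2 T_{ij}}{\partial x_i \, \partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij}.

% Far-field pressure via the retarded-time volume integral over the
% source region V captured by the large-eddy simulation
p'(\mathbf{x}, t) \approx
  \frac{x_i x_j}{4\pi c_0^2 |\mathbf{x}|^3}
  \frac{\partial^2}{\partial t^2}
  \int_V T_{ij}\!\left(\mathbf{y},\, t - \frac{|\mathbf{x}|}{c_0}\right)
  \mathrm{d}\mathbf{y}.
```

The simulation supplies the time history of T_ij in the source region, and the retarded-time integral then yields the far-field noise.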
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-sheng R.; Allen, Christopher S.
2009-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound-blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating measured sound intensities over the measured surfaces renders the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match reverberation levels of the ISS US Lab interior for speech frequency bands, i.e., 0.5k, 1k, 2k, 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of Thinsulate was modeled in three methods: Sabine equation with measured mockup interior reverberation time T60, layup model based on past impedance tube testing, and layup model plus air absorption correction. 
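The sound power estimate described above reduces to integrating the measured normal intensity over the surfaces of the measurement box; a minimal sketch (the surface values in the test are illustrative, not measured data):

```python
import math

def sound_power_level_db(surface_intensities, surface_areas, w_ref=1e-12):
    """Estimate radiated sound power by summing normal sound intensity
    (W/m^2) times area (m^2) over the surfaces of the measurement box,
    then express it as a sound power level in dB re 1 pW."""
    watts = sum(i * a for i, a in zip(surface_intensities, surface_areas))
    return 10.0 * math.log10(watts / w_ref)
```

Because the bottom surface was blocked rather than measured, its contribution is assumed to be reflected back through the five measured surfaces, so the five-surface sum approximates the total radiated power.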
The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match measurements well for our CM mockup, despite its more complicated shape. Additionally in FY09, background noise simulation (to NC, Noise Criterion, targets) and MRT (Modified Rhyme Test) procedures were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
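The sound power estimate described above reduces to a surface integral approximated as a sum of intensity × area products over the five measured faces of the grid (the blocked bottom face is omitted). A minimal sketch; the intensity and area values are purely illustrative, not the study's data:

```python
import math

def sound_power(intensities_w_m2, areas_m2):
    """Approximate the surface integral as a sum of intensity * area."""
    return sum(i * a for i, a in zip(intensities_w_m2, areas_m2))

def power_level_db(power_w, ref_w=1e-12):
    """Sound power level in dB re 1 pW."""
    return 10.0 * math.log10(power_w / ref_w)

# top, front, back, right, left -- hypothetical measurements
intensities = [2e-5, 1e-5, 1e-5, 0.5e-5, 0.5e-5]  # W/m^2
areas = [0.5, 0.3, 0.3, 0.2, 0.2]                 # m^2

W = sound_power(intensities, areas)   # total radiated power, W
Lw = power_level_db(W)                # sound power level, dB re 1 pW
```

In practice the standards for intensity-based power determination add corrections for grid density and background noise; this sketch keeps only the core bookkeeping.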
Underwater sound of rigid-hulled inflatable boats.
Erbe, Christine; Liong, Syafrin; Koessler, Matthew Walter; Duncan, Alec J; Gourlay, Tim
2016-06-01
Underwater sound of rigid-hulled inflatable boats was recorded 142 times in total, over 3 sites: 2 in southern British Columbia, Canada, and 1 off Western Australia. Underwater sound peaked between 70 and 400 Hz, exhibiting strong tones in this frequency range related to engine and propeller rotation. Sound propagation models were applied to compute monopole source levels, with the source assumed 1 m below the sea surface. Broadband source levels (10-48 000 Hz) increased from 134 to 171 dB re 1 μPa @ 1 m with speed from 3 to 16 m/s (10-56 km/h). Source power spectral density percentile levels and 1/3 octave band levels are given for use in predictive modeling of underwater sound of these boats as part of environmental impact assessments.
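Back-computing a monopole source level from a received level means adding the modeled propagation loss. The study used full propagation models; the sketch below substitutes textbook spherical/cylindrical spreading and hypothetical numbers, purely to illustrate the bookkeeping:

```python
import math

def source_level_db(received_db, range_m, spreading="spherical"):
    """Monopole source level re 1 m: received level plus propagation loss.
    Real assessments use full propagation models; simple geometric
    spreading is the textbook stand-in used here."""
    if spreading == "spherical":
        loss = 20.0 * math.log10(range_m)
    elif spreading == "cylindrical":
        loss = 10.0 * math.log10(range_m)
    else:
        raise ValueError("unknown spreading law: " + spreading)
    return received_db + loss

# hypothetical: 120 dB re 1 uPa received at 100 m range
sl = source_level_db(120.0, 100.0)  # 160 dB re 1 uPa @ 1 m
```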
Binaural Processing of Multiple Sound Sources
2016-08-18
Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant...8217Cochlear Implant Performance in Realistic Listening Environments,’ Dr. Michael Dorman, Principal Investigator, Dr. William Yost unpaid advisor. The other... Listeners Move. The CI research was also supported by an NIH grant (“Cochlear Implant Performance in Realistic Listening Environments,” Dr. Michael Dorman
The acoustic vector sensor: a versatile battlefield acoustics sensor
NASA Astrophysics Data System (ADS)
de Bree, Hans-Elias; Wind, Jelmer W.
2011-06-01
The invention of the Microflown sensor has made it possible to measure acoustic particle velocity directly. An acoustic vector sensor (AVS) measures the particle velocity in three orthogonal directions, from which the source direction can be derived, together with the pressure. The sensor is a uniquely versatile battlefield sensor because its size is a few millimeters and it is sensitive to sound from 10 Hz to 10 kHz. This article shows field test results of acoustic vector sensors measuring rifles, heavy artillery, fixed-wing aircraft, and helicopters. Experimental data show that the sensor is suitable as a ground sensor, mounted on a vehicle, and on a UAV.
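One standard way to turn pressure plus three velocity components into a bearing is the time-averaged active intensity vector. The sketch below demonstrates this on a synthetic plane wave; it is an illustrative method, not necessarily the processing used in the cited field tests:

```python
import math

def doa_from_avs(p, ux, uy, uz):
    """Bearing from the time-averaged active intensity vector of a
    pressure + 3-axis particle-velocity sensor. Returns (azimuth,
    elevation) of the propagation direction in degrees; the source
    lies in the opposite direction along the same axis."""
    n = len(p)
    ix = sum(pi * u for pi, u in zip(p, ux)) / n
    iy = sum(pi * u for pi, u in zip(p, uy)) / n
    iz = sum(pi * u for pi, u in zip(p, uz)) / n
    az = math.degrees(math.atan2(iy, ix))
    el = math.degrees(math.atan2(iz, math.hypot(ix, iy)))
    return az, el

# synthetic plane wave propagating at 30 degrees azimuth, 0 elevation
N = 1000
p = [math.sin(2.0 * math.pi * 50.0 * t / N) for t in range(N)]
ux = [v * math.cos(math.radians(30.0)) for v in p]
uy = [v * math.sin(math.radians(30.0)) for v in p]
uz = [0.0] * N
az, el = doa_from_avs(p, ux, uy, uz)
```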
Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina
2013-02-01
Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
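One of the two distance cues named above, the direct-to-reverberant ratio, can be computed from a room impulse response by splitting the energy at a short window after the direct-path peak. A sketch under that common convention; the window length and the toy impulse response are assumptions, not the study's room-image stimuli:

```python
import math

def drr_db(ir, fs, direct_window_ms=2.5):
    """Direct-to-reverberant ratio: energy in a short window after the
    direct-path peak versus all later energy. The window length is a
    free parameter of this common convention."""
    peak = max(range(len(ir)), key=lambda i: abs(ir[i]))
    cut = peak + int(direct_window_ms * 1e-3 * fs)
    direct = sum(x * x for x in ir[:cut + 1])
    reverb = sum(x * x for x in ir[cut + 1:])
    return 10.0 * math.log10(direct / reverb)

# toy impulse response: a direct spike followed by a weak decaying tail
fs = 8000
ir = [0.0] * 400
ir[10] = 1.0
for i in range(40, 400):
    ir[i] = 0.05 * math.exp(-(i - 40) / 80.0)
d = drr_db(ir, fs)
```

Moving the simulated source farther away lowers the direct energy while the reverberant tail stays roughly constant, so `d` decreases with distance, which is what makes the ratio usable as a distance cue.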
Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.
Tollin, Daniel J; Yin, Tom C T
2003-10-01
The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.
NASA Astrophysics Data System (ADS)
Gover, Bradford Noel
The problem of hands-free speech pick-up is introduced, and it is shown how details of the spatial properties of the reverberant field may be useful for enhanced design of microphone arrays. From this motivation, a broadly-applicable measurement system has been developed for the analysis of the directional and spatial variations in reverberant sound fields. Two spherical, 32-element arrays of microphones are used to generate narrow beams over two different frequency ranges, together covering 300--3300 Hz. Using an omnidirectional loudspeaker as excitation in a room, the pressure impulse response in each of 60 steering directions is measured. Through analysis of these responses, the variation of arriving energy with direction is studied. The system was first validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively through numerical descriptors and qualitatively from plots of the arriving energy versus direction. The system was then used to measure the sound fields in several actual rooms. Through both qualitative and quantitative output, these sound fields were seen to be highly anisotropic, influenced greatly by the direct sound and early-arriving reflections. Furthermore, the rate of sound decay was not independent of direction, sound being absorbed more rapidly in some directions than in others. These results are discussed in the context of the original motivation, and methods for their application to enhanced speech pick-up using microphone arrays are proposed.
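The beam-steering idea behind such arrays can be sketched with the simplest beamformer, delay-and-sum: compensate each channel's plane-wave arrival delay toward a look direction and sum. This is a generic stand-in (nearest-sample delays, a two-microphone toy), not the 32-element spherical arrays' actual processing:

```python
def delay_and_sum(signals, mic_pos, look_dir, fs, c=343.0):
    """Steer an array toward the unit vector look_dir by compensating
    each channel's plane-wave arrival delay and summing. Nearest-sample
    delays for brevity; real systems interpolate."""
    delays = [sum(p * u for p, u in zip(pos, look_dir)) / c for pos in mic_pos]
    dmin = min(delays)
    n_out = len(signals[0])
    out = [0.0] * n_out
    for sig, d in zip(signals, delays):
        shift = int(round((d - dmin) * fs))
        for n in range(n_out):
            if 0 <= n - shift < len(sig):
                out[n] += sig[n - shift]
    return [v / len(signals) for v in out]

# two mics 0.343 m apart on the x axis; impulse plane wave from +x
fs = 1000
s = [0.0] * 10
s[5] = 1.0
mic_a = s[:]            # mic at the origin
mic_b = s[1:] + [0.0]   # mic nearer the source hears it one sample early
out = delay_and_sum([mic_a, mic_b],
                    [(0.0, 0.0, 0.0), (0.343, 0.0, 0.0)],
                    (1.0, 0.0, 0.0), fs)
```

Signals arriving from the look direction add coherently (the aligned impulse), while off-axis arrivals smear and partially cancel; scanning `look_dir` over a grid yields the arriving-energy-versus-direction plots the abstract describes.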
Acoustic effects of the ATOC signal (75 Hz, 195 dB) on dolphins and whales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Au, W.W.; Nachtigall, P.E.; Pawloski, J.L.
1997-05-01
The Acoustic Thermometry of Ocean Climate (ATOC) program of Scripps Institution of Oceanography and the Applied Physics Laboratory, University of Washington, will broadcast a low-frequency 75-Hz phase-modulated acoustic signal over ocean basins in order to study ocean temperatures on a global scale and examine the effects of global warming. One of the major concerns is the possible effect of the ATOC signal on marine life, especially on dolphins and whales. In order to address this issue, the hearing sensitivity of a false killer whale (Pseudorca crassidens) and a Risso's dolphin (Grampus griseus) to the ATOC sound was measured behaviorally. A staircase procedure with the signal levels being changed in 1-dB steps was used to measure the animals' threshold to the actual ATOC coded signal. The results indicate that small odontocetes such as Pseudorca and Grampus swimming directly above the ATOC source will not hear the signal unless they dive to a depth of approximately 400 m. A sound propagation analysis suggests that the sound-pressure level at ranges greater than 0.5 km will be less than 130 dB for depths down to about 500 m. Several species of baleen whales produce sounds much greater than 170–180 dB. With the ATOC source on the axis of the deep sound channel (greater than 800 m), the ATOC signal will probably have minimal physical and physiological effects on cetaceans. © 1997 Acoustical Society of America.
Acoustical deterrence of Silver Carp (Hypophthalmichthys molitrix)
Brooke J. Vetter,; Cupp, Aaron R.; Fredricks, Kim T.; Gaikowski, Mark P.; Allen F. Mensinger,
2015-01-01
The invasive Silver Carp (Hypophthalmichthys molitrix) dominate large regions of the Mississippi River drainage and continue to expand their range northward, threatening the Laurentian Great Lakes. This study found that complex broadband sound (0-10 kHz) is effective in altering the behavior of Silver Carp, with implications for deterrent barriers or potential control measures (e.g., herding fish into nets). The phonotaxic response of Silver Carp was investigated using controlled experiments in outdoor concrete ponds (10 × 4.9 × 1.2 m). Pure tones (500-2000 Hz) and complex sound (underwater field recordings of outboard motors) were broadcast using underwater speakers. Silver Carp always reacted to the complex sounds with negative phonotaxis, swimming away from the sound source, and by alternating the speaker location, Silver Carp could be directed consistently, up to 37 consecutive times, to opposite ends of the large outdoor pond. However, fish habituated quickly to pure tones, reacting to only approximately 5% of these presentations and never showing more than two consecutive responses. Previous studies have demonstrated the success of sound barriers in preventing Silver Carp movement using pure tones, and this research suggests that a complex sound stimulus would be an even more effective deterrent.
What's what in auditory cortices?
Retsa, Chrysa; Matusz, Pawel J; Schnupp, Jan W H; Murray, Micah M
2018-08-01
Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (i.e. who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to their pitch, speaker identity, uttered syllable ('what' dimensions) or their location ('where'). Sound acoustics were held constant across blocks; the only manipulation involved the sound dimension that participants had to attend to. The task-relevant dimension was varied across blocks. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography; the latter necessarily follow from changes in the configuration of underlying sources. There were no behavioural differences in discrimination of sounds across the 4 feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them. Copyright © 2018 Elsevier Inc. All rights reserved.
Acoustic signatures of sound source-tract coupling.
Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B
2011-04-01
Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated with the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society
J-85 jet engine noise measured in the ONERA S1 wind tunnel and extrapolated to far field
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Julienne, Alain; Atencio, Adolph, Jr.
1991-01-01
Noise from a J-85 turbojet with a conical, convergent nozzle was measured in simulated flight in the ONERA S1 Wind Tunnel. Data are presented for several flight speeds up to 130 m/sec and for radiation angles of 40 to 160 degrees relative to the upstream direction. The jet was operated with subsonic and sonic exhaust speeds. A moving microphone on a 2 m sideline was used to survey the radiated sound field in the acoustically treated, closed test section. The data were extrapolated to a 122 m sideline by means of a multiple-sideline source-location method, which was used to identify the acoustic source regions, directivity patterns, and near field effects. The source-location method is described along with its advantages and disadvantages. Results indicate that the effects of simulated flight on J-85 noise are significant. At the maximum forward speed of 130 m/sec, the peak overall sound levels in the aft quadrant were attenuated approximately 10 dB relative to sound levels of the engine operated statically. As expected, the simulated flight and static data tended to merge in the forward quadrant as the radiation angle approached 40 degrees. There is evidence that internal engine or shock noise was important in the forward quadrant. The data are compared with published predictions for flight effects on pure jet noise and internal engine noise. A new empirical prediction is presented that relates the variation of internally generated engine noise or broadband shock noise to forward speed. Measured near field noise extrapolated to far field agrees reasonably well with data from similar engines tested statically outdoors, in flyover, in a wind tunnel, and on the Bertin Aerotrain. Anomalies in the results for the forward quadrant and for angles above 140 degrees are discussed. The multiple-sideline method proved to be cumbersome in this application, and it did not resolve all of the uncertainties associated with measurements of jet noise close to the jet.
The simulation was complicated by wind-tunnel background noise and the propagation of low frequency sound around the circuit.
Acoustic ground impedance meter
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.
1981-01-01
A compact, portable instrument was developed to measure the acoustic impedance of the ground, or other surfaces, by direct pressure-volume velocity measurement. A Helmholtz resonator, constructed of heavy-walled stainless steel but open at the bottom, is positioned over the surface having the unknown impedance. The sound source, a cam-driven piston of known stroke and thus known volume velocity, is located in the neck of the resonator. The cam speed is variable up to a maximum of 3600 rpm. The sound pressure at the test surface is measured by means of a microphone flush-mounted in the wall of the chamber. An optical monitor of the piston displacement permits measurement of the phase angle between the volume velocity and the sound pressure, from which the real and imaginary parts of the impedance can be evaluated. Measurements using a 5-lobed cam can be made up to 300 Hz. Detailed design criteria and results on a soil sample are presented.
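Given the measured pressure amplitude, the known volume velocity amplitude from the cam-driven piston, and the phase angle from the optical monitor, the complex impedance follows directly from Z = p/U. A sketch with hypothetical numbers:

```python
import cmath
import math

def acoustic_impedance(p_amp, u_amp, phase_deg):
    """Complex acoustic impedance Z = p / U, given amplitudes and the
    phase of pressure relative to volume velocity; returns (Re, Im)."""
    z = (p_amp / u_amp) * cmath.exp(1j * math.radians(phase_deg))
    return z.real, z.imag

# hypothetical: 2 Pa pressure, 1e-3 m^3/s volume velocity, 60 deg phase
re, im = acoustic_impedance(2.0, 1.0e-3, 60.0)
```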
A Hybrid RANS/LES Approach for Predicting Jet Noise
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2006-01-01
Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.
An underwater ranging system based on photoacoustic effect occurring on target surface
NASA Astrophysics Data System (ADS)
Ni, Kai; Hu, Kai; Li, Xinghui; Wang, Lidai; Zhou, Qian; Wang, Xiaohao
2016-11-01
In this paper, an underwater ranging system based on the photoacoustic effect occurring on a target surface is proposed. A laser pulse generated by a blue-green laser is directly incident on the target surface, where the photoacoustic effect occurs and a sound source is formed. The resulting sound wave, also called the photoacoustic signal, is received by an ultrasonic receiver after passing through the water. From the time delay between transmitting the laser pulse and receiving the photoacoustic signal, together with the sound velocity in water, the distance between the target and the ultrasonic receiver can be calculated. Unlike underwater range finding using only a laser, this approach avoids backscattering of the laser beam and is therefore easier to implement. An experimental system based on this principle was constructed to verify the feasibility of the technology. The experimental results showed that a ranging accuracy of 1 mm can be achieved when the target is close to the ultrasonic receiver.
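The range computation itself is one multiplication: the optical leg is effectively instantaneous, so the measured delay is the one-way acoustic travel time. A sketch; the sound speed is a nominal assumed value, not the paper's calibration:

```python
def range_from_delay(delay_s, c_water=1480.0):
    """Target range from the laser-to-photoacoustic-signal delay. The
    optical leg is negligible, so the delay is the acoustic travel time."""
    return delay_s * c_water

d = range_from_delay(2.027e-3)   # about 3 m at a nominal 1480 m/s
```

Millimeter accuracy at short range then hinges on timing resolution: at 1480 m/s, 1 mm corresponds to roughly 0.68 μs of delay.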
Eric Lupo, J; Koka, Kanthaiah; Thornton, Jennifer L; Tollin, Daniel J
2011-02-01
Conductive hearing loss (CHL) is known to produce hearing deficits, including deficits in sound localization ability. The differences in sound intensities and timing experienced between the two tympanic membranes are important cues to sound localization (ILD and ITD, respectively). Although much is known about the effect of CHL on hearing levels, little investigation has been conducted into the actual impact of CHL on sound location cues. This study investigated effects of CHL induced by earplugs on cochlear microphonic (CM) amplitude and timing and their corresponding effect on the ILD and ITD location cues. Acoustic and CM measurements were made in 5 chinchillas before and after earplug insertion, and again after earplug removal using pure tones (500 Hz to 24 kHz). ILDs in the unoccluded condition demonstrated position and frequency dependence where peak far-lateral ILDs approached 30 dB for high frequencies. Unoccluded ear ITD cues demonstrated positional and frequency dependence with increased ITD cue for both decreasing frequency (±420 μs at 500 Hz, ±310 μs for 1-4 kHz) and increasingly lateral sound source locations. Occlusion of the ear canal with foam plugs resulted in a mild, frequency-dependent conductive hearing loss of 10-38 dB (mean 31 ± 3.9 dB) leading to a concomitant frequency dependent increase in ILDs at all source locations. The effective ITDs increased in a frequency dependent manner with ear occlusion as a direct result of the acoustic properties of the plugging material, the latter confirmed via acoustical measurements using a model ear canal with varying volumes of acoustic foam. Upon ear plugging with acoustic foam, a mild CHL is induced. Furthermore, the CHL induced by acoustic foam results in substantial changes in the magnitudes of both the ITD and ILD cues to sound location. Copyright © 2010 Elsevier B.V. All rights reserved.
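The two binaural cues measured in the study can each be estimated from left/right signals: ILD as an RMS level ratio in dB, and ITD as the lag of the peak interaural cross-correlation. A sketch on synthetic noise; the delay, attenuation, and signal length are made-up values, not the chinchilla recordings:

```python
import math
import random

def ild_db(left, right):
    """Interaural level difference: RMS ratio in dB, left re right."""
    rms = lambda s: math.sqrt(sum(x * x for x in s) / len(s))
    return 20.0 * math.log10(rms(left) / rms(right))

def itd_samples(left, right, max_lag):
    """Interaural time difference as the lag (in samples) maximizing the
    cross-correlation; negative means the right ear lags the left."""
    best_c, best_lag = -float("inf"), 0
    for lag in range(-max_lag, max_lag + 1):
        c = sum(left[n] * right[n - lag]
                for n in range(max(0, lag), min(len(left), len(right) + lag)))
        if c > best_c:
            best_c, best_lag = c, lag
    return best_lag

# toy: the right-ear signal is a 3-sample-delayed, half-amplitude copy
random.seed(1)
left = [random.gauss(0.0, 1.0) for _ in range(200)]
right = [0.5 * left[n - 3] if n >= 3 else 0.0 for n in range(200)]
ild = ild_db(left, right)            # about +6 dB (factor of 2 in amplitude)
itd = itd_samples(left, right, 10)   # -3 samples
```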
Intensity-invariant coding in the auditory system.
Barbour, Dennis L
2011-11-01
The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Crone, T. J.; Tolstoy, M.; Carton, H. D.
2013-12-01
In the summer of 2012, two multi-channel seismic (MCS) experiments, Cascadia Open-Access Seismic Transects (COAST) and Ridge2Trench, were conducted in the offshore Cascadia region. An area of growing environmental concern with active source seismic experiments is the potential impact of the received sound on marine mammals, but data relating to this issue is limited. For these surveys sound level 'mitigation radii' are established for the protection of marine mammals, based on direct arrival modeling and previous calibration experiments. Propagation of sound from seismic arrays can be accurately modeled in deep-water environments, but in shallow and sloped environments the complexity of local geology and bathymetry can make it difficult to predict sound levels as a function of distance from the source array. One potential solution to this problem is to measure the received levels in real-time using the ship's streamer (Diebold et al., 2010), which would allow the dynamic determination of suitable mitigation radii. We analyzed R/V Langseth streamer data collected on the shelf and slope off the Washington coast during the COAST experiment to measure received levels in situ up to 8 km away from the ship. Our analysis shows that water depth and bathymetric features can affect received levels in shallow water environments. The establishment of dynamic mitigation radii based on local conditions may help maximize the safety of marine mammals while also maximizing the ability of scientists to conduct seismic research. With increasing scientific and societal focus on subduction zone environments, a better understanding of shallow water sound propagation is essential for allowing seismic exploration of these hazardous environments to continue. Diebold, J. M., M. Tolstoy, L. Doermann, S. Nooner, S. Webb, and T. J. Crone (2010) R/V Marcus G. Langseth Seismic Source: Modeling and Calibration. Geochemistry, Geophysics, Geosystems, 11, Q12012, doi:10.1029/2010GC003216.
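For context, the static mitigation radii that the abstract argues against are typically derived by inverting a spreading law: solve RL = SL − TL(r) for the range at which the received level drops to a threshold. A sketch with spherical spreading and hypothetical levels; real surveys use calibrated propagation models or, as proposed here, in-situ streamer measurements:

```python
def mitigation_radius_m(source_level_db, threshold_db):
    """Range at which the received level falls to threshold_db under
    spherical spreading: RL = SL - 20 log10(r), solved for r."""
    return 10.0 ** ((source_level_db - threshold_db) / 20.0)

# hypothetical: 230 dB re 1 uPa @ 1 m source, 160 dB exposure threshold
r = mitigation_radius_m(230.0, 160.0)   # about 3.2 km
```

The abstract's point is that in shallow, sloped environments the true TL(r) deviates strongly from any such closed form, which is why measured received levels support better radii than this formula can.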
Tongue-driven sonar beam steering by a lingual-echolocating fruit bat.
Lee, Wu-Jung; Falk, Benjamin; Chiu, Chen; Krishnan, Anand; Arbour, Jessica H; Moss, Cynthia F
2017-12-01
Animals enhance sensory acquisition from a specific direction by movements of head, ears, or eyes. As active sensing animals, echolocating bats also aim their directional sonar beam to selectively "illuminate" a confined volume of space, facilitating efficient information processing by reducing echo interference and clutter. Such sonar beam control is generally achieved by head movements or shape changes of the sound-emitting mouth or nose. However, lingual-echolocating Egyptian fruit bats, Rousettus aegyptiacus, which produce sound by clicking their tongue, can dramatically change beam direction at very short temporal intervals without visible morphological changes. The mechanism supporting this capability has remained a mystery. Here, we measured signals from free-flying Egyptian fruit bats and discovered a systematic angular sweep of beam focus across increasing frequency. This unusual signal structure has not been observed in other animals and cannot be explained by the conventional and widely-used "piston model" that describes the emission pattern of other bat species. Through modeling, we show that the observed beam features can be captured by an array of tongue-driven sound sources located along the side of the mouth, and that the sonar beam direction can be steered parsimoniously by inducing changes to the pattern of phase differences through moving tongue location. The effects are broadly similar to those found in a phased array-an engineering design widely found in human-made sonar systems that enables beam direction changes without changes in the physical transducer assembly. Our study reveals an intriguing parallel between biology and human engineering in solving problems in fundamentally similar ways.
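The phased-array analogy can be made concrete with the standard uniform-line array factor: fixed sources, steered purely by inter-source phase differences. This is generic array math with made-up element count, spacing, and frequency, not the paper's fitted tongue-driven source distribution:

```python
import cmath
import math

def array_factor_db(n_src, spacing_m, freq_hz, steer_deg, look_deg, c=343.0):
    """Normalized far-field response of n_src in-line point sources phased
    so the main lobe points at steer_deg; evaluated at look_deg."""
    k = 2.0 * math.pi * freq_hz / c
    phase_step = k * spacing_m * (math.sin(math.radians(look_deg)) -
                                  math.sin(math.radians(steer_deg)))
    af = sum(cmath.exp(1j * m * phase_step) for m in range(n_src))
    return 20.0 * math.log10(abs(af) / n_src + 1e-12)

# 8 sources, 4 mm apart, 30 kHz: the main lobe sits where we steer it
on_axis = array_factor_db(8, 0.004, 30000.0, 20.0, 20.0)    # ~0 dB
off_axis = array_factor_db(8, 0.004, 30000.0, 20.0, 60.0)   # well below
```

Changing `steer_deg` moves the beam with no change to the element positions, which is the property the bat appears to exploit by shifting tongue-driven phase relationships rather than reshaping its mouth.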
Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution
Park, Yeonseok; Choi, Anthony
2017-01-01
An asymmetric structure around the receiver imposes a direction-dependent time delay on the incoming propagation. This paper designs a monaural sound localization system based on a reflective structure around the microphone. The reflective plates are placed to produce a direction-wise time delay, which is naturally processed by convolution with the sound source. The received signal is analyzed to estimate the dominant time delay using homomorphic deconvolution, which applies the real cepstrum and inverse cepstrum sequentially to derive the autocorrelation of the propagation response. Once the localization system accurately estimates this information, the time-delay model computes the corresponding reflection for localization. Because of structural limitations, the localization process performs the estimation in two stages: range and angle. A software toolchain spanning propagation physics and algorithm simulation yields the optimal 3D-printed structure. Acoustic experiments in an anechoic chamber indicate that 79.0% of the study-range data from the isotropic signal is correctly detected by the response value, and 87.5% of the specific-direction data from the study-range signal is correctly estimated by the response time. The product of both rates gives an overall hit rate of 69.1%. PMID:28946625
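The cepstral mechanism behind homomorphic deconvolution can be seen in a minimal numeric sketch: a reflection adds a delayed, attenuated copy of the source, and the real cepstrum of the received signal then shows a peak at that delay. The delay of 20 samples and the 0.6 reflection gain below are hypothetical, and the impulse source keeps the example exact; this is an illustration of the cepstral idea, not the paper's two-stage range/angle procedure.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(x)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

N, delay, gain = 256, 20, 0.6

# Direct path plus one reflection from the plate (hypothetical values).
received = np.zeros(N)
received[0] = 1.0        # direct sound
received[delay] = gain   # delayed, attenuated reflected copy

cep = real_cepstrum(received)

# The dominant cepstral peak (excluding lag 0) sits at the reflection delay.
estimated_delay = np.argmax(cep[1 : N // 2]) + 1
print(estimated_delay)  # 20
```

The peak height at the delay is gain / 2 (here 0.3), with smaller alternating peaks at multiples of the delay, which is what makes the dominant-delay estimate robust.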
A Zonal Approach for Prediction of Jet Noise
NASA Technical Reports Server (NTRS)
Shih, S. H.; Hixon, D. R.; Mankbadi, Reda R.
1995-01-01
A zonal approach for direct computation of sound generation and propagation from a supersonic jet is investigated. The present work splits the computational domain into a nonlinear, acoustic-source regime and a linear acoustic wave propagation regime. In the nonlinear regime, the unsteady flow is governed by the large-scale equations, which are the filtered compressible Navier-Stokes equations. In the linear acoustic regime, the sound wave propagation is described by the linearized Euler equations. Computational results are presented for a supersonic jet at M = 2.1. It is demonstrated that no spurious modes are generated in the matching region and that the computational expense is reduced substantially compared with a full large-scale simulation.
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
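The abstract's key assumption, that some units carry the difference between the observed signal and an internal estimate, can be sketched numerically as a dictionary-based error signal e = y - D a that is fed back to update the estimate. The dictionary, mixture, and plain gradient update below are illustrative stand-ins, not the authors' thalamocortical circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sources = 32, 8

D = rng.standard_normal((n_channels, n_sources))  # dictionary of source spectra
truth = np.zeros(n_sources)
truth[[1, 4]] = [1.0, 0.5]                        # two concurrently active sources
y = D @ truth                                     # observed superposition

a = np.zeros(n_sources)                           # internal estimate coefficients
step = 1.0 / np.linalg.norm(D, 2) ** 2            # stable gradient step size

for _ in range(5000):
    error = y - D @ a          # "error units": observed minus predicted
    a += step * (D.T @ error)  # estimate units integrate the fed-back error

residual = np.linalg.norm(y - D @ a)
print(f"relative residual: {residual / np.linalg.norm(y):.1e}")
```

As the internal estimate converges, the error-unit activity decays toward zero, which is the signature the reanalysis of awake cortical recordings looked for.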
Quantitative measurement of pass-by noise radiated by vehicles running at high speeds
NASA Astrophysics Data System (ADS)
Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin
2011-03-01
It has been a challenge to accurately locate and quantify the pass-by noise radiated by running vehicles. A system composed of a microphone array is developed in the current work for this purpose. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency, quantitatively measures the sound pressure at the sound source, and identifies the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise, and wind noise of the vehicle running at different speeds are successfully identified by this method.
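The frequency shift that any moving-source method must compensate is the classic Doppler relation f_obs = f_src / (1 - (v/c) cos θ), where θ is the angle between the motion and the source-to-microphone direction. The 1 kHz tone and 34.3 m/s vehicle speed below are hypothetical examples, not the paper's test conditions, and this is the textbook relation rather than the paper's time-domain holography algorithm.

```python
import math

def doppler_observed(f_src, v, theta_deg, c=343.0):
    """Observed frequency for a moving source and a stationary microphone."""
    return f_src / (1.0 - (v / c) * math.cos(math.radians(theta_deg)))

# 1 kHz tone from a vehicle at 34.3 m/s (about 123 km/h), v/c = 0.1:
print(round(doppler_observed(1000.0, 34.3, 0.0), 1))    # approaching head-on: 1111.1
print(round(doppler_observed(1000.0, 34.3, 180.0), 1))  # receding: 909.1
```

Because θ sweeps from near 0° to near 180° as the vehicle passes the array, the received frequency glides downward through the pass-by, which is the effect the time-domain method removes before reconstructing the source pressure.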
Research on characteristics of radiated noise of large cargo ship in shallow water
NASA Astrophysics Data System (ADS)
Liu, Yongdong; Zhang, Liang
2017-01-01
With the rapid development of the shipping industry, the number of ships worldwide is gradually increasing, and the characteristics of ship-radiated noise are of growing concern. Owing to multipath interference, surface waves, the sea-temperature microstructure, and other factors, the received sound signal varies in the time-frequency domain. The radiated noise of the large cargo ship JOCHOH, recorded by a horizontal hydrophone array in a shallow-water area of China in the summer of 2015, is processed and analyzed. The results show that the ship has a number of noise sources along the bow-stern line, such as the main engine, auxiliary engines, and propellers. At lower frequencies, the sound waves generated by these sources do not follow the spherical-wave law in the ocean, and the radiated noise has an inherent spatial distribution. The variation characteristics of the cargo ship's radiated noise in the time and frequency domains are given. The research method and results are of practical relevance for characterizing ship noise.
Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.
Kidd, Gerald
2017-10-17
Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described, and its conceptual design, current implementation, and results obtained to date are reviewed and discussed. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing, especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations.
Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621. PMID:29049603
Study on sound-speed dispersion in a sandy sediment at frequency ranges of 0.5-3 kHz and 90-170 kHz
NASA Astrophysics Data System (ADS)
Yu, Sheng-qi; Liu, Bao-hua; Yu, Kai-ben; Kan, Guang-ming; Yang, Zhi-guo
2017-03-01
In order to study the properties of sound-speed dispersion in a sandy sediment, the sound speed was measured both at high frequency (90-170 kHz) and low frequency (0.5-3 kHz) in laboratory environments. At high frequency, a sampling measurement was conducted with boiled and uncooked sand samples collected from the bottom of a large water tank. The sound speed was directly obtained through transmission measurement using a single source and a single hydrophone. At low frequency, an in situ measurement was conducted in the water tank, where the sandy sediment had been homogeneously paved at the bottom for a long time. The sound speed was indirectly inverted from the travel times of signals received by three hydrophones buried in the sandy sediment and the experimental geometry. The results show that at high frequency the mean sound speed is approximately 1710-1713 m/s with a weak positive gradient in the boiled sand sample (boiling serving to eliminate bubbles as far as possible), which agrees well with the predictions of Biot theory, the effective density fluid model (EDFM), and Buckingham's theory. However, the sound speed in the uncooked sandy sediment decreases markedly (about 80%) at both high and low frequencies owing to the presence of abundant bubbles, and the sound-speed dispersion exhibits a weak negative gradient at high frequency. Finally, a water-unsaturated Biot model is presented to explain the decrease of sound speed in a sandy sediment containing abundant bubbles.
Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.
Gauthier, P-A; Lecomte, P; Berry, A
2017-04-01
Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
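The sparsity mechanism compared above can be sketched with a proximal-gradient (ISTA-style) solver: setting the quadratic penalty lam2 to zero gives the lasso, and lam2 > 0 gives the elastic-net. The random array geometry and target field below are stand-ins for a loudspeaker-array problem, not the paper's setup or solver.

```python
import numpy as np

def ista(A, y, lam1, lam2=0.0, n_iter=3000):
    """Minimize 0.5*||A x - y||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2."""
    L = np.linalg.norm(A, 2) ** 2 + lam2    # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam2 * x
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))  # 50 candidate sources, 20 field sample points
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.0, -0.8, 0.6]
y = A @ x_true                     # target sound field samples

x_lasso = ista(A, y, lam1=0.1)
x_enet = ista(A, y, lam1=0.1, lam2=0.5)

# Both penalties activate far fewer than the 50 available sources.
print(int(np.sum(np.abs(x_lasso) > 1e-2)), int(np.sum(np.abs(x_enet) > 1e-2)))
```

The soft-threshold step is what produces exact zeros, i.e., deactivated reproduction sources; the added quadratic term in the elastic-net is what makes the solution unique and encourages correlated sources to activate together as clusters.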
Effects of sound source location and direction on acoustic parameters in Japanese churches.
Soeta, Yoshiharu; Ito, Ken; Shimokura, Ryota; Sato, Shin-ichi; Ohsawa, Tomohiro; Ando, Yoichi
2012-02-01
In 1965, the Catholic Church liturgy changed to allow priests to face the congregation. Whereas Church tradition, teaching, and participation have been much discussed with respect to priest orientation at Mass, the acoustical changes in this regard have not yet been examined scientifically. To discuss the acoustics desired within churches, it is necessary to know the acoustical characteristics appropriate for each phase of the liturgy. In this study, acoustic measurements were taken at various source locations and directions using both old and new liturgies performed in Japanese churches. A directional loudspeaker was used as the source to provide vocal and organ acoustic fields, and impulse responses were measured. Various acoustical parameters, such as reverberation time and early decay time, were analyzed. The speech transmission index was higher for the new Catholic liturgy, suggesting that the change in liturgy has improved speech intelligibility. Moreover, the interaural cross-correlation coefficient and early lateral energy fraction were higher and lower, respectively, suggesting that the change in liturgy has made the apparent source width smaller. © 2012 Acoustical Society of America
Data Acquisition and Analyses of Magnetotelluric Sounding in Lujiang-Zongyang Ore Concentrated Area
NASA Astrophysics Data System (ADS)
Tang, J.; Xiao, X.; Zhou, C.; Lu, Q.
2010-12-01
It is challenging to perform MT data acquisition and processing in the Lujiang-Zongyang ore concentrated area, where severe and complicated noise is mixed with the useful data. Dense population, well-developed water systems, transport networks, communication and power grids, and operating mines are the main sources of this noise. However, conducting MT sounding in this area is not only helpful for studying the geological structure and tectonics of the zone, but also yields valuable experience in analyzing and processing data from real field work under heavy interference. We completed acquisition along 5 survey lines with 500 sounding stations in total. To verify the consistency of the 6 V5-2000 data acquisition systems employed in our study, a consistency experiment was conducted in a test area with weak interference. Curves of apparent resistivity and phase obtained from these 6 instruments are plotted in Fig. 1 and show acceptable consistency, except at a few high-noise frequencies. To determine the optimal recording duration for data acquisition in this noise-heavy survey area, a comparison experiment was carried out at a single sounding station, comparing data quality across different recording durations; we found that 20 hours or more were required for each acquisition. The evaluation was based on the degree of coherence and the signal-to-noise ratio. By analyzing the MT data in both the time and frequency domains, the noise was categorized into several patterns according to the characteristics of the various noise sources, and corresponding filters were adopted. After flying-spot removal, cubic-spline smoothing, and spatial filtering of all sounding curves, apparent resistivity profiles were obtained. Further studies, including 2D and 3D inversion, are in progress.
Fig. 1. Consistency experiment of the instruments: (a) and (b) show apparent resistivity curves in the yx and xy directions; (c) and (d) show phase curves in the yx and xy directions; J1-J6 mark the 6 instruments.
Sound quality indicators for urban places in Paris cross-validated by Milan data.
Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre
2015-10-01
A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources, and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross validation of the quality models extracted from Paris data was carried out by conducting the same survey in Milan. The proposed sound quality general model is correlated with the real perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed by Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.
NASA Astrophysics Data System (ADS)
Holt, Marla M.; Insley, Stephen J.; Southall, Brandon L.; Schusterman, Ronald J.
2005-09-01
While attempting to gain access to receptive females, male northern elephant seals form dominance hierarchies through multiple dyadic interactions involving visual and acoustic signals. These signals are both highly stereotyped and directional. Previous behavioral observations suggested that males attend to the directional cues of these signals. We used in situ vocal playbacks to test whether males attend to directional cues of the acoustic components of a competitor's calls (i.e., variation in call spectra and source levels). Here, we will focus on playback methodology. Playback calls were multiple exemplars of a marked dominant male from an isolated area, recorded with a directional microphone and DAT recorder and edited into a natural sequence that controlled call amplitude. Control calls were recordings of ambient rookery sounds with the male calls removed. Subjects were 20 marked males (10 adults and 10 subadults), all located at Año Nuevo, CA. Playback presentations, calibrated for sound-pressure level, were broadcast at a distance of 7 m from each subject. Most responses were classified into the following categories: visual orientation, postural change, calling, movement toward or away from the loudspeaker, and re-directed aggression. We also investigated developmental, hierarchical, and ambient noise variables that were thought to influence male behavior.
Acoustic-tactile rendering of visual information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.
2012-03-01
In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.
Localization of virtual sound at 4 Gz.
Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L
2005-02-01
Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.
Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Zhou, Ye
1999-01-01
Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the foreseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.
New insights into insect's silent flight. Part II: sound source and noise control
NASA Astrophysics Data System (ADS)
Xue, Qian; Geng, Biao; Zheng, Xudong; Liu, Geng; Dong, Haibo
2016-11-01
The flapping flight of aerial animals has excellent aerodynamic performance but meanwhile generates low noise. In this study, the unsteady flow and acoustic characteristics of the flapping wing are numerically investigated for three-dimensional (3D) models of the cicada Tibicen linnei in free forward flight. A single cicada wing is modeled as a membrane with prescribed motion reconstructed from Wan et al. (2015). The flow field and acoustic field around the flapping wing are solved with an immersed-boundary-method based incompressible flow solver and a linearized-perturbed-compressible-equations based acoustic solver. The 3D simulation allows examination of both the directivity and the frequency composition of the produced sound over the full space. The mechanism of sound generation by the flapping wing is analyzed through correlations between acoustic signals and flow features. Along with a flexible wing model, a rigid wing model is also simulated, and the two cases are compared to investigate the effects of wing flexibility on sound generation. This study is supported by NSF CBET-1313217 and AFOSR FA9550-12-1-0071.
NASA Astrophysics Data System (ADS)
Aaronson, Neil L.
This dissertation deals with questions important to the problem of human sound source localization in rooms, starting with perceptual studies and moving on to physical measurements made in rooms. In Chapter 1, a perceptual study is performed relevant to a specific phenomenon: the effect of speech reflections occurring in the front-back dimension and the ability of humans to segregate such reflections from unreflected speech. Distracters were presented from the same source as the target speech, a loudspeaker directly in front of the listener, and also from a loudspeaker directly behind the listener, delayed relative to the front loudspeaker. Steps were taken to minimize the contributions of binaural difference cues. For all delays within +/-32 ms, a release from informational masking of about 2 dB occurred. This suggested that human listeners are able to segregate speech sources based on spatial cues, even with minimal binaural cues. In moving on to physical measurements in rooms, a method was sought for simultaneous measurement of room characteristics such as impulse response (IR) and reverberation time (RT60), and binaural parameters such as interaural time difference (ITD), interaural level difference (ILD), and the interaural cross-correlation function and coherence. Chapter 2 involves investigations into the usefulness of maximum length sequences (MLS) for these purposes. Comparisons to random telegraph noise (RTN) show that MLS performs better in the measurement of stationary and room transfer functions, IR, and RT60 by an order of magnitude in RMS percent error, even after Wiener filtering and exponential time-domain filtering have improved the accuracy of RTN measurements. Measurements were taken in real rooms in an effort to understand how the reverberant characteristics of rooms affect binaural parameters important to sound source localization. Chapter 3 deals with interaural coherence, a parameter important for localization and perception of auditory source width.
MLS were used to measure waveform and envelope coherences in two rooms for various source distances and 0° azimuth through a head-and-torso simulator (KEMAR). A relationship is sought that relates these two types of coherence, since envelope coherence, while an important quantity, is generally less accessible than waveform coherence. A power law relationship is shown to exist between the two that works well within and across bands, for any source distance, and is robust to reverberant conditions of the room. Measurements of ITD, ILD, and coherence in rooms give insight into the way rooms affect these parameters, and in turn, the ability of listeners to localize sounds in rooms. Such measurements, along with room properties, are made and analyzed using MLS methods in Chapter 4. It was found that the pinnae cause incoherence for sound sources incident between 30° and 90°. In human listeners, this does not seem to adversely affect performance in lateralization experiments. The cause of poor coherence in rooms was studied as part of Chapter 4 as well. It was found that rooms affect coherence by introducing variance into the ITD spectra within the bands in which it is measured. A mathematical model to predict the interaural coherence within a band given the standard deviation of the ITD spectrum and the center frequency of the band gives an exponential relationship. This is found to work well in predicting measured coherence given ITD spectrum variance. The pinnae seem to affect the ITD spectrum in a similar way at incident sound angles for which coherence is poor in an anechoic environment.
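A maximum length sequence of the kind used for the impulse-response and coherence measurements above can be generated with a linear feedback shift register. The sketch below is a generic textbook construction, not the dissertation's measurement code; the 5-bit register (period 2**5 - 1 = 31) and tap choice follow the primitive polynomial x^5 + x^3 + 1.

```python
def mls(n_bits=5, taps=(5, 3)):
    """Generate one period of a +/-1 valued maximum length sequence."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(1 if state[-1] == 0 else -1)  # map register bits to +/-1
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]             # XOR the tapped stages
        state = [feedback] + state[:-1]          # shift the register
    return seq

s = mls()
N = len(s)

# Defining MLS property: the circular autocorrelation is N at lag 0 and
# exactly -1 at every other lag, which is what yields clean IR estimates.
corr = [sum(s[i] * s[(i + k) % N] for i in range(N)) for k in range(N)]
print(corr[0], set(corr[1:]))  # 31 {-1}
```

In practice much longer registers (e.g., 16-18 bits) are used so that one period exceeds the room's reverberation time; the near-ideal autocorrelation is the reason MLS outperforms random telegraph noise in the comparisons reported above.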
An Acoustic Source Reactive to Tow Cable Strum
2012-09-21
[Patent drawing-sheet residue; recoverable content: the sound wave radiates from the head mass, and Fig. 1 (prior art) contrasts an idealized tow cable with no transverse vibration against a realistic tow cable that includes transverse vibration, with cable curvature inducing longitudinal motion in the direction of tow.]
Development of an ICT-Based Air Column Resonance Learning Media
NASA Astrophysics Data System (ADS)
Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut
2016-08-01
Commonly, the sound source used in the air-column resonance experiment is a tuning fork, which has the disadvantage that the sound it produces grows steadily weaker, giving suboptimal resonance results. In this study we generated tones of varying frequency with the Audacity software and stored them on a mobile phone to serve as the sound source. One advantage of this source is the stability of the resulting sound, which retains the same strength throughout. The movement of water in a glass tube mounted on the resonance apparatus and the tone emitted by the mobile phone were recorded with a video camera. The first, second, and third resonances were recorded for each tone frequency. Because the sound persists, it can be used for the first, second, third, and subsequent resonance experiments. This study aimed to (1) explain how to create tones that can substitute for the tuning-fork sound used in air-column resonance experiments, (2) illustrate the sound wave produced at the first, second, and third resonances in the experiment, and (3) determine the speed of sound in air. The study used an experimental method. It was concluded that (1) substitute tones for a tuning fork can be made using the Audacity software; (2) the form of the sound waves at the first, second, and third resonances in the air column can be drawn from the video recordings of the air-column resonance; and (3) based on the experimental results, the speed of sound in air is 346.5 m/s, while chart analysis with the Logger Pro software gives 343.9 ± 0.3171 m/s.
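The speed-of-sound determination in aim (3) reduces to the standing-wave condition of an air column closed at the water surface: successive resonance lengths are half a wavelength apart, so v = 2 f (L2 - L1). The 500 Hz tone and the resonance lengths below are hypothetical values chosen to reproduce a speed near the paper's measured 346.5 m/s; they are not the study's recorded data.

```python
def speed_of_sound(freq_hz, first_length_m, second_length_m):
    """Speed of sound from two successive resonance lengths of an air column."""
    half_wavelength = second_length_m - first_length_m  # L2 - L1 = lambda / 2
    return 2.0 * freq_hz * half_wavelength

# Hypothetical readings: first resonance at 16.8 cm, second at 51.45 cm, 500 Hz tone.
v = speed_of_sound(500.0, 0.168, 0.5145)
print(round(v, 1))  # 346.5
```

Using the difference of two resonance lengths, rather than the first length alone, cancels the end correction of the tube, which is why this form is preferred in the analysis.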
Directionally Antagonistic Graphene Oxide-Polyurethane Hybrid Aerogel as a Sound Absorber.
Oh, Jung-Hwan; Kim, Jieun; Lee, Hyeongrae; Kang, Yeonjune; Oh, Il-Kwon
2018-06-21
Innovative sound absorbers designed around carbon nanotubes and graphene derivatives could yield more efficient sound-absorbing materials because of the excellent intrinsic mechanical and chemical properties of these materials. However, controlling the directional alignment of low-dimensional carbon nanomaterials (restacking, alignment, and dispersion) has been a challenging problem in developing sound-absorbing foams. Herein, we present a directionally antagonistic graphene oxide-polyurethane hybrid aerogel sound absorber whose physical properties differ according to the alignment of the microscopic graphene oxide sheets. This porous graphene sound absorber has a microporous hierarchical cellular structure with adjustable stiffness and improved sound absorption performance, thereby overcoming both geometric and function-oriented restrictions. Furthermore, by controlling the inner cell size and the aligned structure of the graphene oxide layers in this study, we achieved a remarkable improvement in sound absorption performance at low frequencies. This improvement is attributed to multiple scattering of incident and reflected waves on the aligned porous surfaces, and to air-viscous resistance damping inside the interconnected structures between the urethane foam and the graphene oxide network. Two anisotropic sound absorbers based on the directionally antagonistic graphene oxide-polyurethane hybrid aerogels were fabricated. They show remarkable differences owing to the opposite alignment of the graphene oxide layers inside the polyurethane foam, and they are expected to be appropriate for the engineering design of sound absorbers that account for wave direction.
Assessment of noise exposure for basketball sports referees.
Masullo, Massimiliano; Lenzuni, Paolo; Maffei, Luigi; Nataletti, Pietro; Ciaburro, Giuseppe; Annesi, Diego; Moschetto, Antonio
2016-01-01
Dosimetric measurements carried out on basketball referees have shown that whistles not only generate very high peak sound pressure levels, but also play a relevant role in determining the overall noise exposure of the exposed subjects. Because of the peculiar geometry determined by the mutual positions of the whistle, the microphone, and the ear, experimental data cannot be directly compared with existing occupational noise exposure and/or action limits. In this article, an original methodology that allows experimental results to be reliably compared with the aforementioned limits is presented. The methodology is based on the use of two correction factors to compensate for the effects of the position of the dosimeter microphone (fR) and of the sound source (fS). Correction factors were calculated by means of laboratory measurements for two models of whistles (Fox 40 Classic and Fox 40 Sonik) and for two head orientations (frontal and oblique). Results show that for peak sound pressure levels the values of fR and fS are in the range -8.3 to -4.6 dB and -6.0 to -1.7 dB, respectively. If one considers the Sound Exposure Levels (SEL) of whistle events, the same correction factors are in the range of -8.9 to -5.3 dB and -5.4 to -1.5 dB, respectively. The application of these correction factors shows that the corrected weekly noise exposure level for referees is 80.6 dB(A), which is slightly in excess of the lower action limit of the 2003/10/EC directive, and a few dB below the Recommended Exposure Limit (REL) proposed by the National Institute for Occupational Safety and Health (NIOSH). The corrected largest peak sound pressure level is 134.7 dB(C), which is comparable to the lower action limit of the 2003/10/EC directive, but again substantially lower than the ceiling limit of 140 dB(A) set by NIOSH.
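The correction procedure described above amounts to adding the two factors to the raw dosimeter reading. A minimal sketch; the raw level below is a hypothetical value, not one reported in the study:

```python
# Applying the microphone-position (fR) and source-position (fS) correction
# factors to a raw dosimeter level. The correction values used are within the
# ranges reported above; the raw level itself is hypothetical.

def corrected_level(raw_db, f_r, f_s):
    """Corrected level = raw dosimeter level plus both corrections (dB)."""
    return raw_db + f_r + f_s

# Hypothetical raw peak level with the largest-magnitude peak corrections.
print(round(corrected_level(147.6, -8.3, -4.6), 1))
```

Because both factors are negative, the corrected levels are always below the raw dosimeter readings, which is why the raw data overstate the exposure at the ear.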
How the owl tracks its prey – II
Takahashi, Terry T.
2010-01-01
Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819
Design of laser monitoring and sound localization system
NASA Astrophysics Data System (ADS)
Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang
2013-08-01
In this paper, a novel design for a laser monitoring and sound localization system is proposed. It uses a laser to monitor and locate the position of indoor conversation. At present, most laser monitors in China, whether used in the laboratory or in an instrument, use a photodiode or phototransistor as the detector. At the laser receivers of those devices, the light beams are adjusted so that only part of the photodiode or phototransistor window receives the beam. The reflection deviates from its original path because of the vibration of the monitored window, which shifts the imaging spot on the photodiode or phototransistor. However, this method is limited, not only because it admits considerable stray light into the receiver but also because only a single photocurrent output can be obtained. Therefore, a new method based on a quadrant detector is proposed. It uses the relation of the optical integrals among the quadrants to locate the position of the imaging spot. This method can eliminate background disturbance and acquire two-dimensional spot-vibration data directly. The principle of the whole system can be described as follows. Collimated laser beams are reflected from a window vibrating in response to the sound source, so the reflected beams are modulated by the vibration. These optical signals are collected by quadrant detectors, processed by photoelectric converters and the corresponding circuits, and the speech signals are reconstructed. In addition, sound source localization is implemented by detecting three different reflected light sources simultaneously. Indoor mathematical models based on the principle of Time Difference Of Arrival (TDOA) are established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system can monitor an indoor sound source beyond 15 meters with high-quality speech reconstruction and locate the sound source position accurately.
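The TDOA localization step described above can be sketched as follows, assuming a hypothetical 2-D layout of three detection points and a simple grid search (the paper's actual indoor models are not specified here):

```python
# Minimal 2-D TDOA localization sketch: each detector yields an arrival time,
# and the source position is the grid point whose predicted time differences
# (relative to a reference sensor) best match the measured ones.
# Sensor layout and source position are hypothetical.

import math

C = 343.0  # speed of sound in air, m/s

def tdoa_locate(sensors, tdoas, ref=0, step=0.05, extent=10.0):
    """Grid search for the (x, y) minimizing TDOA mismatch w.r.t. sensor `ref`."""
    def mismatch(x, y):
        d_ref = math.hypot(x - sensors[ref][0], y - sensors[ref][1])
        err = 0.0
        for (sx, sy), t in zip(sensors, tdoas):
            d = math.hypot(x - sx, y - sy)
            err += ((d - d_ref) / C - t) ** 2
        return err
    best = None
    n = int(extent / step)
    for i in range(n + 1):
        for j in range(n + 1):
            x, y = i * step, j * step
            e = mismatch(x, y)
            if best is None or e < best[0]:
                best = (e, x, y)
    return best[1], best[2]

sensors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_src = (2.5, 3.0)
d_ref = math.hypot(true_src[0], true_src[1])
tdoas = [(math.hypot(true_src[0] - sx, true_src[1] - sy) - d_ref) / C
         for sx, sy in sensors]
x, y = tdoa_locate(sensors, tdoas)
print(round(x, 2), round(y, 2))
```

In practice a closed-form or gradient solver replaces the grid search; the grid version keeps the geometry of the problem visible.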
Assessment of Hydroacoustic Propagation Using Autonomous Hydrophones in the Scotia Sea
2010-09-01
Award No. DE-AI52-08NA28654, Proposal No. BAA08-36. The remote area of the Atlantic Ocean near the Antarctic Peninsula and the South... hydroacoustic blind spot. To investigate the sound propagation and interferences affected by these landmasses in the vicinity of the Antarctic polar... from large icebergs (near-surface sources) were utilized as natural sound sources. Surface sound sources, e.g., ice-related events, tend to suffer less
The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank
NASA Astrophysics Data System (ADS)
Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing
2018-03-01
In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term based on previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the radiated sound power level of the narrowband spectrum deviation is found to be less than 3 dB, and the 1/3 octave spectrum deviation is found to be less than 1 dB. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also for measurement of radiated sound power from complicated sources in non-anechoic tanks.
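The correction idea described above, i.e. using a known simple source to relate tank measurements to free-field levels, can be sketched per frequency band; all level values below are hypothetical:

```python
# Hedged sketch of the calibration step: a known simple source measured in
# both the free field and the tank yields a per-band correction term, which
# is then subtracted from the tank measurement of an unknown source.
# Band levels below are hypothetical, not data from the study.

def correction_term(tank_known_db, free_known_db):
    """Per-band excess of the enclosed field over the free field (dB)."""
    return [t - f for t, f in zip(tank_known_db, free_known_db)]

def free_field_power(tank_unknown_db, correction_db):
    """Estimated free-field band levels of an unknown source (dB)."""
    return [t - c for t, c in zip(tank_unknown_db, correction_db)]

# Hypothetical 1/3-octave band levels for three bands.
corr = correction_term([112.0, 118.5, 121.0], [100.0, 104.5, 108.0])
print(free_field_power([120.0, 126.5, 130.0], corr))
```

The correction must be derived band by band because the modal behaviour of the tank below the Schroeder frequency is strongly frequency dependent.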
NASA Astrophysics Data System (ADS)
Montazeri, Allahyar; Taylor, C. James
2017-10-01
This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior is solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops analytical results to analyze the effect of the coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how the coupling of loudspeakers through the acoustic modes of the enclosure changes the strength of, and hence the driving voltage signal applied to, the secondary loudspeakers. The practical significance of this model is to provide better insight into the performance of sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones is placed within a fraction of a wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of the different sources affects their radiation impedance, depending on the electromechanical properties of the loudspeakers.
Bio-Inspired Micromechanical Directional Acoustic Sensor
NASA Astrophysics Data System (ADS)
Swan, William; Alves, Fabio; Karunasiri, Gamani
Conventional directional sound sensors employ an array of spatially separated microphones, and the direction is determined from arrival times and amplitudes. In nature, insects such as the Ormia ochracea fly can determine the direction of sound using a hearing organ much smaller than the wavelength of the sound it detects. The fly's eardrums are mechanically coupled, separated by only about 1 mm, and have remarkable directional sensitivity. A micromechanical sensor based on the fly's hearing system was designed and fabricated on a silicon-on-insulator (SOI) substrate using MEMS technology. The sensor consists of two 1 mm² wings connected to each other by a bridge and to the substrate by two torsional legs. The dimensions of the sensor and the material stiffness determine its frequency response. The vibration of the wings in response to incident sound at the bending resonance was measured using a laser vibrometer and found to be about 1 μm/Pa. The electronic response of the sensor to sound was measured using integrated comb-finger capacitors and found to be about 25 V/Pa. The fabricated sensors showed good directional sensitivity. In this talk, the design, fabrication, and characteristics of the directional sound sensor will be described. Supported by ONR and TDSI.
Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.
2014-01-01
This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120
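The dipole-directional response used as a comparison condition above can be formed from two closely spaced microphones by subtraction; a minimal free-field, single-frequency sketch of the resulting first-order directivity (spacing and frequency are illustrative assumptions):

```python
# First-order (dipole) directivity from two closely spaced microphones:
# subtracting the two signals leaves a pattern proportional to cos(theta)
# for small spacing, with a null broadside to the microphone axis.

import cmath
import math

def dipole_gain(theta_deg, d=0.01, freq=1000.0, c=343.0):
    """|p1 - p2| for a unit plane wave arriving from angle theta (0 = on axis)."""
    k = 2 * math.pi * freq / c                        # wavenumber
    phase = k * d * math.cos(math.radians(theta_deg)) # inter-mic phase lag
    return abs(1 - cmath.exp(-1j * phase))

# Strongest response on-axis, a null to the side.
print(round(dipole_gain(0.0), 3), round(dipole_gain(90.0), 3))
```

Adding an internal delay before the subtraction steers the null rearward (cardioid-family patterns), which is the basis of more elaborate adaptive spatial filters.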
Consistent modelling of wind turbine noise propagation from source to receiver.
Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick
2017-11-01
The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
Automatic adventitious respiratory sound analysis: A systematic review
Bowyer, Stuart; Rodriguez-Villegas, Esther
2017-01-01
Background Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison have not been well established. Objective To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future work. Data sources A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were obtained from the references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Study selection Only articles that focused on adventitious sound detection or classification based on respiratory sounds, reported performance, and provided sufficient information for the work to be approximately reproduced were included. Data extraction Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. Data synthesis A total of 77 reports from the literature were included in this review.
55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub and squawk, as well as on pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Limitations Direct comparison of the performance of the surveyed works cannot be made because the input data used by each differed. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. Conclusion A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution for overcoming the limitations of conventional auscultation and for assisting in the monitoring of relevant diseases. PMID:28552969
Sound-direction identification with bilateral cochlear implants.
Neuman, Arlene C; Haravon, Anita; Sislian, Nicole; Waltzman, Susan B
2007-02-01
The purpose of this study was to compare the accuracy of sound-direction identification in the horizontal plane by bilateral cochlear implant users when localization was measured with pink noise and with speech stimuli. Eight adults who were bilateral users of Nucleus 24 Contour devices participated in the study. All had received implants in both ears in a single surgery. Sound-direction identification was measured in a large classroom by using a nine-loudspeaker array. Localization was tested in three listening conditions (bilateral cochlear implants, left cochlear implant, and right cochlear implant), using two different stimuli (a speech stimulus and pink noise bursts) in a repeated-measures design. Sound-direction identification accuracy was significantly better when using two implants than when using a single implant. The mean root-mean-square error was 29 degrees for the bilateral condition, 54 degrees for the left cochlear implant, and 46.5 degrees for the right cochlear implant condition. Unilateral accuracy was similar for right cochlear implant and left cochlear implant performance. Sound-direction identification performance was similar for speech and pink noise stimuli. The data obtained in this study add to the growing body of evidence that sound-direction identification with bilateral cochlear implants is better than with a single implant. The similarity in localization performance obtained with the speech and pink noise supports the use of either stimulus for measuring sound-direction identification.
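The root-mean-square error reported above is computed from per-trial response and target angles; a minimal sketch with hypothetical trial data:

```python
# RMS localization error: the root of the mean squared deviation between
# response angles and target (loudspeaker) angles across trials.
# Trial angles below are hypothetical, not data from the study.

import math

def rms_error(targets_deg, responses_deg):
    """RMS of (response - target) over all trials, in degrees."""
    sq = [(r - t) ** 2 for t, r in zip(targets_deg, responses_deg)]
    return math.sqrt(sum(sq) / len(sq))

targets = [-72, -36, 0, 36, 72]
responses = [-90, -18, 18, 36, 90]
print(round(rms_error(targets, responses), 1))
```

Because errors are squared before averaging, a few large confusions (e.g. left-right reversals) dominate the measure, which is why unilateral listening conditions score so much worse.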
Toward a Nonlinear Acoustic Analogy: Turbulence as a Source of Sound and Nonlinear Propagation
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
An acoustic analogy is proposed that directly includes nonlinear propagation effects. We examine the Lighthill acoustic analogy and replace the Green's function of the wave equation with numerical solutions of the generalized Burgers' equation. This is justified mathematically by using similar arguments that are the basis of the solution of the Lighthill acoustic analogy. This approach is superior to alternatives because propagation is accounted for directly from the source to the far-field observer instead of from an arbitrary intermediate point. Validation of a numerical solver for the generalized Burgers' equation is performed by comparing solutions with the Blackstock bridging function and measurement data. Most importantly, the mathematical relationship between the Navier-Stokes equations, the acoustic analogy that describes the source, and canonical nonlinear propagation equations is shown. Example predictions are presented for nonlinear propagation of jet mixing noise at the sideline angle.
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Sutliff, Daniel L.
2014-01-01
The Rotating Rake mode measurement system was designed to measure acoustic duct modes generated by a fan stage. Initially, the mode amplitudes and phases were quantified from a single rake measurement at one axial location. To directly measure the modes propagating in both directions within a duct, a second rake was mounted to the rotating system with an offset in both the axial and the azimuthal directions. The rotating rake data analysis technique was then extended to include the data measured by the second rake. The analysis resulted in a set of circumferential mode levels at each of the two rake microphone locations. Radial basis functions were then least-squares fit to this data to obtain the radial mode amplitudes for the modes propagating in both directions within the duct. Validation experiments have been conducted using artificial acoustic sources. Results are shown for the measurement of the standing waves in the duct from sound generated by one and two acoustic sources that are separated into the component modes propagating in both directions within the duct. Measured reflection coefficients from the open end of the duct are compared to analytical predictions.
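The least-squares step described above can be sketched with two microphone radii and two basis functions; the basis shapes and measured levels below are hypothetical stand-ins, not the actual duct radial mode shapes:

```python
# Least-squares fit of radial-basis-function amplitudes to mode levels
# measured at two rake radii. With two radii and two basis functions the
# normal equations give an exact solve; more radii would overdetermine it.
# Basis functions and measurements are hypothetical placeholders.

def lstsq_2x2(A, b):
    """Solve the normal equations (A^T A) x = A^T b for a 2-column matrix A."""
    ata = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in (0, 1)]
           for r in (0, 1)]
    atb = [sum(A[i][r] * b[i] for i in range(len(A))) for r in (0, 1)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    return ((ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
            (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det)

basis = [lambda r: 1.0, lambda r: r * r]   # hypothetical radial shapes
radii = [0.3, 0.8]                         # two rake microphone radii (m)
measured = [1.09, 1.64]                    # hypothetical mode levels
A = [[f(r) for f in basis] for r in radii]
amps = lstsq_2x2(A, measured)
print(round(amps[0], 2), round(amps[1], 2))
```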
Callback response of dugongs to conspecific chirp playbacks.
Ichikawa, Kotaro; Akamatsu, Tomonari; Shinke, Tomio; Adulyanukosol, Kanjana; Arai, Nobuaki
2011-06-01
Dugongs (Dugong dugon) produce bird-like calls such as chirps and trills. The vocal responses of dugongs to playbacks of several acoustic stimuli were investigated. Animals were exposed to four different playback stimuli: a recorded chirp from a wild dugong, a synthesized down-sweep sound, a synthesized constant-frequency sound, and silence. Wild dugongs vocalized more frequently after playback of broadcast chirps than after constant-frequency sounds or silence. The down-sweep sound also elicited more vocal responses than did silence. No significant difference was found between the broadcast chirps and the down-sweep sound. The ratio of wild dugong chirps to all calls and the dominant frequencies of the wild dugong calls were significantly higher during playbacks of broadcast chirps, down-sweep sounds, and constant-frequency sounds than during silence. The source level and duration of dugong chirps increased significantly as signaling distance increased. No significant correlation was found between signaling distance and the source level of trills. These results show that dugongs vocalize in response to playbacks of frequency-modulated signals and suggest that the source level of dugong chirps may be manipulated to compensate for transmission loss between the source and receiver. This study provides the first behavioral observations revealing the function of dugong chirps. © 2011 Acoustical Society of America
NASA Technical Reports Server (NTRS)
Embleton, Tony F. W.; Daigle, Gilles A.
1991-01-01
Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
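Two of the mechanisms reviewed above combine additively on a decibel scale: frequency-independent spherical spreading and frequency-dependent molecular absorption. A minimal sketch; the absorption coefficient used is a hypothetical placeholder, not a computed atmospheric value:

```python
# Attenuation relative to the level at a reference distance, combining
# spherical (geometrical) spreading, which has no frequency dependence,
# with molecular absorption, which is strongly frequency dependent.
# alpha_db_per_m is a hypothetical placeholder for an ISO 9613-1-style value.

import math

def attenuation_db(r_m, r_ref_m=1.0, alpha_db_per_m=0.0):
    """Total attenuation (dB) relative to the level at r_ref."""
    spreading = 20.0 * math.log10(r_m / r_ref_m)   # spherical spreading
    absorption = alpha_db_per_m * (r_m - r_ref_m)  # linear in distance
    return spreading + absorption

# Doubling distance costs about 6 dB from spreading alone...
print(round(attenuation_db(2.0), 1))
# ...while a high-frequency component also loses energy to absorption.
print(round(attenuation_db(100.0, alpha_db_per_m=0.1), 1))
```

At long range the absorption term grows linearly with distance while spreading grows only logarithmically, which is why high frequencies disappear first from distant sounds.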
Wensveen, Paul J; von Benda-Beckmann, Alexander M; Ainslie, Michael A; Lam, Frans-Peter A; Kvadsheim, Petter H; Tyack, Peter L; Miller, Patrick J O
2015-05-01
The behaviour of a marine mammal near a noise source can modulate the sound exposure it receives. We demonstrate that two long-finned pilot whales both surfaced in synchrony with consecutive arrivals of multiple sonar pulses. We then assess the effect of surfacing and other behavioural response strategies on the received cumulative sound exposure levels and maximum sound pressure levels (SPLs) by modelling realistic spatiotemporal interactions of a pilot whale with an approaching source. Under the propagation conditions of our model, some response strategies observed in the wild were effective in reducing received levels (e.g. movement perpendicular to the source's line of approach), but others were not (e.g. switching from deep to shallow diving; synchronous surfacing after maximum SPLs). Our study exemplifies how simulations of source-whale interactions guided by detailed observational data can improve our understanding of the motivations behind behavioural responses observed in the wild (e.g. reducing sound exposure, prey movement). Copyright © 2015 Elsevier Ltd. All rights reserved.
Spiousas, Ignacio; Etchemendy, Pablo E.; Eguia, Manuel C.; Calcagno, Esteban R.; Abregú, Ezequiel; Vergara, Ramiro O.
2017-01-01
Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation).
The results obtained in this study show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it. PMID:28690556
Hermannsen, Line; Beedholm, Kristian
2017-01-01
Acoustic harassment devices (AHDs) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating an AHD (Lofitech) signal, but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted with porpoises and 13 with seals. Porpoises exhibited avoidance reactions out to ranges of 525 m from the sound source. In contrast, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for the application of AHDs in multi-species habitats, as the sound levels required to deter less sensitive species (seals) can lead to excessively large and unwanted deterrence ranges for more sensitive species (porpoises). PMID:28791155
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin
2016-01-01
Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels of different sizes and other underwater sound sources in both static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where the sound of flowing water is included in background measurements. The size of vessels measured ranged from a small fishing boat with a 60 HP outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, many times greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that the spherical model more closely approximates observed values.
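The spherical-versus-cylindrical attenuation comparison made in the study above can be illustrated with a minimal sketch; the function names and the 170 dB source level are hypothetical, used only to show how the 20 log r and 10 log r spreading laws diverge with distance:

```python
import math

def transmission_loss(r, r_ref=1.0, model="spherical"):
    """Spreading loss in dB relative to the level at r_ref.

    Spherical spreading (free field): 20*log10(r/r_ref)
    Cylindrical spreading (e.g. a shallow channel): 10*log10(r/r_ref)
    """
    factor = 20.0 if model == "spherical" else 10.0
    return factor * math.log10(r / r_ref)

def level_at_distance(source_level_db, r, model="spherical"):
    """Received level = source level (re r_ref = 1 m) minus spreading loss."""
    return source_level_db - transmission_loss(r, model=model)

# A vessel with a hypothetical source level of 170 dB re 1 uPa at 1 m:
print(level_at_distance(170.0, 100.0, "spherical"))    # 130.0
print(level_at_distance(170.0, 100.0, "cylindrical"))  # 150.0
```

At 100 m the two models already differ by 20 dB, which is why the choice of attenuation function matters when extrapolating source levels from measurements at different distances.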
Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section
NASA Technical Reports Server (NTRS)
Brooks, T. F.; Scheiman, J.; Silcox, R. J.
1976-01-01
Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.
Malinina, E S
2014-01-01
The spatial specificity of the auditory aftereffect was studied after a short-term adaptation (5 s) to broadband noise (20-20000 Hz). Adapting stimuli were sequences of noise impulses with constant amplitude; test stimuli had either constant or changing amplitude: an increase in impulse amplitude within a sequence was perceived by listeners as approach of the sound source, and a decrease as its withdrawal. The experiments were performed in an anechoic chamber. The auditory aftereffect was estimated under the following conditions: the adapting and test stimuli were presented from a loudspeaker located at a distance of 1.1 m from the listeners (the subjectively near spatial domain) or 4.5 m from the listeners (the subjectively far spatial domain), or the adapting and test stimuli were presented from different distances. The obtained data showed that perception of the simulated movement of the sound source in both spatial domains had common characteristic features that manifested themselves both under control conditions without adaptation and after adaptation to noise. In the absence of adaptation, for both distances, an asymmetry of psychophysical curves was observed: the listeners more often judged the test stimuli as approaching. This overestimation of test stimuli as approaching was more pronounced when they were presented from the distance of 1.1 m, i.e., from the subjectively near spatial domain. After adaptation to noise, the aftereffects showed spatial specificity in both spatial domains: they were observed only when the adapting and test stimuli coincided spatially and were absent when they were separated. The aftereffects observed in the two spatial domains were similar in direction and magnitude: compared to control, the listeners more often judged the test stimuli as withdrawing.
The result of such aftereffect was restoration of the symmetry of psychometric curves and of the equiprobable estimation of direction of movement of test signals.
Andrews, John T.; Barber, D.C.; Jennings, A.E.; Eberl, D.D.; Maclean, B.; Kirby, M.E.; Stoner, J.S.
2012-01-01
Core HU97048-007PC was recovered from the continental Labrador Sea slope at a water depth of 945 m, 250 km seaward from the mouth of Cumberland Sound, and 400 km north of Hudson Strait. Cumberland Sound is a structural trough partly floored by Cretaceous mudstones and Paleozoic carbonates. The record extends from ∼10 to 58 ka. On-board logging revealed a complex series of lithofacies, including buff-colored detrital carbonate-rich sediments [Heinrich (H)-events] frequently bracketed by black facies. We investigate the provenance of these facies using quantitative X-ray diffraction on drill-core samples from Paleozoic and Cretaceous bedrock from the SE Baffin Island Shelf, and on the < 2-mm sediment fraction in a transect of five cores from Cumberland Sound to the NW Labrador Sea. A sediment unmixing program was used to discriminate between sediment sources, which included dolomite-rich sediments from Baffin Bay, calcite-rich sediments from Hudson Strait and discrete sources from Cumberland Sound. Results indicated that the bulk of the sediment was derived from Cumberland Sound, but Baffin Bay contributed to sediments coeval with H-0 (Younger Dryas), whereas Hudson Strait was the source during H-events 1–4. Contributions from the Cretaceous outcrops within Cumberland Sound bracket H-events, thus both leading and lagging Hudson Strait-sourced H-events.
Riede, Tobias; Goller, Franz
2010-10-01
Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. 
All rights reserved.
The auditory P50 component to onset and offset of sound
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi
2008-01-01
Objective: The auditory Event-Related Potential (ERP) components P50 to sound onset and offset have been reported to be similar, but their magnetic homologue has been reported absent to sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of silent intervals of 0.5 s (gaps) appearing randomly in otherwise continuous white noise and compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates to the clicks and sound onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks, while to noise offset it was absent. Latency of P50 was similar to noise onset (56 msec) and to clicks (53 msec). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher to noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: P50 is absent to sound offset, in contrast to the distinct P50 to sound onset and to clicks, which arise from different intracranial sources. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene. Sound offset does not introduce a new sound, hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent to offset. PMID:18055255
Blind separation of incoherent and spatially disjoint sound sources
NASA Astrophysics Data System (ADS)
Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter
2016-11-01
Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
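The paper's central claim, that PCA recovers the true sources when they are both incoherent and spatially orthogonal, can be checked numerically. This is a sketch with made-up microphone signatures (disjoint spatial support, as in the paper's special case), not the authors' experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two incoherent sources with spatially disjoint (hence orthogonal)
# signatures observed on 6 microphones; source powers 4.0 and 1.0.
a1 = np.array([1, 1, 1, 0, 0, 0], dtype=float); a1 /= np.linalg.norm(a1)
a2 = np.array([0, 0, 0, 1, 1, 1], dtype=float); a2 /= np.linalg.norm(a2)
powers = np.array([4.0, 1.0])

# Cross-spectral matrix at one frequency, estimated from many snapshots.
n_snap = 20000
S = np.zeros((6, 6), dtype=complex)
for _ in range(n_snap):
    c = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) * np.sqrt(powers / 2)
    x = c[0] * a1 + c[1] * a2
    S += np.outer(x, x.conj())
S /= n_snap

# PCA: diagonalize the CSM. With incoherent AND spatially orthogonal
# sources, the leading eigenvectors coincide with the true signatures
# and the eigenvalues with the source powers.
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
print(np.round(eigvals[order][:2], 1))  # close to [4. 1.]
```

If the two signatures overlapped spatially (non-orthogonal columns), the eigenvectors would mix the sources even though they are statistically uncorrelated, which is exactly the limitation the paper's sufficient condition addresses.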
Hansen, M; Wahlberg, M; Madsen, P T
2008-12-01
Underwater sound signals for biosonar and communication normally have different source properties to serve the purposes of generating efficient acoustic backscatter from small objects or conveying information to conspecifics. Harbor porpoises (Phocoena phocoena) are nonwhistling toothed whales that produce directional, narrowband, high-frequency (HF) echolocation clicks. This study tests the hypothesis that their 130 kHz HF clicks also contain a low-frequency (LF) component more suited for communication. Clicks from three captive porpoises were analyzed to quantify the LF and HF source properties. The LF component is 59 (S.E.M=1.45 dB) dB lower than the HF component recorded on axis, and even at extreme off-axis angles of up to 135 degrees , the HF component is 9 dB higher than the LF component. Consequently, the active space of the HF component will always be larger than that of the LF component. It is concluded that the LF component is a by-product of the sound generator rather than a dedicated pulse produced to serve communication purposes. It is demonstrated that distortion and clipping in analog tape recorders can explain some of the prominent LF components reported in earlier studies, emphasizing the risk of erroneous classification of sound types based on recording artifacts.
Acoustic noise generation by the DOE/NASA MOD-1 wind turbine
NASA Technical Reports Server (NTRS)
Kelley, N. D.
1981-01-01
The results of a series of measurements taken over the past year of the acoustic emissions from the DOE/NASA MOD-1 Wind Turbine show the maximum acoustic energy is concentrated in the low frequency range, often below 100 Hz. The temporal as well as the frequency characteristics of the turbine sounds have been shown to be important since the MOD-1 is capable of radiating both coherent and incoherent noise. The coherent sounds are usually impulsive and are manifested in an averaged frequency domain plot as large numbers of discrete energy bands extending from the blade passage frequency to beyond 50 Hz on occasion. It is these impulsive sounds which are identified as the principal source of the annoyance to a dozen families living within 3 km of the turbine. The source of the coherent noise appears to be the rapid, unsteady blade loads encountered as the blade passes through the wake of the tower structure. Annoying levels are occasionally reached at nearby homes due to the interaction of the low frequency, high energy peaks in the acoustic impulses and the structural modes of the homes as well as by direct radiation outdoors. The peak levels of these impulses can be enhanced or subdued through complete propagation.
Damage monitoring in historical murals by speckle interferometry
NASA Astrophysics Data System (ADS)
Hinsch, Klaus D.; Gulker, Gerd; Joost, Holger
2003-11-01
In the conservation of historical murals it is important to identify loose plaster sections that threaten to fall off. Electronic speckle interferometry in combination with acoustic excitation of the object has been employed to monitor loose areas. To avoid the disadvantages of high sound irradiation of the complete building, a novel directional audio-sound source based on nonlinear mixing of ultrasound has been introduced. The optical system was revised for optimum performance in the new environment. Emphasis is placed on noise suppression to increase sensitivity. Furthermore, amplitude and phase data of the object response over the frequency range inspected are employed to gain additional information on the state of the plaster or paint. Laboratory studies on sample specimens supplement field campaigns at historical sites.
Direct-current vertical electrical-resistivity soundings in the Lower Peninsula of Michigan
Westjohn, D.B.; Carter, P.J.
1989-01-01
Ninety-three direct-current vertical electrical-resistivity soundings were conducted in the Lower Peninsula of Michigan from June through October 1987. These soundings were made to assist in mapping the depth to brine in areas where borehole resistivity logs and water-quality data are sparse or lacking. The Schlumberger array for placement of current and potential electrodes was used for each sounding. Vertical electrical-resistivity sounding field data, shifted and smoothed sounding data, and electric layers calculated using inverse modeling techniques are presented. Also included is a summary of the near-surface conditions and depths to conductors and resistors for each sounding location.
Direct computation of turbulence and noise
NASA Technical Reports Server (NTRS)
Berman, C.; Gordon, G.; Karniadakis, G.; Batcho, P.; Jackson, E.; Orszag, S.
1991-01-01
Jet exhaust turbulence noise is computed using a time dependent solution of the three dimensional Navier-Stokes equations to supply the source terms for an acoustic computation based on the Phillips convected wave equation. An extrapolation procedure is then used to determine the far field noise spectrum in terms of the near field sound. This will lay the groundwork for studies of more complex flows typical of noise suppression nozzles.
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Jaeger, Stephen M.; Hayes, Julie A.; Allen, Christopher S.
2002-01-01
A recessed, 42-inch deep acoustic lining has been designed and installed in the 40- by 80-Foot Wind Tunnel (40x80) test section to greatly improve the acoustic quality of the facility. This report describes the test section acoustic performance as determined by a detailed static calibration; all data were acquired without wind. Global measurements of sound decay from steady noise sources showed that the facility is suitable for acoustic studies of jet noise or similar randomly generated sound. The wall sound absorption, size of the facility, and averaging effects of wide-band random noise all tend to minimize interference effects from wall reflections. The decay of white noise with distance was close to free field above 250 Hz. However, tonal sound data from propellers and fans, for example, will have an error band, described herein, caused by the sensitivity of tones to even weak interference. That error band could be minimized by use of directional instruments such as phased microphone arrays. Above 10 kHz, air absorption began to dominate the sound field in the large test section, reflections became weaker, and the test section tended toward an anechoic environment as frequency increased.
Depth dependence of wind-driven, broadband ambient noise in the Philippine Sea.
Barclay, David R; Buckingham, Michael J
2013-01-01
In 2009, as part of PhilSea09, the instrument platform known as Deep Sound was deployed in the Philippine Sea, descending under gravity to a depth of 6000 m, where it released a drop weight, allowing buoyancy to return it to the surface. On the descent and ascent, at a speed of 0.6 m/s, Deep Sound continuously recorded broadband ambient noise on two vertically aligned hydrophones separated by 0.5 m. For frequencies between 1 and 10 kHz, essentially all the noise was found to be downward traveling, exhibiting a depth-independent directional density function having the simple form cos θ, where θ ≤ 90° is the polar angle measured from the zenith. The spatial coherence and cross-spectral density of the noise show no change in character in the vicinity of the critical depth, consistent with a local, wind-driven surface-source distribution. The coherence function accurately matches that predicted by a simple model of deep-water, wind-generated noise, provided that the theoretical coherence is evaluated using the local sound speed. A straightforward inversion procedure is introduced for recovering the sound speed profile from the cross-correlation function of the noise, returning sound speeds with a root-mean-square error relative to an independently measured profile of 8.2 m/s.
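A rough sketch of the final step above (recovering sound speed from noise coherence). For purely downward-travelling noise with the cos θ directional density quoted in the abstract, the vertical coherence between two hydrophones has a simple closed form; the model and numbers below are illustrative assumptions, not the authors' exact inversion procedure:

```python
import numpy as np

def coherence(f, d, c):
    """Vertical coherence of purely downward-travelling noise with a
    cos(theta) directional density (a textbook-style sketch consistent
    with the abstract, not necessarily the authors' exact model).
    Closed form of 2 * int_0^1 u * exp(i*a*u) du, with a = k*d."""
    a = 2 * np.pi * f * d / c
    if abs(a) < 1e-9:
        return 1.0 + 0j
    return 2 * (np.exp(1j * a) * (1 - 1j * a) - 1) / a**2

# Inversion sketch: choose the sound speed whose predicted coherence
# best matches a "measured" value (synthesized here with c_true = 1530 m/s
# at 4 kHz and the instrument's 0.5 m hydrophone separation).
f, d, c_true = 4000.0, 0.5, 1530.0
measured = coherence(f, d, c_true)
candidates = np.arange(1450.0, 1601.0, 1.0)
c_est = min(candidates, key=lambda c: abs(coherence(f, d, c) - measured))
print(c_est)  # 1530.0
```

In the actual experiment the "measured" coherence would come from the cross-correlation of the two hydrophone channels, and the fit would be repeated at each depth to build a profile.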
Inverse method predicting spinning modes radiated by a ducted fan from free-field measurements.
Lewy, Serge
2005-02-01
In this study, the inverse problem of deducing the modal structure of the acoustic field generated by a ducted turbofan is addressed using conventional far-field directivity measurements. The final objective is to make input data available for predicting noise radiation in other configurations that have not been tested. The present paper is devoted to the analytical part of that study. The proposed method is based on the equations governing ducted sound propagation and free-field radiation. It leads to fast computations, checked against Rolls-Royce tests made in the framework of previous European projects. Results seem to be reliable although the system of equations to be solved is generally underdetermined (more propagating modes than acoustic measurements). A limited number of modes are thus selected according to any a priori knowledge of the sources. A first guess of the source amplitudes is obtained by adjusting the calculated maximum of radiation of each mode to the measured sound pressure level at the same angle. A least-squares fitting gives the final solution. A simple correction can be made to account for the mean flow velocity inside the nacelle, which shifts the directivity patterns. It consists of modifying the actual frequency to keep the cut-off ratios unchanged.
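The least-squares step of such an inversion can be sketched as follows. The directivity columns and amplitudes below are invented stand-ins (the real radiation matrix comes from the ducted-propagation and free-field radiation equations), but they show how retaining a reduced mode set turns an otherwise underdetermined problem into an overdetermined fit:

```python
import numpy as np

# Hypothetical setup: 8 far-field microphone angles, 3 retained modes.
# Each column of A holds one mode's (made-up) directivity pattern.
angles = np.linspace(0.0, 70.0, 8)            # degrees from duct axis
A = np.column_stack([
    np.cos(np.radians(angles)) ** 2,          # stand-in for mode 1
    np.sin(np.radians(2 * angles)),           # stand-in for mode 2
    np.sin(np.radians(angles)) ** 2,          # stand-in for mode 3
])

x_true = np.array([2.0, 1.0, 0.5])            # "true" modal amplitudes
p_measured = A @ x_true                       # noise-free synthetic data

# With more measurement angles than retained modes, least squares
# recovers the modal amplitudes.
x_fit, *_ = np.linalg.lstsq(A, p_measured, rcond=None)
print(np.round(x_fit, 3))  # recovers approximately [2.0, 1.0, 0.5]
```

With measurement noise the fit is no longer exact, which is where the paper's first-guess step (matching each mode's radiation maximum to the measured level at the same angle) helps constrain the solution.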
Finneran, James J; Branstetter, Brian K; Houser, Dorian S; Moore, Patrick W; Mulsow, Jason; Martin, Cameron; Perisho, Shaun
2014-10-01
Previous measurements of toothed whale echolocation transmission beam patterns have utilized few hydrophones and have therefore been limited to fine angular resolution only near the principal axis or poor resolution over larger azimuthal ranges. In this study, a circular, horizontal planar array of 35 hydrophones was used to measure a dolphin's transmission beam pattern with 5° to 10° resolution at azimuths from -150° to +150°. Beam patterns and directivity indices were calculated from both the peak-peak sound pressure and the energy flux density. The emitted pulse became smaller in amplitude and progressively distorted as it was recorded farther off the principal axis. Beyond ±30° to 40°, the off-axis signal consisted of two distinct pulses whose difference in time of arrival increased with the absolute value of the azimuthal angle. A simple model suggests that the second pulse is best explained as a reflection from internal structures in the dolphin's head, and does not implicate the use of a second sound source. Click energy was also more directional at the higher source levels utilized at longer ranges, where the center frequency was elevated compared to that of the lower amplitude clicks used at shorter range.
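As a sketch of how a directivity index can be estimated from such a sampled beam pattern (the pattern below is an arbitrary smooth stand-in, not dolphin data, and a full DI would also integrate over elevation rather than a single horizontal plane):

```python
import numpy as np

# 5-degree azimuthal sampling from -150 to +150 degrees, as in the study.
az_deg = np.arange(-150, 151, 5)
beam_db = -0.005 * az_deg.astype(float) ** 2   # made-up pattern, dB re on-axis
intensity = 10.0 ** (beam_db / 10.0)

# Horizontal-plane directivity index: on-axis intensity over the
# azimuthal average of the measured pattern.
di = 10.0 * np.log10(intensity.max() / intensity.mean())
print(round(di, 1))  # a few dB for this broad made-up beam
```

A narrower beam (steeper fall-off in `beam_db`) raises the DI, which is the sense in which the abstract reports click energy becoming "more directional" at higher source levels.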
Donovan, Chris; Sweet, Jennifer; Eccher, Matthew; Megerian, Cliff; Semaan, Maroun; Murray, Gail; Miller, Jonathan
2015-12-01
Tinnitus is a source of considerable morbidity, and neuromodulation has been shown to be a potential treatment option. However, the location of the primary auditory cortex within Heschl gyrus in the temporal operculum presents challenges for targeting and electrode implantation. To determine whether anatomic targeting with intraoperative verification using evoked potentials can be used to implant electrodes directly into the Heschl gyrus (HG), nine patients undergoing stereo-electroencephalogram evaluation for epilepsy were enrolled. HG was directly targeted on volumetric magnetic resonance imaging, and framed stereotaxy was used to implant an electrode parallel to the axis of the gyrus by using an oblique anterolateral-posteromedial trajectory. Intraoperative evoked potentials from auditory stimuli were recorded from multiple electrode contacts. Postoperatively, stimulation of each electrode was performed and participants were asked to describe the percept. Audiometric analysis was performed for 2 participants during subthreshold stimulation. Sounds presented to the contralateral and ipsilateral ears produced evoked potentials in HG electrodes in all participants intraoperatively. Stimulation produced a reproducible sensation of sound in all participants, with perceived volume proportional to amplitude. Four participants reported distinct sounds when different electrodes were stimulated, with more medial contacts producing tones perceived as higher in pitch. Stimulation was not associated with adverse audiometric effects. There were no complications of electrode implantation. Direct anatomic targeting with physiological verification can be used to implant electrodes directly into primary auditory cortex. If deep brain stimulation proves effective for intractable tinnitus, this technique may be useful to assist with electrode implantation. Abbreviations: DBS, deep brain stimulator; EEG, electroencephalography; HG, Heschl gyrus.
On the role of glottis-interior sources in the production of voiced sound.
Howe, M S; McGowan, R S
2012-02-01
The voice source is dominated by aeroacoustic sources downstream of the glottis. In this paper an investigation is made of the contribution to voiced speech of secondary sources within the glottis. The acoustic waveform is ultimately determined by the volume velocity of air at the glottis, which is controlled by vocal fold vibration, pressure forcing from the lungs, and unsteady backreactions from the sound and from the supraglottal air jet. The theory of aerodynamic sound is applied to study the influence on the fine details of the acoustic waveform of "potential flow" added-mass-type glottal sources, glottis friction, and vorticity either in the glottis-wall boundary layer or in the portion of the free jet shear layer within the glottis. These sources govern predominantly the high frequency content of the sound when the glottis is near closure. A detailed analysis performed for a canonical, cylindrical glottis of rectangular cross section indicates that glottis-interior boundary/shear layer vortex sources and the surface frictional source are of comparable importance; the influence of the potential flow source is about an order of magnitude smaller. © 2012 Acoustical Society of America
The silent base flow and the sound sources in a laminar jet.
Sinayoko, Samuel; Agarwal, Anurag
2012-03-01
An algorithm to compute the silent base flow sources of sound in a jet is introduced. The algorithm is based on spatiotemporal filtering of the flow field and is applicable to multifrequency sources. It is applied to an axisymmetric laminar jet and the resulting sources are validated successfully. The sources are compared to those obtained from two classical acoustic analogies, based on quiescent and time-averaged base flows. The comparison demonstrates how the silent base flow sources shed light on the sound generation process. It is shown that the dominant source mechanism in the axisymmetric laminar jet is "shear-noise," which is a linear mechanism. The algorithm presented here could be applied to fully turbulent flows to understand the aerodynamic noise-generation mechanism. © 2012 Acoustical Society of America
The acoustical cues to sound location in the guinea pig (Cavia porcellus)
Greene, Nathanial T; Anbuhl, Kelsey L; Williams, Whitney; Tollin, Daniel J.
2014-01-01
There are three main acoustical cues to sound location, each attributable to space- and frequency-dependent filtering of the propagating sound waves by the outer ears, head, and torso: interaural differences in time (ITD) and level (ILD), as well as monaural spectral shape cues. While the guinea pig has been a common model for studying the anatomy, physiology, and behavior of binaural and spatial hearing, extensive measurements of their available acoustical cues are lacking. Here, these cues were determined from directional transfer functions (DTFs), the directional components of the head-related transfer functions, for eleven adult guinea pigs. In the frontal hemisphere, monaural spectral notches were present for frequencies from ~10 to 20 kHz; in general, the notch frequency increased with increasing sound source elevation and in azimuth toward the contralateral ear. The maximum ITDs calculated from low-pass filtered (2 kHz cutoff frequency) DTFs were ~250 µs, whereas the maximum ITD measured with low-frequency tone pips was over 320 µs. A spherical head model underestimates ITD magnitude under normal conditions, but closely approximates measured values when the pinnae are removed. Interaural level differences (ILDs) strongly depended on location and frequency; maximum ILDs were <10 dB for frequencies <4 kHz and as large as 40 dB for frequencies >10 kHz. Removal of the pinna reduced the depth and sharpness of spectral notches, altered the acoustical axis, and reduced the acoustical gain, ITDs, and ILDs; however, spectral shape features and acoustical gain were not completely eliminated, suggesting a substantial contribution of the head and torso in altering the sounds present at the tympanic membrane. PMID:25051197
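The spherical head model mentioned above is commonly written in Woodworth's form; a sketch with an assumed (not measured) effective head radius:

```python
import math

def woodworth_itd(theta_deg, a, c=343.0):
    """Spherical-head ITD in seconds (Woodworth-style model):
    ITD = (a/c) * (theta + sin(theta)), with source azimuth theta and
    head radius a in metres. Used here as a stand-in for the abstract's
    spherical head model; the guinea-pig radius below is a rough
    illustrative value, not taken from the paper."""
    theta = math.radians(theta_deg)
    return (a / c) * (theta + math.sin(theta))

a_guinea_pig = 0.02  # ~2 cm effective head radius (assumption)
itd_max = woodworth_itd(90.0, a_guinea_pig)
print(round(itd_max * 1e6))  # 150 (microseconds)
```

With this rough 2 cm radius the model predicts a maximum ITD near 150 µs, well below the ~250-320 µs measured, consistent with the abstract's observation that a bare spherical head underestimates ITD when the pinnae are present.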
Sound Radiated by a Wave-Like Structure in a Compressible Jet
NASA Technical Reports Server (NTRS)
Golubev, V. V.; Prieto, A. F.; Mankbadi, R. R.; Dahl, M. D.; Hixon, R.
2003-01-01
This paper extends the analysis of acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. Compared to previous work, a modified approach to the sound source modeling is examined that employs a set of solutions to linearized Euler equations. The sound radiation is then calculated using an integral surface method.
Photoacoustic Effect Generated from an Expanding Spherical Source
NASA Astrophysics Data System (ADS)
Bai, Wenyu; Diebold, Gerald J.
2018-02-01
Although the photoacoustic effect is typically generated by amplitude-modulated continuous or pulsed radiation, the form of the wave equation for pressure that governs the generation of sound indicates that optical sources moving in an absorbing fluid can produce sound as well. Here, the characteristics of the acoustic wave produced by a radially symmetric Gaussian source expanding outwardly from the origin are found. The unique feature of the photoacoustic effect from the spherical source is a trailing compressive wave that arises from reflection of an inwardly propagating component of the wave. Similar to the one-dimensional geometry, an unbounded amplification effect is found for the Gaussian source expanding at the sound speed.
NASA Technical Reports Server (NTRS)
Haynes, Jared; Kenny, Jeremy
2009-01-01
Lift-off acoustic environments for NASA's Ares I - Crew Launch Vehicle are predicted using the second source distribution methodology described in the NASA SP-8072. Three modifications made to the model include a shorter core length approximation, a core termination procedure upon plume deflection, and a new set of directivity indices measured from static test firings of the Reusable Solid Rocket Motor (RSRM). The modified sound pressure level predictions increased more than 5 dB overall, and the peak levels shifted two third-octave bands higher in frequency.
NASA Astrophysics Data System (ADS)
Kozuka, Teruyuki; Yasui, Kyuichi; Tuziuti, Toru; Towata, Atsuya; Lee, Judy; Iida, Yasuo
2009-07-01
Using a standing-wave field generated between a sound source and a reflector, it is possible to trap small objects at nodes of the sound pressure distribution in air. In this study, a sound field generated under a flat or concave reflector was studied by both experimental measurement and numerical calculation. The calculated result agrees well with the experimental data. The maximum force generated between a sound source 25.0 mm in diameter and a concave reflector is 0.8 mN in the experiment. A steel ball 2.0 mm in diameter was levitated in the sound field in air.
Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon
2010-10-01
An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
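The intensity-based DOA estimator discussed above can be sketched in a few lines. This is an illustrative, noise-free, free-field sketch; the variable names and the 1 kHz test signal are ours, and the particle velocity is left unscaled (no 1/ρc factor) since only the direction of the intensity vector matters after normalization.

```python
import numpy as np

def doa_from_intensity(p, v):
    """Estimate the source direction from the time-averaged acoustic
    intensity vector I = <p(t) v(t)> measured by an acoustic vector sensor.

    p : (N,) pressure samples
    v : (N, 3) particle-velocity samples
    Returns a unit vector pointing from the sensor toward the source.
    """
    intensity = np.mean(p[:, None] * v, axis=0)  # time-averaged intensity
    # Acoustic energy flows away from the source, so the source lies at -I.
    return -intensity / np.linalg.norm(intensity)

# Noise-free plane wave arriving from direction d (no reverberation):
d = np.array([0.6, 0.8, 0.0])          # true unit vector, sensor -> source
t = np.linspace(0.0, 0.01, 480)
p = np.cos(2 * np.pi * 1000.0 * t)     # 1 kHz pressure signal
v = np.outer(p, -d)                    # wave propagates along -d, away from source
d_hat = doa_from_intensity(p, v)       # recovers d in this ideal case
```

In the ideal case the estimate is exact; the bias analyzed in the paper appears once reverberant components are added to `v`.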
Sound field reproduction as an equivalent acoustical scattering problem.
Fazi, Filippo Maria; Nelson, Philip A
2013-11-01
Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
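For reference, the single layer potential mentioned above has the standard form below (written from the general theory with an e^{+iωt} time convention assumed, not transcribed from the paper): the reproduced pressure is the secondary-source strength μ integrated against the free-field Green's function over the boundary ∂Ω of the reproduction region.

```latex
p(\mathbf{x}) = \int_{\partial\Omega} \mu(\mathbf{y})\,
    \frac{e^{-ik|\mathbf{x}-\mathbf{y}|}}{4\pi\,|\mathbf{x}-\mathbf{y}|}\,
    \mathrm{d}S(\mathbf{y}),
\qquad \mathbf{x} \in \Omega .
```

The reproduction problem is then to find μ such that p matches the desired field inside Ω, which is the operator equation the paper identifies with an equivalent acoustic scattering problem.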
Investigation of spherical loudspeaker arrays for local active control of sound.
Peleg, Tomer; Rafaely, Boaz
2011-10-01
Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Ramos, António L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald
2013-06-01
Acoustical sniper positioning is based on the detection and direction-of-arrival estimation of the shockwave and the muzzle blast acoustical signals. In real-life situations, the detection and direction-of-arrival estimation processes are usually performed under the influence of background noise sources, e.g., vehicle noise, and might suffer non-negligible inaccuracies that can affect system performance and reliability negatively, especially when detecting the muzzle sound at long range and over absorbing terrain. This paper introduces a multi-band spectral-subtraction-based algorithm for real-time noise reduction, applied to gunshot acoustical signals. The ballistic shockwave and the muzzle blast signals exhibit distinct frequency contents that are affected differently by additive noise. In most real situations, the noise component is colored, and a multi-band spectral subtraction approach for noise reduction contributes to reducing the presence of artifacts in denoised signals. The proposed algorithm is tested using a dataset generated by combining signals from real gunshots and real vehicle noise. The noise component was generated using a steel-tracked military tank running on asphalt and therefore includes the sound from the vehicle engine, which varies slightly in frequency over time according to the engine's rpm, and the sound from the steel tracks as the vehicle moves.
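The multi-band idea can be sketched per analysis frame as follows. This is an illustrative sketch only: the band edges, over-subtraction factor `alpha`, and spectral floor `beta` are assumed tuning choices, not values from the paper, and a real implementation would vary `alpha` per band for colored noise.

```python
import numpy as np

def multiband_spectral_subtraction(frame, noise_mag, band_edges, fs,
                                   alpha=2.0, beta=0.02):
    """One frame of multi-band spectral subtraction (illustrative sketch).

    frame      : (N,) time-domain samples of the noisy signal
    noise_mag  : (N//2 + 1,) noise magnitude-spectrum estimate, e.g. taken
                 from a noise-only (vehicle idling) segment
    band_edges : band boundaries in Hz; per-band over-subtraction factors
                 could replace the single alpha used here
    """
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    clean = mag.copy()                     # bins outside the bands pass through
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        idx = (freqs >= lo) & (freqs < hi)
        sub = mag[idx] - alpha * noise_mag[idx]        # over-subtraction
        clean[idx] = np.maximum(sub, beta * mag[idx])  # spectral floor
    return np.fft.irfft(clean * np.exp(1j * phase), n=len(frame))
```

Frames would be windowed, processed, and overlap-added in a streaming implementation; the noisy phase is kept unchanged, as is usual for spectral subtraction.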
Some factors influencing radiation of sound from flow interaction with edges of finite surfaces
NASA Technical Reports Server (NTRS)
Hayden, R. E.; Fox, H. L.; Chanaud, R. C.
1976-01-01
Edges of surfaces which are exposed to unsteady flow cause both strictly acoustic effects and hydrodynamic effects, in the form of generation of new hydrodynamic sources in the immediate vicinity of the edge. An analytical model is presented which develops the explicit sound-generation role of the velocity and Mach number of the eddy convection past the edge, and the importance of relative scale lengths of the turbulence, as well as the relative intensity of pressure fluctuations. The Mach number (velocity) effects show that the important parameter is the convection Mach number of the eddies. The effects of turbulence scale lengths, isotropy, and spatial density (separation) are shown to be important in determining the level and spectrum of edge sound radiated for the edge dipole mechanism. Experimental data is presented which provides support for the dipole edge noise model in terms of Mach number (velocity) scaling, parametric dependence on flow field parameters, directivity, and edge diffraction effects.
Liu, Juan; Ando, Hiroshi
2016-01-01
Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. 
We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718
Behrendt, John C.
2013-01-01
The West Antarctic Ice Sheet (WAIS) flows through the volcanically active West Antarctic Rift System (WARS). The aeromagnetic method has been the most useful geophysical tool for identification of subglacial volcanic rocks since the 1959–64 surveys, particularly combined with 1978 radar ice-sounding. The unique 1991–97 Central West Antarctica (CWA) aerogeophysical survey covering 354,000 km2 over the WAIS (5-km line-spaced, orthogonal lines of aeromagnetic, radar ice-sounding, and aerogravity measurements) still provides invaluable information on subglacial volcanic rocks, particularly combined with the older aeromagnetic profiles. These data indicate numerous 100 to >1000 nT, 5–50-km-width, shallow-source magnetic anomalies over an area greater than 1.2 × 10^6 km2, mostly from subglacial volcanic sources. I interpreted the CWA anomalies as defining about 1000 “volcanic centers” requiring high remanent normal magnetizations in the present field direction. About 400 anomaly sources correlate with bed topography. At least 80% of these sources have less than 200 m relief at the WAIS bed. They appear modified by moving ice, requiring a younger age than the WAIS (about 25 Ma). Exposed volcanoes in the WARS are … The present rapid changes resulting from global warming could be accelerated by subglacial volcanism.
Seismic and Biological Sources of Ambient Ocean Sound
NASA Astrophysics Data System (ADS)
Freeman, Simon Eric
Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas.
Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.
Noise suppression of a dipole source by tensioned membrane with side-branch cavities
Liu, Y.; Choy, Y. S.; Huang, L.; Cheng, L.
2012-01-01
Reducing the ducted-fan noise at the low frequency range remains a big technical challenge. This study presents a passive approach to directly suppress the dipole sound radiation from an axial-flow fan housed by a tensioned membrane with cavity backing. The method aims at achieving control of low frequency noise with an appreciable bandwidth. The use of the membrane not only eliminates the aerodynamic loss of flow, but also provides flexibility in controlling the range of the stopband with high insertion loss by varying its tension and mass. A three-dimensional model is presented which allows the performance of the proposed device to be explored analytically. With the proper design, this device can achieve a noise reduction of 5 dB higher than the empty expansion cavity recently proposed by Huang et al. [J. Acoust. Soc. Am. 128, 152–163 (2010)]. Through the detailed modal analysis, even in vacuo modes of the membrane vibration are found to play an important role in the suppression of sound radiation from the dipole source. Experimental validation is conducted with a loudspeaker as the dipole source and good agreement between the predicted and measured insertion loss is achieved. PMID:22978868
A study of poultry processing plant noise characteristics and potential noise control techniques
NASA Technical Reports Server (NTRS)
Wyvill, J. C.; Jape, A. D.; Moriarity, L. J.; Atkins, R. D.
1980-01-01
The noise environment in a typical poultry processing plant was characterized by developing noise contours for two representative plants: Central Soya of Athens, Inc., Athens, Georgia, and Tip Top Poultry, Inc., Marietta, Georgia. Contour information was restricted to the evisceration area of both plants because nearly 60 percent of all process employees are stationed in this area during a normal work shift. Both plant evisceration areas were composed of tile walls, sheet metal ceilings, and concrete floors. Processing was performed in an assembly-line fashion in which the birds travel through the area on overhead shackles while personnel remain at fixed stations. Processing machinery was present throughout the area. In general, the poultry processing noise problem is the result of loud sources and reflective surfaces. Within the evisceration area, it can be concluded that only a few major sources (lung guns, a chiller component, and hock cutters) are responsible for essentially all direct and reverberant sound pressure levels currently observed during normal operations. Consequently, any effort to reduce the noise problem must first address the sound power output of these sources and/or the absorptive qualities of the room.
Smith, Rosanna C G; Price, Stephen R
2014-01-01
Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
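A minimal version of such a first-principles ITD calculation can be sketched with the classic Woodworth spherical-head formula. This is our choice of model for illustration; the head radius and the 10 µs just-noticeable ITD below are assumed values, not numbers taken from the paper.

```python
import numpy as np

A = 0.0875   # assumed head radius in metres (illustrative value)
C = 343.0    # speed of sound in air, m/s

def itd_woodworth(theta):
    """Woodworth spherical-head ITD for azimuth theta (radians from midline)."""
    return (A / C) * (theta + np.sin(theta))

def angular_jnd(theta, delta_itd=10e-6):
    """Azimuth change needed to produce a fixed just-noticeable ITD change
    (10 microseconds here, an assumed value), via the ITD curve's slope."""
    slope = (A / C) * (1.0 + np.cos(theta))   # d(ITD)/d(theta)
    return delta_itd / slope

# Acuity is best (smallest angular JND) at the midline and degrades laterally,
# because the ITD curve flattens as the source moves to the side:
midline = angular_jnd(0.0)
lateral = angular_jnd(np.radians(75.0))
```

With these assumed numbers the midline angular JND comes out near 1°, consistent with the acuity quoted above, and grows toward the side, matching the qualitative pattern the paper models with a uniform just-noticeable ITD.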
Contribution to the Understanding of Particle Motion Perception in Marine Invertebrates.
André, Michel; Kaifu, Kenzo; Solé, Marta; van der Schaar, Mike; Akamatsu, Tomonari; Balastegui, Andreu; Sánchez, Antonio M; Castell, Joan V
2016-01-01
Marine invertebrates potentially represent a group of species whose ecology may be influenced by artificial noise. Exposure to anthropogenic sound sources could have a direct consequence on the functionality and sensitivity of their sensory organs, the statocysts, which are responsible for their equilibrium and movements in the water column. The availability of novel laser Doppler vibrometer techniques has recently opened the possibility of measuring whole body (distance, velocity, and acceleration) vibration as a direct stimulus eliciting statocyst response, offering the scientific community a new level of understanding of the marine invertebrate hearing mechanism.
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i. e. a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington
Uhrich, M.A.; McGrath, T.S.
1997-01-01
Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers from sources other than the Puget Sound Environmental Atlas include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.
Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.
Firtha, Gergely; Fiala, Péter
2017-08-01
The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions 2.5D Wave Field Synthesis driving functions are derived for arbitrary shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.
Grieco-Calub, Tina M.; Litovsky, Ruth Y.
2010-01-01
Objectives: To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; and to determine if sound source localization continues to improve with longer durations of bilateral experience. Design: Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle (MAA). Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15 loudspeakers); errors are quantified using the root-mean-square (RMS) error. Results: Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19° to 56°. Performance of the normal-hearing group, with RMS errors ranging from 9° to 29°, was significantly better. Within the BICI group, in 11 of 21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit.
There was a significant correlation between spatial acuity and sound localization accuracy (R² = 0.68, p < 0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions: A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615
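The RMS error metric used to score the identification task can be sketched as follows; the loudspeaker angles and listener responses below are hypothetical, chosen only to illustrate the computation.

```python
import numpy as np

def rms_localization_error(actual_deg, perceived_deg):
    """Root-mean-square error (degrees) between loudspeaker azimuths and
    the listener's identified positions, as used to score the task."""
    actual = np.asarray(actual_deg, dtype=float)
    perceived = np.asarray(perceived_deg, dtype=float)
    return float(np.sqrt(np.mean((perceived - actual) ** 2)))

# Hypothetical responses on a 7-loudspeaker arc (angles are illustrative):
actual = [-90, -60, -30, 0, 30, 60, 90]
perceived = [-60, -60, -30, 0, 0, 60, 60]
err = rms_localization_error(actual, perceived)
```

These made-up responses give an RMS error just under 20°, at the better end of the BICI range reported above.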
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainty pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief-functions framework. The source locations and strengths can then be estimated using a variant of the EM algorithm known as the Evidential EM (E2M) algorithm. Finally, both simulated and real experiments illustrate the advantage of using EM in the case without uncertainty and E2M in the case of uncertain measurements.
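To make the mixture-of-latent-signals idea concrete, here is a toy EM iteration for a simplified narrowband version of the model, with the steering vectors (i.e., source positions) assumed known so that only the source powers are estimated. This is our illustrative reduction, not the authors' algorithm, which also estimates locations and handles uncertainty via E2M.

```python
import numpy as np

def em_source_powers(x, A, sigma2, n_iter=50):
    """EM estimation of source powers q_k for the narrowband model
    x_t = A s_t + n_t, with latent sources s_k ~ CN(0, q_k) and sensor
    noise n ~ CN(0, sigma2 I). Illustrative sketch with known steering
    matrix A; x is (M, T) complex snapshots, A is (M, K)."""
    M, T = x.shape
    K = A.shape[1]
    q = np.ones(K)                                        # initial power guesses
    for _ in range(n_iter):
        Sx = (A * q) @ A.conj().T + sigma2 * np.eye(M)    # model covariance
        B = np.linalg.solve(Sx, A)                        # columns: Sx^{-1} a_k
        g = np.real(np.sum(A.conj() * B, axis=0))         # a_k^H Sx^{-1} a_k
        s_hat = q[:, None] * (B.conj().T @ x)             # E-step: posterior means
        p_post = q - (q ** 2) * g                         # posterior variances
        q = np.mean(np.abs(s_hat) ** 2, axis=1) + p_post  # M-step: new powers
    return q

# Two synthetic sources on an 8-element half-wavelength line array
# (bearings and powers are made-up test values):
rng = np.random.default_rng(1)
M, T = 8, 2000
angles = np.radians([-30.0, 40.0])
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(angles)[None, :])
q_true = np.array([4.0, 1.0])
s = np.sqrt(q_true / 2)[:, None] * (rng.standard_normal((2, T))
                                    + 1j * rng.standard_normal((2, T)))
n = np.sqrt(0.005) * (rng.standard_normal((M, T))
                      + 1j * rng.standard_normal((M, T)))
x = A @ s + n
q_hat = em_source_powers(x, A, sigma2=0.01)   # approaches q_true
```

Each iteration computes the Wiener posterior of the latent source signals (E-step) and re-estimates each power as the mean posterior second moment (M-step); extending the M-step to also search over steering vectors would recover location estimates in the spirit of the paper.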
Auditory Localization: An Annotated Bibliography
1983-11-01
transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources ... important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical
Blue whale vocalizations recorded around New Zealand: 1964-2013.
Miller, Brian S; Collins, Kym; Barlow, Jay; Calderan, Susannah; Leaper, Russell; McDonald, Mark; Ensor, Paul; Olson, Paula A; Olavarria, Carlos; Double, Michael C
2014-03-01
Previous underwater recordings made in New Zealand have identified a complex sequence of low frequency sounds that have been attributed to blue whales based on similarity to blue whale songs in other areas. Recordings of sounds with these characteristics were made opportunistically during the Southern Ocean Research Partnership's recent Antarctic Blue Whale Voyage. Detections of these sounds occurred all around the South Island of New Zealand during the voyage transits from Nelson, New Zealand to the Antarctic and return. By following acoustic bearings from directional sonobuoys, blue whales were visually detected and confirmed as the source of these sounds. These recordings, together with the historical recordings made northeast of New Zealand, indicate song types that persist over several decades and are indicative of the year-round presence of a population of blue whales that inhabits the waters around New Zealand. Measurements of the four-part vocalizations reveal that blue whale song in this region has changed slowly, but consistently over the past 50 years. The most intense units of these calls were detected as far south as 53°S, which represents a considerable range extension compared to the limited prior data on the spatial distribution of this population.
Cazau, Dorian; Adam, Olivier; Aubin, Thierry; Laitman, Jeffrey T; Reidenberg, Joy S
2016-10-10
Although mammalian vocalizations are predominantly harmonically structured, they can exhibit an acoustic complexity with nonlinear vocal sounds, including deterministic chaos and frequency jumps. Such sounds are normative events in mammalian vocalizations, and can be directly traceable to the nonlinear nature of vocal-fold dynamics underlying typical mammalian sound production. In this study, we give qualitative descriptions and quantitative analyses of nonlinearities in the song repertoire of humpback whales from the Ste Marie channel (Madagascar) to provide more insight into the potential communication functions and underlying production mechanisms of these features. A low-dimensional biomechanical modeling of the whale's U-fold (vocal folds homolog) is used to relate specific vocal mechanisms to nonlinear vocal features. Recordings of living humpback whales were searched for occurrences of vocal nonlinearities (instabilities). Temporal distributions of nonlinearities were assessed within sound units, and between different songs. The anatomical production sources of vocal nonlinearities and the communication context of their occurrences in recordings are discussed. Our results show that vocal nonlinearities may be a communication strategy that conveys information about the whale's body size and physical fitness, and thus may be an important component of humpback whale songs.
Detection of Sound Image Movement During Horizontal Head Rotation
Ohba, Kagesho; Iwaya, Yukio; Suzuki, Yôiti
2016-01-01
Movement detection for a virtual sound source was measured during the listener's horizontal head rotation. Listeners were instructed to rotate their heads at a given speed. A trial consisted of two intervals. During an interval, a virtual sound source was presented 60° to the right or left of the listener, who was instructed to rotate the head to face the sound image position. Then, in one of the pair of intervals, the sound position was moved slightly in the middle of the rotation. Listeners were asked to judge in which interval of a trial the sound stimuli moved. Results suggest that detection thresholds are higher when listeners rotate their heads. Moreover, this effect was found to be independent of the rotation velocity.
NASA Technical Reports Server (NTRS)
Lucas, Michael J.; Marcolini, Michael A.
1997-01-01
The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program being developed for NASA Langley Research Center which calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set of sound hemispheres provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format, and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.
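The hemisphere-plus-propagation idea can be sketched in a few lines: interpolate a level on the source hemisphere at the emission angle, then apply a propagation correction to the receiver range. The sketch below uses plain spherical spreading and made-up hemisphere data; RNM's actual propagation model includes additional effects such as atmospheric absorption and ground reflection.

```python
import numpy as np

def receiver_level(hemi_levels_db, hemi_angles_deg, ref_radius_m,
                   receiver_angle_deg, receiver_range_m):
    """Level at a receiver: interpolate the source hemisphere at the
    emission angle, then subtract spherical-spreading loss."""
    l_ref = np.interp(receiver_angle_deg, hemi_angles_deg, hemi_levels_db)
    return l_ref - 20.0 * np.log10(receiver_range_m / ref_radius_m)

# Made-up hemisphere: one band's levels vs polar angle at a 10 m radius
angles = np.array([0.0, 45.0, 90.0])    # polar angle, deg
levels = np.array([100.0, 95.0, 90.0])  # dB at the 10 m reference radius
print(receiver_level(levels, angles, 10.0, 45.0, 100.0))  # 75.0
```

Repeating this for every grid point and band yields the kind of ground contour map the program outputs.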
Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles
2011-11-01
Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada, and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin, and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin, and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters: vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.
The noise generated by a landing gear wheel with hub and rim cavities
NASA Astrophysics Data System (ADS)
Wang, Meng; Angland, David; Zhang, Xin
2017-03-01
Wheels are one of the major noise sources of landing gears. Accurate numerical predictions of wheel noise can provide an insight into the physical mechanism of landing gear noise generation and can aid in the design of noise control devices. The major noise sources of a 33% scaled isolated landing gear wheel are investigated by simulating three different wheel configurations using high-order numerical simulations to compute the flow field and the FW-H equation to obtain the far-field acoustic pressures. The baseline configuration is a wheel with a hub cavity and two rim cavities. Two additional simulations are performed; one with the hub cavity covered (NHC) and the other with both the hub cavity and rim cavities covered (NHCRC). These simulations isolate the effects of the hub cavity and rim cavities on the overall wheel noise. The surface flow patterns are visualised by shear stress lines and show that the flow separations and attachments on the side of the wheel, in both the baseline and the configuration with only the hub cavity covered, are significantly reduced by covering both the hub and rim cavities. A frequency-domain FW-H equation is used to identify the noise source regions on the surface of the wheel. The tyre is the main low frequency noise source and shows a lift dipole and side force dipole pattern depending on the frequency. The hub cavity is identified as the dominant middle frequency noise source and radiates in a frequency range centered around the first and second depth modes of the cylindrical hub cavity. The rim cavities are the main high-frequency noise sources. With the hub cavity and rim cavities covered, the largest reduction in Overall Sound Pressure Level (OASPL) is achieved in the hub side direction, and the radiated sound is also reduced in the other directions.
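The "first and second depth modes" of a cavity closed at one end follow the standard quarter-wave relation f_n = (2n - 1) c / (4 L). A minimal sketch, with an assumed (not reported) cavity depth:

```python
def depth_mode_freqs(depth_m, n_modes=2, c=343.0):
    """Quarter-wave depth modes of a cavity closed at one end:
    f_n = (2n - 1) * c / (4 * L)."""
    return [(2 * n - 1) * c / (4.0 * depth_m) for n in range(1, n_modes + 1)]

# Assumed 7 cm hub-cavity depth (illustrative, not the model's dimension)
print([round(f) for f in depth_mode_freqs(0.07)])  # [1225, 3675]
```

The odd-multiple spacing of these modes is what places the hub-cavity contribution in a well-defined middle-frequency band.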
En route noise levels from propfan test assessment airplane
NASA Technical Reports Server (NTRS)
Garber, Donald P.; Willshire, William L., Jr.
1994-01-01
The en route noise test was designed to characterize propagation of propfan noise from cruise altitudes to the ground. In-flight measurements of propfan source levels and directional patterns were made by a chase plane flying in formation with the propfan test assessment (PTA) airplane. Ground noise measurements were taken during repeated flights over a distributed microphone array. The microphone array on the ground was used to provide ensemble-averaged estimates of mean flyover noise levels, establish confidence limits for those means, and measure propagation-induced noise variability. Even for identical nominal cruise conditions, peak sound levels for individual overflights varied substantially about the average, particularly when overflights were performed on different days. Large day-to-day variations in peak level measurements appeared to be caused by large day-to-day differences in propagation conditions and tended to obscure small variations arising from operating conditions. A parametric evaluation of the sensitivity of this prediction method to weather measurement and source level uncertainties was also performed. In general, predictions showed good agreement with measurements. However, the method was unable to predict short-term variability of ensemble-averaged data within individual overflights. Although variations in absorption appear to be the dominant factor in variations of peak sound levels recorded on the ground, accurate predictions of those levels require that a complete description of operational conditions be taken into account. The comprehensive and integrated methods presented in this paper have adequately predicted ground-measured sound levels. On average, peak sound levels were predicted within 3 dB for each of the three different cruise conditions.
Mapping the sound field of an erupting submarine volcano using an acoustic glider.
Matsumoto, Haru; Haxel, Joseph H; Dziak, Robert P; Bohnenstiehl, Delwayne R; Embley, Robert W
2011-03-01
An underwater glider with an acoustic data logger flew toward a recently discovered erupting submarine volcano in the northern Lau basin. With the volcano providing a wide-band sound source, recordings from the two-day survey produced a two-dimensional sound level map spanning 1 km (depth) × 40 km (distance). The observed sound field shows depth- and range-dependence, with the first-order spatial pattern being consistent with the predictions of a range-dependent propagation model. The results allow constraining the acoustic source level of the volcanic activity and suggest that the glider provides an effective platform for monitoring natural and anthropogenic ocean sounds. © 2011 Acoustical Society of America
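Backing out a source level from a received level is, in the simplest picture, SL = RL + TL with TL from spherical spreading plus absorption. This sketch is far cruder than the range-dependent propagation model used in the survey, and the numbers are illustrative:

```python
import math

def source_level(received_db, range_m, alpha_db_per_km=0.0):
    """Source level = received level + transmission loss, here modeled as
    spherical spreading (20 log10 r) plus optional linear absorption."""
    tl = 20.0 * math.log10(range_m) + alpha_db_per_km * range_m / 1000.0
    return received_db + tl

# 120 dB received 10 km from the vent -> 200 dB source level (spreading only)
print(source_level(120.0, 10000.0))  # 200.0
```

A range-dependent model replaces the single 20 log10 r term with the depth- and range-varying loss actually observed along the glider track.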
Improved Mirror Source Method in Room Acoustics
NASA Astrophysics Data System (ADS)
Mechel, F. P.
2002-10-01
Most authors in room acoustics qualify the mirror source method (MS-method) as the only exact method to evaluate sound fields in auditoria. But evidently nobody applies it. The reason for this discrepancy is the abundantly high number of needed mirror sources reported in the literature, although such estimates of the needed number of mirror sources are mostly used to justify more or less heuristic modifications of the MS-method. The present, intentionally tutorial article accentuates the analytical foundations of the MS-method, whereby the number of needed mirror sources is already reduced. Further, the task of field evaluation in three-dimensional spaces is reduced to a sequence of tasks in two-dimensional room edges. This not only allows the use of easier geometrical computations in two dimensions, but also the sound field in corner areas can be represented by a single (directional) source sitting on the corner line, so that only this "corner source" must be mirror-reflected in the further process. This procedure gives a drastic reduction of the number of needed equivalent sources. Finally, the traditional MS-method is not applicable in rooms with convex corners (the angle between the corner flanks, measured on the room side, exceeds 180°). In such cases, the MS-method is combined below with the second principle of superposition (PSP). It reduces the scattering task at convex corners to two sub-tasks between one flank and the median plane of the room wedge, i.e., always in concave corner areas where the MS-method can be applied.
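The core bookkeeping of the MS-method for a box-shaped room can be sketched compactly: each axis contributes a lattice of image positions, and each image carries a reflection count that sets its amplitude. The sketch below uses a single frequency-independent wall reflection coefficient and a simple total-order cutoff, both illustrative assumptions; the article's corner-source reduction is not implemented here.

```python
import itertools
import numpy as np

def image_source_echoes(room, src, rcv, order, c=343.0, beta=0.9):
    """Mirror (image) source sketch for a box room: enumerate image
    positions of the source and return sorted (delay_s, amplitude)
    pairs at the receiver. beta is an assumed, frequency-independent
    wall reflection coefficient."""
    def axis_images(s, L):
        # Image at 2nL + s needs |2n| wall hits on this axis; 2nL - s, |2n-1|.
        for n in range(-order, order + 1):
            yield 2 * n * L + s, abs(2 * n)
            yield 2 * n * L - s, abs(2 * n - 1)

    echoes = []
    for (x, rx), (y, ry), (z, rz) in itertools.product(
            axis_images(src[0], room[0]),
            axis_images(src[1], room[1]),
            axis_images(src[2], room[2])):
        hits = rx + ry + rz
        if hits > order:              # bound the total reflection order
            continue
        d = float(np.linalg.norm(np.array([x, y, z]) - np.array(rcv, float)))
        echoes.append((d / c, beta ** hits / d))
    return sorted(echoes)

# 5 x 4 x 3 m room; the first arrival is the direct path (2 m -> ~5.8 ms)
echoes = image_source_echoes((5.0, 4.0, 3.0), (1.0, 1.0, 1.0),
                             (3.0, 1.0, 1.0), order=1)
print(round(echoes[0][0] * 1000, 2))  # 5.83
```

The combinatorial growth of this enumeration with reflection order is exactly the cost that the article's corner-source formulation is designed to cut down.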
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2003-10-01
Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environments and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three-state HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
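Scoring an observation sequence against per-class HMMs reduces to the forward algorithm: the class whose model gives the highest likelihood wins. A minimal discrete-output sketch with made-up two-class parameters (the paper's models are three-state HMMs trained on real recordings):

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Forward algorithm: log P(obs | HMM) for a discrete-output HMM.
    pi: initial state probs, A: transitions, B[state, symbol]: emissions."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = np.log(np.exp(alpha) @ A) + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

# Two made-up 3-state, 2-symbol class models sharing the same transitions
pi = np.full(3, 1 / 3)
A = np.array([[0.80, 0.15, 0.05],
              [0.05, 0.80, 0.15],
              [0.05, 0.15, 0.80]])
B_quiet = np.array([[0.9, 0.1]] * 3)   # mostly emits symbol 0
B_noisy = np.array([[0.1, 0.9]] * 3)   # mostly emits symbol 1
obs = [1, 1, 0, 1]
scores = {name: log_forward(obs, pi, A, B)
          for name, B in [("quiet", B_quiet), ("noisy", B_noisy)]}
print(max(scores, key=scores.get))  # noisy
```

Real systems score cepstral feature vectors with Gaussian emissions, but the decision rule, argmax over per-class forward likelihoods, is the same.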
Sound radiation from a flanged inclined duct.
McAlpine, Alan; Daymond-King, Alex P; Kempton, Andrew J
2012-12-01
A simple method to calculate sound radiation from a flanged inclined duct is presented. An inclined annular duct is terminated by a rigid vertical plane. The duct termination is representative of a scarfed exit. The concept of a scarfed duct has been examined in turbofan aero-engines as a means to, potentially, shield a portion of the radiated sound from being transmitted directly to the ground. The sound field inside the annular duct is expressed in terms of spinning modes. Exterior to the duct, the radiated sound field owing to each mode can be expressed in terms of its directivity pattern, which is found by evaluating an appropriate form of Rayleigh's integral. The asymmetry is shown to affect the amplitude of the principal lobe of the directivity pattern, and to alter the proportion of the sound power radiated up or down. The methodology detailed in this article provides a simple engineering approach to investigate the sound radiation for a three-dimensional problem.
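Evaluating a Rayleigh integral numerically is straightforward for the textbook special case of a flat circular piston in a baffle; the inclined annular termination studied in the paper adds the asymmetry discussed above. A sketch of the far-field directivity by direct summation over the piston face (geometry and discretization are illustrative):

```python
import numpy as np

def piston_directivity(ka, theta):
    """Far-field directivity |D(theta)| of a baffled circular piston by
    numerical evaluation of the far-field Rayleigh integral, normalized
    so that D(0) = 1."""
    a = 1.0
    k = ka / a
    # Polar sampling of the piston face
    r = np.linspace(0.0, a, 200)
    phi = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
    R, P = np.meshgrid(r, phi)
    dS = R * (r[1] - r[0]) * (phi[1] - phi[0])   # area elements
    x = R * np.cos(P)
    # Far-field phase term of Rayleigh's integral
    p = np.abs(np.sum(np.exp(-1j * k * x * np.sin(theta)) * dS))
    return p / np.sum(dS)

print(round(piston_directivity(3.0, 0.0), 3))  # 1.0 on axis
```

For the inclined annular exit, the same summation runs over the tilted annulus instead of a disc, which is what skews the principal lobe and redistributes the up/down radiated power.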
Loss of urban forest canopy and the related effects on soundscape and human directed attention
NASA Astrophysics Data System (ADS)
Laverne, Robert James Paul
The specific questions addressed in this research are: Will the loss of trees in residential neighborhoods result in a change to the local soundscape? The investigation of this question leads to a related inquiry: Do the sounds of the environment in which a person is present affect their directed attention? An invasive insect pest, the Emerald Ash Borer (Agrilus planipennis), is killing millions of ash trees (genus Fraxinus) throughout North America. As the loss of tree canopy occurs, urban ecosystems change (including higher summer temperatures, more stormwater runoff, and poorer air quality) causing associated changes to human physical and mental health. Previous studies suggest that conditions in urban environments can result in chronic stress in humans and fatigue to directed attention, which is the ability to focus on tasks and to pay attention. Access to nature in cities can help refresh directed attention. The sights and sounds associated with parks, open spaces, and trees can serve as beneficial counterbalances to the irritating conditions associated with cities. This research examines changes to the quantity and quality of sounds in Arlington Heights, Illinois. A series of before-and-after sound recordings were gathered as trees died and were removed between 2013 and 2015. Comparison of recordings using the Raven sound analysis program revealed significant differences in some, but not all measures of sound attributes as tree canopy decreased. In general, more human-produced mechanical sounds (anthrophony) and fewer sounds associated with weather (geophony) were detected. Changes in sounds associated with animals (biophony) varied seasonally. Monitoring changes in the proportions of anthrophony, biophony and geophony can provide insight into changes in biodiversity, environmental health, and quality of life for humans.
Before-tree-removal and after-tree-removal sound recordings served as the independent variable for randomly-assigned human volunteers as they performed the Stroop Test and the Necker Cube Pattern Control test to measure directed attention. The sound treatments were not found to have significant effects on the directed attention test scores. Future research is needed to investigate the characteristics of urban soundscapes that are detrimental or potentially conducive to human cognitive functioning.
Amplitude and Wavelength Measurement of Sound Waves in Free Space using a Sound Wave Phase Meter
NASA Astrophysics Data System (ADS)
Ham, Sounggil; Lee, Kiwon
2018-05-01
We developed a sound wave phase meter (SWPM) and measured the amplitude and wavelength of sound waves in free space. The SWPM consists of two parallel metal plates, where the front plate was operated as a diaphragm. An aluminum perforated plate was additionally installed in front of the diaphragm, and the same signal as that applied to the sound source was applied to the perforated plate. The SWPM measures both the sound wave signal due to the diaphragm vibration and the induction signal due to the electric field of the aluminum perforated plate. Therefore, the two measurement signals interfere with each other due to the phase difference according to the distance between the sound source and the SWPM, and the amplitude of the composite signal that is output as a result is periodically changed. We obtained the wavelength of the sound wave from this periodic amplitude change measured in the free space and compared it with the theoretically calculated values.
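The periodic amplitude change described above is two-signal interference: the acoustic signal accumulates phase with distance while the induction signal does not, so minima of the composite amplitude recur once per wavelength. A sketch with assumed unit amplitudes:

```python
import numpy as np

def composite_amplitude(d, wavelength, a_sound=1.0, a_induction=1.0):
    """|acoustic signal (phase grows with distance d) + fixed induction
    signal|; minima recur once per wavelength."""
    phase = 2 * np.pi * d / wavelength
    return np.abs(a_sound * np.exp(1j * phase) + a_induction)

d = np.linspace(0.0, 1.0, 2001)                 # source-meter distance, m
amp = composite_amplitude(d, wavelength=0.343)  # ~1 kHz tone in air
# Spacing of successive amplitude minima recovers the wavelength
is_min = (amp[1:-1] < amp[:-2]) & (amp[1:-1] < amp[2:])
minima = d[1:-1][is_min]
print(round(float(np.diff(minima).mean()), 3))  # 0.343
```

In the instrument, d is swept mechanically and the minima spacing is read off the measured amplitude trace rather than a synthetic one.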
Khajenasiri, Farahnaz; Zamanian, Alireza; Zamanian, Zahra
2016-03-01
Sound is among the significant environmental factors for people's health; it plays an important role in both physical and psychological injury, and it also affects individuals' performance and productivity. The aim of this study was to determine the effect of exposure to high noise levels on performance and the rate of error in manual activities. This was an interventional study conducted on 50 students at Shiraz University of Medical Sciences (25 males and 25 females), in which each person served as his or her own control to assess the effect of noise on performance at sound levels of 70, 90, and 110 dB, varying the physical features and conditions of the sound source, and applying the Two-Arm Coordination Test. The data were analyzed using SPSS version 16. Repeated measurements were used to compare the length of performance as well as the errors measured in the test. Based on the results, we found a direct and significant association between the sound level and the length of performance. Moreover, the participants' performance differed significantly across sound levels (at 110 dB as opposed to 70 and 90 dB, p < 0.05 and p < 0.001, respectively). This study found that a sound level of 110 dB had an important effect on individuals' performance, i.e., performance decreased.
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig
2016-01-01
In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with other movement characteristics than a model characterized by abrupt variation in amplitude and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3–4 children were simultaneously tracked and sonified, producing 3–4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way.
We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data.
Three-dimensional beam pattern of regular sperm whale clicks confirms bent-horn hypothesis
NASA Astrophysics Data System (ADS)
Zimmer, Walter M. X.; Tyack, Peter L.; Johnson, Mark P.; Madsen, Peter T.
2005-03-01
The three-dimensional beam pattern of a sperm whale (Physeter macrocephalus) tagged in the Ligurian Sea was derived using data on regular clicks from the tag and from hydrophones towed behind a ship circling the tagged whale. The tag defined the orientation of the whale, while sightings and beamformer data were used to locate the whale with respect to the ship. The existence of a narrow, forward-directed P1 beam with source levels exceeding 210 dB peak re 1 μPa at 1 m is confirmed. A modeled forward-beam pattern that matches clicks >20° off-axis predicts a directivity index of 26.7 dB and source levels of up to 229 dB peak re 1 μPa at 1 m. A broader backward-directed beam is produced by the P0 pulse with source levels near 200 dB peak re 1 μPa at 1 m and a directivity index of 7.4 dB. A low-frequency component with source levels near 190 dB peak re 1 μPa at 1 m is generated at the onset of the P0 pulse by air resonance. The results support the bent-horn model of sound production in sperm whales. While the sperm whale nose appears primarily adapted to produce an intense forward-directed sonar signal, less-directional click components convey information to conspecifics and give rise to echoes from the seafloor and the surface, which may be useful for orientation during dives.
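A directivity index is obtained from a measured beam pattern by integrating the normalized intensity over the sphere, DI = 10 log10(4π / ∫ b dΩ). A sketch for an axisymmetric pattern (the Gaussian beam below is illustrative, not the whale's measured pattern):

```python
import numpy as np

def directivity_index(beam, n=2000):
    """DI = 10 log10(4 pi / integral of b(theta) over the sphere) for an
    axisymmetric intensity pattern b normalized to 1 on axis."""
    theta = (np.arange(n) + 0.5) * np.pi / n   # midpoint grid on [0, pi]
    omega = 2 * np.pi * np.sum(beam(theta) * np.sin(theta)) * (np.pi / n)
    return 10 * np.log10(4 * np.pi / omega)

omni = directivity_index(lambda t: np.ones_like(t))             # ~0 dB
narrow = directivity_index(lambda t: np.exp(-(t / 0.17) ** 2))  # narrow beam
print(round(narrow, 1))  # a high DI, in the spirit of the P1 beam's 26.7 dB
```

The contrast between a high forward DI and a much lower backward DI is exactly what separates the P1 sonar beam from the less directional P0 component.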
Numerical method to compute acoustic scattering effect of a moving source.
Song, Hao; Yi, Mingxu; Huang, Jun; Pan, Yalin; Liu, Dawei
2016-01-01
In this paper, the aerodynamic characteristics of a ducted tail rotor in hover are numerically studied using a CFD method. An analytical time-domain formulation based on the Ffowcs Williams-Hawkings (FW-H) equation is derived for the prediction of the acoustic velocity field and used as a Neumann boundary condition on a rigid scattering surface. In order to predict the aerodynamic noise, a hybrid method combining computational aeroacoustics with an acoustic thin-body boundary element method has been proposed. The aerodynamic results and the calculated sound pressure levels (SPLs) are compared with a known method for validation. Simulation results show that the duct can change the value of the SPLs and the sound directivity. Compared with the isolated tail rotor, the SPLs of the ducted tail rotor are smaller at certain azimuths.
NASA Technical Reports Server (NTRS)
Meredith, R. W.; Becher, J.
1981-01-01
Parts were fabricated for the acoustic ground impedance meter and the instrument was tested. A rubber hose was used to connect the resonator neck to the chamber in order to suppress vibration from the volume velocity source which caused chatter. An analog to digital converter was successfully hardwired to the computer detection system. The cooling system for the resonant tube was modified to use liquid nitrogen cooling. This produced the required temperature for the tube, but the temperature gradients within each of the four tube sections reached unacceptable levels. Final measurements of the deexcitation of nitrogen by water vapor indicate that the responsible physical process is not the direct vibration-translation energy transfer, but is a vibration-vibration energy transfer.
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air.
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being "targeted." They did not respond when hearing another group member's cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals.
Exploring positive hospital ward soundscape interventions.
Mackrill, J; Jennings, P; Cain, R
2014-11-01
Sound is often considered as a negative aspect of an environment that needs mitigating, particularly in hospitals. It is worthwhile however, to consider how subjective responses to hospital sounds can be made more positive. The authors identified natural sound, steady state sound and written sound source information as having the potential to do this. Listening evaluations were conducted with 24 participants who rated their emotional (Relaxation) and cognitive (Interest and Understanding) response to a variety of hospital ward soundscape clips across these three interventions. A repeated measures ANOVA revealed that the 'Relaxation' response was significantly affected by the interventions (η² = 0.05, p = 0.001), with natural sound producing a 10.1% more positive response. Most interestingly, written sound source information produced a 4.7% positive change in response. The authors conclude that exploring different ways to improve the sounds of a hospital offers subjective benefits that move beyond sound level reduction. This is an area for future work to focus upon in an effort to achieve more positively experienced hospital soundscapes and environments. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Shock waves and the Ffowcs Williams-Hawkings equation
NASA Technical Reports Server (NTRS)
Isom, Morris P.; Yu, Yung H.
1991-01-01
The expansion of the double divergence of the generalized Lighthill stress tensor, which is the basis of the concept of the role played by shock and contact discontinuities as sources of dipole and monopole sound, is presently applied to the simplest transonic flows: (1) a fixed wing in steady motion, for which there is no sound field, and (2) a hovering helicopter blade that produces a sound field. Attention is given to the contribution of the shock to sound from the viewpoint of energy conservation; the shock emerges as the source of only the quantity of entropy.
NASA Astrophysics Data System (ADS)
Huang, Xianfeng; Meng, Yao; Huang, Riming
2017-10-01
This paper describes a theoretical method for predicting the improvement in impact sound insulation provided by a floating floor with a resilient interlayer. A statistical energy analysis (SEA) model, well suited to calculating floor impact sound, is set up to calculate the reduction in impact sound pressure level in the downstairs room. The sound transmission paths, both direct and flanking, are analyzed to find the dominant one, and the factors that affect impact sound reduction for a floating floor are explored. The impact sound level in the downstairs room is then determined, and comparisons between predicted and measured data are conducted. The results indicate that, for impact sound transmission across a floating floor, the flanking paths contribute little to the overall sound level in the downstairs room, and a floating floor with a low-stiffness interlayer exhibits favorable sound insulation on the direct path. The SEA approach, which is experimentally verified, applies to floating floors with resilient interlayers and provides guidance for sound insulation design.
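The path breakdown above rests on energetic addition of levels: paths combine as 10 log10 Σ 10^(Li/10), so a flanking path 20 dB below the direct path shifts the total by only about 0.1 dB. A sketch with illustrative numbers:

```python
import math

def combine_levels(levels_db):
    """Energetic sum of sound pressure levels from separate
    transmission paths (direct + flanking)."""
    return 10.0 * math.log10(sum(10 ** (l / 10.0) for l in levels_db))

# Dominant direct path plus two much weaker flanking paths (made-up values)
print(round(combine_levels([60.0]), 1))              # 60.0
print(round(combine_levels([60.0, 40.0, 40.0]), 1))  # 60.1
```

This is why the paper can treat the direct path as controlling the downstairs level once the flanking contributions are shown to be far below it.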
Non-contact ultrasonic defect imaging in composites
NASA Astrophysics Data System (ADS)
Tenoudji, F. Cohen; Citerne, J. M.; Dutilleul, H.; Busquet, D.
2016-02-01
In situations where conventional NDT ultrasonic techniques, which require immersion of the part under inspection or its contact with the transducers, cannot be used, in-air investigation presents an alternative. The huge impedance mismatch between the part material and air (a transmission loss on the order of 80 dB for a thin metallic plate) means dealing with very small signals and unfavorable signal-to-noise ratios. The approach adopted here uses the crack of a spark generated by an induction coil as a sound source and an electrostatic polyethylene membrane microphone as a receiver [1]. The advantage of this source is that the spark power is high (several kilowatts) and is directly coupled to air during the energy release. In some difficult situations, an elliptical mirror is used to concentrate the sound beam power on the surface of the part [2,3]. Stability and reproducibility of the sound generated by the spark, which are necessary to perform quantitative evaluations, are achieved in our experiment. This also permits an increase of the signal-to-noise ratio by signal accumulation. The sound pulse duration of a few microseconds allows operating in pulse-echo mode in some circumstances. The large bandwidth of the source (several hundred kilohertz) and of the microphone (above 100 kHz) allows the flexibility to address different kinds of materials. The technique allows an easy, in-air, non-contact inspection of structural composite parts, with pulsed waves, with an excellent signal-to-noise ratio. An X-Y ultrasonic scanning system for material inspection using this technique has been realized. Results obtained in transmission and reflection are presented. Defects in carbon composite plates and in honeycomb are imaged in transmission. Echographic measurements show that defect detection can be performed in thin plates using Lamb wave propagation when only one-sided inspection of the part is possible.
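The quoted ~80 dB mismatch loss can be reproduced, to order of magnitude, by treating the thin plate as two independent air-metal interfaces with normal-incidence intensity transmission 4 z1 z2 / (z1 + z2)². This ignores plate resonances and mass-law behavior, so it is only a rough illustration; the impedance values are textbook figures, not from the paper.

```python
import math

def interface_tl_db(z1, z2):
    """Normal-incidence intensity transmission across one impedance step:
    t = 4 z1 z2 / (z1 + z2)^2, returned as a loss in dB."""
    t = 4.0 * z1 * z2 / (z1 + z2) ** 2
    return -10.0 * math.log10(t)

z_air = 415.0        # characteristic impedance of air, rayl
z_metal = 17.3e6     # aluminium, rayl (illustrative value)
# Crude estimate: air->metal and metal->air treated as independent steps
print(round(2 * interface_tl_db(z_air, z_metal)))  # 80
```

The ~40 dB lost at each interface is what motivates both the kilowatt-class spark source and the signal accumulation described above.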
NASA Astrophysics Data System (ADS)
Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.
2017-12-01
The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on combining the vortex-shedding-resolved flow available from an Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution, generated via stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment of broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, sound propagation is computed by solving the Acoustic Perturbation Equations (APE-4) in the time domain. Results of the method's application to two aerofoil benchmark cases, with sharp and blunt trailing edges, are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and incorporated. Encouraging results have been obtained for the benchmark test cases, and the new technique is believed to be applicable to other airframe noise problems where both tonal and broadband parts are important.
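The fine-scale stochastic ingredient of FRPM-type methods, generating a spatially correlated random field by filtering white noise with a Gaussian kernel whose width is set by the turbulence length scale, can be sketched in one dimension. This is an illustrative reduction, not the method of the paper; the grid size, spacing, and length scale below are arbitrary choices:

```python
import numpy as np

# Gaussian-filtered white noise: a spatially correlated fluctuation field u'(x)
rng = np.random.default_rng(3)
n, dx, l_t = 512, 0.01, 0.05        # grid points, spacing (m), turbulence length scale (m)
white = rng.standard_normal(n)
x = (np.arange(n) - n // 2) * dx
kernel = np.exp(-np.pi * x**2 / (2 * l_t**2))   # Gaussian filter kernel
kernel /= np.sqrt(np.sum(kernel**2) * dx)       # unit-variance normalisation
u = np.convolve(white * np.sqrt(dx), kernel, mode="same")
print(f"field std: {u.std():.2f}")
```

The normalisation makes the filtered field approximately unit-variance with a Gaussian autocorrelation; in an actual FRPM implementation the kernel width and local variance would be tied to the URANS turbulence statistics.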
Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David
2012-10-01
The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first, experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second, numerical step, the experimental data are time-reversed and used as input for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopole sources, either monochromatic or with narrow- or wide-band frequency content, are considered first. The source position is estimated with an error smaller than the wavelength. An application to a dipole sound source shows that this type of source is also very satisfactorily characterized.
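The two-step principle, record at an array, then time-reverse and numerically back-propagate, can be illustrated in a simplified setting. The sketch below ignores the mean flow and replaces the linearized-Euler solver with free-field delays, so it shows only the refocusing idea, not the method of the paper; the source position and array geometry are invented for illustration:

```python
import numpy as np

c = 343.0          # speed of sound (m/s)
fs = 50_000        # sampling rate (Hz)
src = np.array([0.3, 0.8])                              # true (hypothetical) source
mics = np.stack([np.linspace(-0.5, 0.5, 16), np.zeros(16)], axis=1)

# Step 1 (experiment): record a short pulse at each microphone of the array.
t = np.arange(0, 0.01, 1 / fs)
pulse = lambda tt: np.exp(-((tt - 1e-3) * 4e3) ** 2)    # Gaussian pulse
recs = np.array([pulse(t - np.linalg.norm(m - src) / c) for m in mics])

# Step 2 (numerical): time-reverse the records and back-propagate each one
# to every grid point; the acoustic energy refocuses near the source.
rev = recs[:, ::-1]
xs = np.linspace(-0.6, 0.6, 61)
ys = np.linspace(0.1, 1.2, 56)
energy = np.zeros((ys.size, xs.size))
for j, yy in enumerate(ys):
    for i, xx in enumerate(xs):
        p = np.zeros_like(t)
        for m, r in zip(mics, rev):
            d = int(round(np.linalg.norm(m - [xx, yy]) / c * fs))
            p[d:] += r[:t.size - d]                     # delayed re-emission
        energy[j, i] = np.sum(p ** 2)

j, i = np.unravel_index(energy.argmax(), energy.shape)
print(f"estimated source: ({xs[i]:.2f}, {ys[j]:.2f})")
```

With a mean flow present, the free-field delays would be replaced by a back-propagation through the linearized Euler equations, which is what the paper's numerical step does.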
Different categories of living and non-living sound-sources activate distinct cortical networks
Engel, Lauren R.; Frum, Chris; Puce, Aina; Walker, Nathan A.; Lewis, James W.
2009-01-01
With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places—categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and subcortical regions previously implicated in observation-execution matching, consistent with “embodied” and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. 
Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception. PMID:19465134
Directionality of dog vocalizations
NASA Astrophysics Data System (ADS)
Frommolt, Karl-Heinz; Gebler, Alban
2004-07-01
The directionality patterns of sound emission in domestic dogs were measured in an anechoic environment using a microphone array. Mainly long-distance signals from four dogs were investigated. The radiation pattern of the signals differed clearly from an omnidirectional one, with average differences in sound-pressure level between the frontal and rear positions of 3-7 dB, depending on the individual. Frequency dependence of directionality was shown for the range from 250 to 3200 Hz. The results indicate that when studying acoustic communication in mammals, more attention should be paid to the directionality pattern of sound emission.
Tinnitus retraining therapy: a different view on tinnitus.
Jastreboff, Pawel J; Jastreboff, Margaret M
2006-01-01
Tinnitus retraining therapy (TRT) is a method for treating tinnitus and decreased sound tolerance, based on the neurophysiological model of tinnitus. This model postulates involvement of the limbic and autonomic nervous systems in all cases of clinically significant tinnitus and points out the importance of both conscious and subconscious connections, which are governed by principles of conditioned reflexes. The treatments for tinnitus and misophonia are based on the concept of extinction of these reflexes, labeled as habituation. TRT aims at inducing changes in the mechanisms responsible for transferring signal (i.e., tinnitus, or external sound in the case of misophonia) from the auditory system to the limbic and autonomic nervous systems, and through this, remove signal-induced reactions without attempting to directly attenuate the tinnitus source or tinnitus/misophonia-evoked reactions. As such, TRT is effective for any type of tinnitus regardless of its etiology. TRT consists of: (1) counseling based on the neurophysiological model of tinnitus, and (2) sound therapy (with or without instrumentation). The main role of counseling is to reclassify tinnitus into the category of neutral stimuli. The role of sound therapy is to decrease the strength of the tinnitus signal. It is crucial to assess and treat tinnitus, decreased sound tolerance, and hearing loss simultaneously. Results from various groups have shown that TRT can be an effective method of treatment. Copyright (c) 2006 S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
Conner, David A.; Page, Juliet A.
2002-01-01
To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low-noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound propagates through the atmosphere and accumulates at receiver locations on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic, time-based manner. The sound signals at the receiver locations may then be analyzed to obtain single-event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time-history data over a ground mesh for the creation of single-event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific-frequency sound pressure levels and phase). A PC-executable version of RNM is publicly available and has been adopted by a number of organizations for environmental impact assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model, and the output results.
Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.
Ueno, Tomoka; Shimada, Yasushi; Matin, Khairul; Zhou, Yuan; Wada, Ikumi; Sadr, Alireza; Sumi, Yasunori; Tagami, Junji
2016-01-01
The aim of this study was to evaluate the signal intensity and signal attenuation of swept-source optical coherence tomography (SS-OCT) for dental caries in relation to variations in mineral density. SS-OCT observations were performed on artificial enamel and dentin demineralization and on natural caries. The artificial caries model on enamel and dentin surfaces was created using Streptococcus mutans biofilms incubated in an oral biofilm reactor. The lesions were centrally cross-sectioned, and SS-OCT scans were obtained in two directions to construct a three-dimensional data set: from the lesion surface (sagittal scan) and parallel to the lesion surface (horizontal scan). The integrated signal up to 200 μm in depth (IS200) and the attenuation coefficient (μ) of the enamel and dentin lesions were calculated from the SS-OCT signal in horizontal scans at five locations of lesion depth. The values were compared with the mineral density obtained from transverse microradiography. Both enamel and dentin demineralization showed significantly higher IS200 and μ than the sound tooth substrate in the sagittal scan. Enamel demineralization showed significantly higher IS200 than sound enamel, even at low levels of demineralization. In demineralized dentin, the μ from the horizontal scan showed a consistent downward trend compared with that of sound dentin. PMID:27704033
Joint seismic-infrasonic processing of recordings from a repeating source of atmospheric explosions.
Gibbons, Steven J; Ringdal, Frode; Kvaerna, Tormod
2007-11-01
A database has been established of seismic and infrasonic recordings from more than 100 well-constrained surface explosions conducted by the Finnish military to destroy old ammunition. The recorded seismic signals are essentially identical and indicate that the variation in source location and magnitude is negligible. In contrast, the infrasonic arrivals on both seismic and infrasound sensors exhibit significant variation in the number of detected phases, phase travel times, and phase amplitudes, which is attributable to atmospheric factors. This data set provides an excellent basis for studies of sound propagation, infrasound array detection, and direction estimation.
A lifting-surface theory solution for the diffraction of internal sound sources by an engine nacelle
NASA Astrophysics Data System (ADS)
Martinez, R.
1986-07-01
Lifting-surface theory is used to solve the problem of diffraction by a rigid open-ended pipe of zero thickness and finite length, with application to predicting the acoustic insertion-loss performance of the encasing structure of a ducted propeller or turbofan. An axisymmetric situation is assumed, and the incident field is that due to a force applied directly to the fluid along the cylinder axis. A virtual-source distribution of unsteady dipoles is found whose integrated radial velocity component is set to cancel that of the incident field over the surface. The calculated virtual load is verified by checking whether its effect on the near-field input power at the actual source is consistent with the far-field power radiated by the system, a balance that is possible only if the no-flow-through boundary condition has been satisfied over the rigid pipe surface so that the velocity component of the acoustic intensity is zero there.
Passive Acoustic Thermometry Using Low-Frequency Deep Water Noise
2014-09-30
M. Fowler, S. Salo, Antarctic icebergs: A significant natural ocean sound source in the Southern Hemisphere. Geochem. Geophys. DOI: 10.1002... J. Tournadre, F. Girard-Ardhuin, B. Legrésy, Antarctic icebergs distributions, 2002-2010. J. Geophys. Res.: Oceans 117, C05004 (2012). ...surface in the Polar Regions (e.g., due to loud iceberg cracking events with levels up to 245 dB re 1 μPa at 1 m) can efficiently couple directly to the
ERIC Educational Resources Information Center
Keller, Peter; Stevens, Catherine
2004-01-01
This article addresses the learnability of auditory icons, that is, environmental sounds that refer either directly or indirectly to meaningful events. Direct relations use the sound made by the target event whereas indirect relations substitute a surrogate for the target. Across 3 experiments, different indirect relations (ecological, in which…
Sex differences present in auditory looming perception, absent in auditory recession
NASA Astrophysics Data System (ADS)
Neuhoff, John G.; Seifritz, Erich
2005-04-01
When predicting the arrival time of an approaching sound source, listeners typically exhibit an anticipatory bias that affords a margin of safety in dealing with looming objects. The looming bias has been demonstrated behaviorally in the laboratory and in the field (Neuhoff 1998, 2001), neurally in fMRI studies (Seifritz et al., 2002), and comparatively in non-human primates (Ghazanfar, Neuhoff, and Logothetis, 2002). In the current work, male and female listeners were presented with three-dimensional looming sound sources and asked to press a button when the source was at the point of closest approach. Females exhibited a significantly greater anticipatory bias than males. Next, listeners were presented with sounds that either approached or receded and then stopped at three different terminal distances. Consistent with the time-to-arrival judgments, female terminal distance judgments for looming sources were significantly closer than male judgments. However, there was no difference between male and female terminal distance judgments for receding sounds. Taken together with the converging behavioral, neural, and comparative evidence, the current results illustrate the environmental salience of looming sounds and suggest that the anticipatory bias for auditory looming may have been shaped by evolution to provide a selective advantage in dealing with looming objects.
Understanding auditory distance estimation by humpback whales: a computational approach.
Mercado, E; Green, S R; Schneider, J N
2008-02-01
Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
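As a rough illustration of the single-layer perceptron approach, the toy sketch below trains a logistic unit on synthetic spectra in which high frequencies attenuate faster with range. The attenuation model, band layout, and distances are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(100, 2000, 32)     # Hz, analysis bands

def received_spectrum(distance_km):
    """Toy propagation model: higher frequencies attenuate faster with range."""
    alpha = 5e-3 * (freqs / 1000.0) ** 2        # made-up loss coefficient per km
    spec = np.exp(-alpha * distance_km)         # flat source spectrum assumed
    return spec + 0.02 * rng.standard_normal(freqs.size)

# Labelled training data: class 0 = 'near' (~1 km), class 1 = 'far' (~10 km)
X = np.array([received_spectrum(d) for d in [1.0] * 200 + [10.0] * 200])
y = np.array([0] * 200 + [1] * 200)
X = X - X.mean(axis=0)                          # centre the features

# Single-layer perceptron (one logistic unit) trained by gradient descent
w, b = np.zeros(freqs.size), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 5.0 * X.T @ (p - y) / len(y)
    b -= 5.0 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == y)
print(f"training accuracy: {acc:.2f}")
```

The study's multi-layer perceptron adds a hidden layer to the same scheme, which is what allows finer-grained distance classes to be separated.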
Empirical source noise prediction method with application to subsonic coaxial jet mixing noise
NASA Technical Reports Server (NTRS)
Zorumski, W. E.; Weir, D. S.
1982-01-01
A general empirical method, developed for source noise predictions, uses tensor splines to represent the dependence of the acoustic field on frequency and direction and Taylor's series to represent the dependence on source state parameters. The method is applied to prediction of mixing noise from subsonic circular and coaxial jets. A noise data base of 1/3-octave-band sound pressure levels (SPL's) from 540 tests was gathered from three countries: United States, United Kingdom, and France. The SPL's depend on seven variables: frequency, polar direction angle, and five source state parameters: inner and outer nozzle pressure ratios, inner and outer stream total temperatures, and nozzle area ratio. A least-squares seven-dimensional curve fit defines a table of constants which is used for the prediction method. The resulting prediction has a mean error of 0 dB and a standard deviation of 1.2 dB. The prediction method is used to search for a coaxial jet which has the greatest coaxial noise benefit as compared with an equivalent single jet. It is found that benefits of about 6 dB are possible.
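The spirit of the least-squares spline fit over frequency and direction can be sketched with SciPy's smoothing spline on a synthetic directivity table. The data, smoothing factor, and grid below are hypothetical; the actual method fits tensor splines and Taylor series over all seven variables of the 540-test database:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic 1/3-octave SPL directivity table (hypothetical values)
bands = np.arange(20.0)               # 1/3-octave band index
theta = np.linspace(30.0, 150.0, 13)  # polar direction angle (deg)
B, T = np.meshgrid(bands, theta, indexing="ij")
spl = 90.0 - 0.1 * (B - 8.0) ** 2 + 5.0 * np.cos(np.radians(T))  # smooth 'data'
spl += 0.3 * np.random.default_rng(2).standard_normal(spl.shape) # noise

# Least-squares smoothing spline over (frequency band, angle); the smoothing
# factor s plays the role of the least-squares fit tolerance
fit = RectBivariateSpline(bands, theta, spl, kx=3, ky=3, s=spl.size * 0.09)
resid = spl - fit(bands, theta)
print(f"fit standard deviation: {resid.std():.2f} dB")
```

As in the paper, the quality of such a fit is summarized by the standard deviation of the residuals between the fitted surface and the measured levels.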
Scaling of plane-wave functions in statistically optimized near-field acoustic holography.
Hald, Jørgen
2014-11-01
Statistically Optimized Near-field Acoustic Holography (SONAH) is a patch holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point, a multiplication must then be performed with two transfer vectors--one to get the sound pressure and the other to get the particle velocity. For SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent-wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane-wave functions that takes this amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.
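The regularized inverse step at the heart of SONAH-like methods, fitting measured pressures with a dictionary of propagating and evanescent plane waves under Tikhonov regularization and then projecting to a new point, can be sketched as follows. The geometry, frequency, regularization parameter, and the synthetic monopole "measurement" are illustrative choices, not the paper's formulation:

```python
import numpy as np

k = 2 * np.pi * 1000 / 343              # acoustic wavenumber at 1 kHz (rad/m)
kx = np.linspace(-3 * k, 3 * k, 61)     # propagating (|kx| <= k) and evanescent waves
kz = np.sqrt(k**2 - kx**2 + 0j)         # imaginary kz => decay away from the source plane

def basis(x, z):
    """Plane-wave functions at point (x, z); assumed source plane at z = 0."""
    return np.exp(1j * (kx * x + kz * z))

# Synthetic 'measured' pressures on a line at z = 5 cm from a monopole at the origin
xm = np.linspace(-0.3, 0.3, 41)
zm = 0.05
p_meas = np.exp(1j * k * np.hypot(xm, zm)) / np.hypot(xm, zm)

# Regularized inverse problem for the plane-wave amplitudes c
A = np.array([basis(x, zm) for x in xm])          # (mics x waves)
lam = 1e-4 * np.linalg.norm(A) ** 2               # Tikhonov parameter (ad hoc choice)
c = np.linalg.solve(A.conj().T @ A + lam * np.eye(A.shape[1]),
                    A.conj().T @ p_meas)

# Project to a point between microphones and compare with the true field
x0 = 0.1
p_rec = basis(x0, zm) @ c
p_true = np.exp(1j * k * np.hypot(x0, zm)) / np.hypot(x0, zm)
print(abs(p_rec - p_true) / abs(p_true))
```

The paper's contribution concerns how the evanescent part of such a dictionary should be scaled when the same regularized coefficients are reused to predict particle velocity, whose transfer vector amplifies the evanescent components.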
Cognitive load of navigating without vision when guided by virtual sound versus spatial language.
Klatzky, Roberta L; Marston, James R; Giudice, Nicholas A; Golledge, Reginald G; Loomis, Jack M
2006-12-01
A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language ("left," "right," or "straight") or virtual sound (i.e., the perceived azimuth of the sound indicated the target direction). The authors hypothesized that virtual sound, being processed at direct perceptual levels, would have lower load than even simple language commands, which require cognitive mediation. As predicted, whereas the guidance modes did not differ significantly in the no-load condition, participants showed shorter distance traveled and less time to complete a path when performing the N-back task while navigating with virtual sound as guidance. Virtual sound also produced better N-back performance than spatial language. By indicating the superiority of virtual sound for guidance when cognitive load is present, as is characteristic of everyday navigation, these results have implications for guidance systems for the visually impaired and others.
Lercher, Peter; De Coensel, Bert; Dekonink, Luc; Botteldooren, Dick
2017-01-01
Sufficient data document the prevalence of sound exposure from mixed traffic sources in many nations. Furthermore, consideration of the potential effects of combined sound exposure is required in legal procedures such as environmental health impact assessments. Nevertheless, current practice still uses single exposure-response functions. It is silently assumed that those standard exposure-response curves also accommodate mixed exposures, although some evidence from experimental and field studies casts doubt on this practice. The ALPNAP study population (N = 1641) contains sufficient subgroups with combinations of rail-highway, highway-main road and rail-highway-main road sound exposure. In this paper we apply several approaches suggested in the literature to investigate exposure-response curves and their major determinants in the case of exposure to multiple traffic sources. High/moderate annoyance and full-scale mean annoyance served as outcomes. The results show several limitations of the current approaches. Even facing the inherent methodological limitations (energy-equivalent summation of sound, rating of overall annoyance), the consideration of main contextual factors jointly occurring with the sources (such as vibration and air pollution) or of coping activities and judgments of the wider-area soundscape increases the explained variance from up to 8% (bivariate) and up to 15% (with base adjustments) to up to 55% (full contextual model). The added predictors vary significantly depending on the source combination (e.g., significant vibration effects for main road/railway, but not for highway exposure). Although no significant interactions were found, the observed additive effects are of public health importance. Especially in the case of a three-source exposure situation, the overall annoyance is already high at lower levels, and the contribution of the acoustic indicators is small compared with the non-acoustic and contextual predictors.
Noise mapping needs to go down to levels of 40 dBA Lden to ensure the protection of quiet areas and to prohibit the silent "filling up" of these areas with new sound sources. Eventually, to better predict annoyance in the exposure range between 40 and 60 dBA and to support the protection of quiet areas in city and rural planning, sound indicators need to be oriented toward the noticeability of sound, and other traffic-related by-products (air quality, vibration, coping strain) need to be considered in future studies and environmental impact assessments. PMID:28632198
NASA Astrophysics Data System (ADS)
Mironov, M. A.
2011-11-01
A method of allowing for the spatial sound field structure in designing the sound-absorbing structures for turbojet aircraft engine ducts is proposed. The acoustic impedance of a duct should be chosen so as to prevent the reflection of the primary sound field, which is generated by the sound source in the absence of the duct, from the duct walls.
Quantifying the influence of flow asymmetries on glottal sound sources in speech
NASA Astrophysics Data System (ADS)
Erath, Byron; Plesniak, Michael
2008-11-01
Human speech is made possible by the air flow interaction with the vocal folds. During phonation, asymmetries in the glottal flow field may arise from flow phenomena (e.g. the Coanda effect) as well as from pathological vocal fold motion (e.g. unilateral paralysis). In this study, the effects of flow asymmetries on glottal sound sources were investigated. Dynamically-programmable 7.5 times life-size vocal fold models with 2 degrees-of-freedom (linear and rotational) were constructed to provide a first-order approximation of vocal fold motion. Important parameters (Reynolds, Strouhal, and Euler numbers) were scaled to physiological values. Normal and abnormal vocal fold motions were synthesized, and the velocity field and instantaneous transglottal pressure drop were measured. Variability in the glottal jet trajectory necessitated sorting of the data according to the resulting flow configuration. The dipole sound source is related to the transglottal pressure drop via acoustic analogies. Variations in the transglottal pressure drop (and subsequently the dipole sound source) arising from flow asymmetries are discussed.
Psychophysical evidence for auditory motion parallax.
Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz
2018-04-17
Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
Auditory event perception: the source-perception loop for posture in human gait.
Pastore, Richard E; Flint, Jesse D; Gaston, Jeremy R; Solomon, Matthew J
2008-01-01
There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright-stooped) of the walker who generated acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies employed by listeners in making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.
Nonlinear theory of shocked sound propagation in a nearly choked duct flow
NASA Technical Reports Server (NTRS)
Myers, M. K.; Callegari, A. J.
1982-01-01
The development of shocks in the sound field propagating through a nearly choked duct flow is analyzed by extending a quasi-one dimensional theory. The theory is applied to the case in which sound is introduced into the flow by an acoustic source located in the vicinity of a near-sonic throat. Analytical solutions for the field are obtained which illustrate the essential features of the nonlinear interaction between sound and flow. Numerical results are presented covering ranges of variation of source strength, throat Mach number, and frequency. It is found that the development of shocks leads to appreciable attenuation of acoustic power transmitted upstream through the near-sonic flow. It is possible, for example, that the power loss in the fundamental harmonic can be as much as 90% of that introduced at the source.
Noise abatement in a pine plantation
R. E. Leonard; L. P. Herrington
1971-01-01
Observations on sound propagation were made in two red pine plantations. Measurements were taken of the attenuation of prerecorded frequencies at various distances from the sound source. Sound absorption was strongly frequency dependent, with peak absorption at 500 Hz.
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.
2003-01-01
Noise sources in high-speed jets were identified by directly correlating flow density fluctuation (cause) to far-field sound pressure fluctuation (effect). The experimental study was performed in a nozzle facility at the NASA Glenn Research Center in support of NASA's initiative to reduce the noise emitted by commercial airplanes. Previous efforts to use this correlation method had failed because the tools for measuring jet turbulence were intrusive. In the present experiment, a molecular Rayleigh-scattering technique was used that depends on laser light scattering by gas molecules in air. The technique allowed accurate measurement of air density fluctuations at different points in the plume. The study was conducted in shock-free, unheated jets of Mach numbers 0.95, 1.4, and 1.8. The turbulent motion, as evident from density fluctuation spectra, was remarkably similar in all three jets, whereas the noise sources were significantly different. The correlation study was conducted by keeping a microphone at a fixed location (at the peak noise emission angle of 30° to the jet axis and 50 nozzle diameters away) while moving the laser probe volume from point to point in the flow. The following figure shows maps of the nondimensional coherence value measured at different Strouhal frequencies (frequency × diameter/jet speed) in the supersonic Mach 1.8 and subsonic Mach 0.95 jets. The higher the coherence, the stronger the source.
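The cause-effect correlation underlying this study can be illustrated with a magnitude-squared coherence estimate between a stand-in "source" signal and a delayed, noise-contaminated "far-field" signal. The data here are purely synthetic; the real experiment correlates Rayleigh-scattering density measurements with the far-field microphone:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs = 10_000                   # sampling rate (Hz)
n = 2 ** 16

# Stand-in for the in-flow density fluctuation signal (white noise here)
rho = rng.standard_normal(n)

# Far-field pressure: delayed copy of the 'source' plus unrelated noise
delay = 25                    # propagation delay in samples
p = np.concatenate([np.zeros(delay), rho[:-delay]]) + 0.5 * rng.standard_normal(n)

# Welch-averaged magnitude-squared coherence spectrum
f, Cxy = coherence(rho, p, fs=fs, nperseg=1024)
print(f"mean coherence: {Cxy.mean():.2f}")
```

High coherence indicates frequencies at which the flow point contributes to the far-field sound; moving the probe point through the plume maps out the source region, as in the figure described above.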
NASA Astrophysics Data System (ADS)
Munuera Saura, Gregorio
The sound reduction index (R) of an insulating material is measured in specially designed enclosures according to the UNE EN ISO 140-1 standard. The construction of a testing room that complies with the standard requires a significant financial investment that too often cannot be afforded by general machinery manufacturers. Outdoor machines have to be designed considering the noise emission limits established in European Directive 2000/14, making the installation of insulating materials in machine enclosures necessary. Suppliers of insulating materials for industrial use do not normally provide the insulating properties of their products, which are often made from rock wool, polyethylene, polyurethane, synthetic rubber, etc. It is difficult to test these materials, which are characterized by low surface density, with a standardized room test procedure based on the measurement of sound pressure. The main problem is validating the measurement when the sound reduction index is relatively low, as is the case for the materials mentioned above. An alternative procedure based on the measurement of sound intensity is proposed in this doctoral thesis. The vector nature of this quantity allows just the flow of energy through a given surface to be measured, and the measurement can be validated with a high degree of accuracy by applying the criteria of the UNE EN ISO 9614 standard. A test box made of 6-mm-thick steel plate, incorporating a sound source driven by an amplified pink-noise signal, was designed for the tests. A piece of the material under test is placed facing the source. The flow of acoustic energy impinging on the piece is determined beforehand; the source is then switched on and the transmitted sound intensity is measured with a probe consisting of two microphones placed face to face. The difference between the two measurements directly provides the sound reduction index.
The main advantage of the described test procedure is that it can be carried out in a room of normal dimensions without specific acoustic conditioning. The vibroacoustic behaviour of the testing box and its influence on the measurements is another important aspect to consider. A modal analysis test was carried out on a steel plate similar to those enclosing the box, and a mathematical model based on Statistical Energy Analysis (SEA), implemented in the commercial software AUTOSEA2 LT, was developed to estimate the paths of energy transmission from the sound source to the measurement point of the sound intensity probe. Both general conclusions regarding the alternative measurement method and specific conclusions on the insulating capacity of the tested materials were obtained. As for the general conclusions: the limitations of intensity-probe measurements at low and high frequencies were established; measurements validated to an accuracy of 0.5 dB, according to the criteria set forth in UNE EN ISO 9614 Part 3, were obtained; and a new test procedure for the easy determination of the sound reduction index of insulating materials with low surface density was established. Finally, the devised procedure opens the way to future developments in vibroacoustics, such as the application of the reciprocity principle and the determination of the acoustic impedance of materials by sound intensity measurement techniques.
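The core arithmetic of the intensity-based procedure, subtracting the transmitted sound intensity level from the incident one, can be sketched as follows (a minimal illustration; the function names and example values are hypothetical, not taken from the thesis):

```python
import math

def intensity_level_db(intensity_w_m2, i_ref=1e-12):
    """Sound intensity level in dB re 1 pW/m^2."""
    return 10.0 * math.log10(intensity_w_m2 / i_ref)

def sound_reduction_index(incident_intensity, transmitted_intensity):
    """R = L_I,incident - L_I,transmitted, in dB.
    Both arguments are net energy flows (W/m^2) through the test surface."""
    return intensity_level_db(incident_intensity) - intensity_level_db(transmitted_intensity)

# Hypothetical example: 1e-4 W/m^2 impinging on the piece, 1e-6 W/m^2 transmitted
print(sound_reduction_index(1e-4, 1e-6))  # 20.0 dB
```

Because the reference intensity cancels in the subtraction, R depends only on the ratio of the two measured intensities, which is why the vector (directional) nature of intensity is what makes the method viable in an unconditioned room.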
Jiang, Tinglei; Long, Zhenyu; Ran, Xin; Zhao, Xue; Xu, Fei; Qiu, Fuyuan; Kanwal, Jagmeet S.
2016-01-01
Bats vocalize extensively within different social contexts. The type and extent of information conveyed via their vocalizations, and its perceptual significance, however, remain controversial and difficult to assess. Greater tube-nosed bats, Murina leucogaster, emit calls consisting of long rectangular broadband noise burst (rBNBl) syllables during aggression between males. To experimentally test the behavioral impact of these sounds on feeding, we deployed an approach and place-preference paradigm. Two food trays were placed on opposite sides of a specially constructed tent, within different acoustic microenvironments created by sound playback. Specifically, we tested whether the presence of rBNBl sounds at a food source effectively deters the approach of male bats in comparison to echolocation sounds and white noise. In each case, contrary to our expectation, males preferred to feed at the location where rBNBl sounds were present. We propose that the species-specific rBNBl provides contextual information, not present within non-communicative sounds, that facilitates approach towards a food source. PMID:27815241
Replacing the Orchestra? – The Discernibility of Sample Library and Live Orchestra Sounds
Wolf, Anna; Platz, Friedrich; Mons, Jan
2016-01-01
Recently, musical sounds from pre-recorded orchestra sample libraries (OSLs) have become indispensable in music production for the stage and the popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or from OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. The entire sample of listeners (N = 602) identified the correct sound source at an average rate of 72.5%, slightly exceeding Alan Turing's well-known upper threshold of 70% for a convincing simulated performance. However, while sound experts tended to identify the sound source correctly, participants with lower listening expertise, who resemble the majority of music consumers, achieved only 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons. PMID:27382932
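As a rough sanity check of the headline figure (not part of the study itself), one can ask how far 72.5% correct responses out of 602 listeners lie from the 70% threshold under a normal approximation to the binomial distribution:

```python
import math

def binomial_z(successes, n, p0):
    """z-statistic for an observed success rate against a null proportion p0
    (normal approximation to the binomial; crude but adequate for large n)."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

# 72.5% of 602 listeners vs. the 70% threshold mentioned above
z = binomial_z(round(0.725 * 602), 602, 0.70)
print(round(z, 2))
```

The resulting z-value is modest (well under 2), so the overall rate exceeds the 70% mark only slightly in statistical terms, consistent with the abstract's cautious wording.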
Clinical Validation of a Sound Processor Upgrade in Direct Acoustic Cochlear Implant Subjects
Kludt, Eugen; D’hondt, Christiane; Lenarz, Thomas; Maier, Hannes
2017-01-01
Objective: The objectives of the investigation were to evaluate the effect of a sound processor upgrade on the speech reception threshold in noise and to collect long-term safety and efficacy data after 2½ to 5 years of device use in direct acoustic cochlear implant (DACI) recipients. Study Design: The study was designed as a mono-centric, prospective clinical trial. Setting: Tertiary referral center. Patients: Fifteen patients implanted with a direct acoustic cochlear implant. Intervention: Upgrade to a newer generation of sound processor. Main Outcome Measures: Speech recognition tests in quiet and in noise, pure-tone thresholds, and subject-reported outcome measures. Results: Speech recognition in quiet and in noise was superior after the sound processor upgrade and remained stable after long-term use of the direct acoustic cochlear implant. The bone conduction thresholds did not decrease significantly after long-term high-level stimulation. Conclusions: The new sound processor for the DACI system provides significant benefits to DACI users for speech recognition in both quiet and noise. In particular, the noise program, which uses directional microphones (Zoom), makes conversations in noisy environments much less difficult for DACI patients. Furthermore, the study confirms that the benefits of the sound processor upgrade are available to DACI recipients even after several years of experience with a legacy sound processor. Finally, our study demonstrates that the DACI system is a safe and effective long-term therapy. PMID:28406848
Hemispherical breathing mode speaker using a dielectric elastomer actuator.
Hosoya, Naoki; Baba, Shun; Maeda, Shingo
2015-10-01
Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple drivers on a single chassis, is typically used instead. Such a configuration is not a point sound source, however, if the size of the loudspeaker is large relative to the target sound field. This study investigates a small, lightweight loudspeaker based on a dielectric elastomer actuator vibrating in the breathing mode (a pulsating mode resembling the expansion and contraction of a balloon). Acoustic tests of repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicate that dielectric elastomer loudspeakers may be feasible.
High frequency sound propagation in a network of interconnecting streets
NASA Astrophysics Data System (ADS)
Hewett, D. P.
2012-12-01
We propose a new model for the propagation of acoustic energy from a time-harmonic point source through a network of interconnecting streets in the high frequency regime, in which the wavelength is small compared to typical macro-lengthscales such as street widths/lengths and building heights. Our model, which is based on geometrical acoustics (ray theory), represents the acoustic power flow from the source along any pathway through the network as the integral of a power density over the launch angle of a ray emanating from the source, and takes into account the key phenomena involved in the propagation, namely energy loss by wall absorption, energy redistribution at junctions, and, in 3D, energy loss to the atmosphere. The model predicts strongly anisotropic decay away from the source, with the power flow decaying exponentially in the number of junctions from the source, except along the axial directions of the network, where the decay is algebraic.
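The model's qualitative prediction, exponential decay of power flow in the number of junctions traversed, can be illustrated with a toy calculation (the transmission and absorption fractions below are invented assumptions for illustration, not values from the paper):

```python
def power_after_path(p0, n_junctions, junction_transmission=0.4,
                     street_retention=0.8):
    """Toy street-network model: each street segment retains a fraction of the
    power (wall absorption), and each junction passes only a fraction onward
    into the chosen street."""
    p = p0
    for _ in range(n_junctions):
        p *= street_retention * junction_transmission
    return p

# Power decays geometrically (i.e. exponentially) in the number of junctions
levels = [power_after_path(1.0, n) for n in range(4)]
print(levels)
```

Each junction multiplies the remaining power by a fixed factor, so after n junctions the power is p0 times that factor to the n-th power; the paper's axial directions escape this because rays travelling straight down a street cross no junctions that redistribute their energy.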
Experimental study of the thermal-acoustic efficiency in a long turbulent diffusion-flame burner
NASA Technical Reports Server (NTRS)
Mahan, J. R.
1983-01-01
A two-year study of noise production in a long tubular burner is described. The research was motivated by an interest in understanding and eventually reducing core noise in gas turbine engines. The general approach is to employ an acoustic source/propagation model to interpret the sound pressure spectrum in the acoustic far field of the burner in terms of the source spectrum that must have produced it. In the model the sources are assumed to be due uniquely to the unsteady component of combustion heat release; thus only direct combustion-noise is considered. The source spectrum is then the variation with frequency of the thermal-acoustic efficiency, defined as the fraction of combustion heat release which is converted into acoustic energy at a given frequency. The thrust of the research was to study the variation of the source spectrum with the design and operating parameters of the burner.
Jenny, Trevor; Anderson, Brian E
2011-08-01
Qualifying an anechoic chamber for frequencies that extend into the ultrasonic range is necessary for research work involving airborne ultrasonic sound. The ANSI S12.55/ISO 3745 standard which covers anechoic chamber qualification does not extend into the ultrasonic frequency range, nor have issues pertinent to this frequency range been fully discussed in the literature. An increasing number of technologies employ ultrasound; hence the need for an ultrasonic anechoic chamber. This paper will specifically discuss the need to account for atmospheric absorption and issues pertaining to source transducer directivity by presenting some results for qualification of a chamber at Brigham Young University.
NASA Astrophysics Data System (ADS)
Mäkelä, Anni; Witte, Ursula; Archambault, Philippe
2016-04-01
Rapid warming is dramatically reducing the extent and thickness of summer sea ice in the Arctic Ocean, changing both the quantity and type of marine primary production as the longer open-water period favours phytoplankton growth and reduces ice algal production. The benthic ecosystem depends on this sinking organic matter as its source of energy, and ice algae are thought to be a superior-quality food source due to their higher essential fatty acid content. The resilience of the benthos to the changing quality and quantity of food was investigated through sediment incubation experiments in summer 2013 in two highly productive Arctic polynyas, in the North Water and Lancaster Sound, Canada. The pathways of organic matter processing and the contribution of different organisms to these processes were assessed through 13C and 15N isotope assimilation into macroinfaunal tissues. In the North Water Polynya, the total and biomass-specific uptake of ice-algal-derived C and N was higher than the uptake of phytoplankton, whereas the opposite trend was observed in Lancaster Sound. Polychaetes, especially individuals of the families Sabellidae and Spionidae, unselectively ingested both algal types and contributed significantly to the overall organic matter processing at both sites. Feeding preference was observed in crustaceans, which preferentially fed on ice algae at Lancaster Sound but preferred phytoplankton in the North Water Polynya. Bivalves also played a significant role in the overall organic matter processing, but showed preferential feeding on phytoplankton only at the Lancaster Sound polynya. Overall, the filter feeders and surface deposit feeders occupying the lowest trophic levels were responsible for the majority of the processing of both algal types. The results provide direct evidence of preferential resource utilisation by benthic macrofauna and highlight spatial differences in these processes.
This helps to predict future patterns of nutrient cycling in Arctic sediments, with implications for benthic-pelagic coupling and overall marine productivity.
Leis, Jeffrey M; Siebeck, Ulrike; Dixson, Danielle L
2011-11-01
Nearly all demersal teleost marine fishes have pelagic larval stages lasting from several days to several weeks, during which time they are subject to dispersal. Fish larvae have considerable swimming abilities and swim in an oriented manner in the sea. Thus, they can influence their dispersal and thereby the connectivity of their populations. However, the sensory cues marine fish larvae use for orientation in the pelagic environment remain unclear. We review current understanding of these cues and of how the sensory abilities of larvae develop and are used to achieve orientation, with particular emphasis on coral-reef fishes. The use of sound is best understood; it travels well underwater with little attenuation, and is current-independent but location-dependent, so species that primarily utilize sound for orientation will have location-dependent orientation. Larvae of many species and families can hear over a range of ~100-1000 Hz and can distinguish among sounds. They can localize sources of sounds, but the means by which they do so is unclear. Larvae can hear during much of their pelagic larval phase, and ontogenetically, hearing sensitivity and frequency range improve dramatically. Species differ in sensitivity to sound and in the rate of improvement in hearing during ontogeny. Due to large differences among species within families, no significant differences in hearing sensitivity among families have been identified. Thus, the distances over which larvae can detect a given sound vary among species and increase greatly during ontogeny. Olfactory cues are current-dependent and location-dependent, so species that primarily utilize olfactory cues will have location-dependent orientation, but must be able to swim upstream to locate sources of odor.
Larvae can detect odors (e.g., of predators or conspecifics) during most of their pelagic phase and, at least on small scales, can localize sources of odors in shallow water, although whether they can do this in pelagic environments is unknown. Little is known of the ontogeny of olfactory ability or of the range over which larvae can localize sources of odors. Imprinting on an odor has been shown in one species of reef fish. Celestial cues are current- and location-independent, so species that primarily utilize them will have location-independent orientation that can apply over broad scales. Use of a sun compass or of polarized light for orientation by fish larvae is implied by some behaviors, but has not been proven. Orientation using magnetic fields or the direction of waves has not been demonstrated in marine fish larvae. We highlight research priorities in this area. © The Author 2011. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved.
An open access database for the evaluation of heart sound algorithms.
Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D
2016-12-01
In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially automated heart sound segmentation and classification, has been widely studied and has been reported to have potential value for accurate detection of pathology in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected in a variety of clinical and nonclinical (such as in-home visit) environments and with a variety of equipment. The length of the recordings varied from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand-corrected annotations for different heart sound states, the scoring mechanism, and associated open source code, is provided. In addition, several potential benefits of the public heart sound database are discussed.
Mothers Consistently Alter Their Unique Vocal Fingerprints When Communicating with Infants.
Piazza, Elise A; Iordan, Marius Cătălin; Lew-Williams, Casey
2017-10-23
The voice is the most direct link we have to others' minds, allowing us to communicate using a rich variety of speech cues [1, 2]. This link is particularly critical early in life as parents draw infants into the structure of their environment using infant-directed speech (IDS), a communicative code with unique pitch and rhythmic characteristics relative to adult-directed speech (ADS) [3, 4]. To begin breaking into language, infants must discern subtle statistical differences about people and voices in order to direct their attention toward the most relevant signals. Here, we uncover a new defining feature of IDS: mothers significantly alter statistical properties of vocal timbre when speaking to their infants. Timbre, the tone color or unique quality of a sound, is a spectral fingerprint that helps us instantly identify and classify sound sources, such as individual people and musical instruments [5-7]. We recorded 24 mothers' naturalistic speech while they interacted with their infants and with adult experimenters in their native language. Half of the participants were English speakers, and half were not. Using a support vector machine classifier, we found that mothers consistently shifted their timbre between ADS and IDS. Importantly, this shift was similar across languages, suggesting that such alterations of timbre may be universal. These findings have theoretical implications for understanding how infants tune in to their local communicative environments. Moreover, our classification algorithm for identifying infant-directed timbre has direct translational implications for speech recognition technology. Copyright © 2017 Elsevier Ltd. All rights reserved.
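The study used a support vector machine on timbre features; as a simpler stand-in, the classification setup can be sketched with a nearest-centroid classifier on hypothetical two-dimensional "timbre" feature vectors (all names and numbers below are illustrative assumptions, not data from the paper):

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Assign x to the label of the nearest class centroid (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Hypothetical timbre features (e.g. spectral centroid, spectral rolloff),
# two training examples per speech register
train = {
    "ADS": [[0.30, 0.50], [0.35, 0.55]],  # adult-directed speech
    "IDS": [[0.70, 0.20], [0.75, 0.25]],  # infant-directed speech
}
centroids = {label: centroid(vs) for label, vs in train.items()}
print(classify([0.72, 0.22], centroids))  # prints "IDS"
```

A real timbre classifier would, of course, use many more spectral features and a margin-based learner such as an SVM; the point here is only the structure of the task: map each utterance to a feature vector and predict its register.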
Beck, Christoph; Garreau, Guillaume; Georgiou, Julius
2016-01-01
Sand scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes as small as an atom and, based on their neuronal anatomy, can locate acoustic stimuli to within 13°. Inspired by this impressive performance, we present here a prototype sound source localization system. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization errors than those observed in nature.
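The cue such a system exploits, the difference in arrival time of a wavefront at spatially separated sensors, can be sketched for two sensors with a plain cross-correlation search (a generic TDOA estimate, not the authors' spiking neural model):

```python
def cross_correlation_lag(a, b, max_lag):
    """Return the lag (in samples) at which signal b best matches signal a,
    found by brute-force search over the cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i] * b[i + lag]
                    for i in range(len(a))
                    if 0 <= i + lag < len(b))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# An impulse that reaches sensor B three samples after sensor A
sig_a = [0, 0, 1, 0, 0, 0, 0, 0]
sig_b = [0, 0, 0, 0, 0, 1, 0, 0]
print(cross_correlation_lag(sig_a, sig_b, max_lag=4))  # prints 3
```

With eight sensors, pairwise lags of this kind constrain the source bearing; the spiking neural model in the paper performs an analogous computation with coincidence-detecting neurons rather than explicit correlation sums.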
Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air
Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban
2018-01-01
Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being “targeted.” They did not respond when hearing another group member’s cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals. PMID:29445350
The influence of crowd density on the sound environment of commercial pedestrian streets.
Meng, Qi; Kang, Jian
2015-04-01
Commercial pedestrian streets are very common in China and Europe, with many situated in historic or cultural centres. The environments of these streets are important, including their sound environments. The objective of this study is to explore the relationships between the crowd density and the sound environments of commercial pedestrian streets. On-site measurements were performed at the case study site in Harbin, China, and a questionnaire was administered. The sound pressure measurements showed that the crowd density has an insignificant effect on sound pressure below 0.05 persons/m2, whereas when the crowd density is greater than 0.05 persons/m2, the sound pressure increases with crowd density. The sound sources were analysed, showing that several typical sound sources, such as traffic noise, can be masked by the sounds resulting from dense crowds. The acoustic analysis showed that crowd densities outside the range of 0.10 to 0.25 persons/m2 exhibited lower acoustic comfort evaluation scores. In terms of audiovisual characteristics, the subjective loudness increases with greater crowd density, while the acoustic comfort decreases. The results for an indoor underground shopping street are also presented for comparison. Copyright © 2014 Elsevier B.V. All rights reserved.
Effects of fiber motion on the acoustic behavior of an anisotropic, flexible fibrous material
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Rice, Edward J.; Groesbeck, Donald E.
1990-01-01
The acoustic behavior of a flexible fibrous material was studied experimentally. The material consisted of cylindrically shaped fibers arranged in a batting with the fibers primarily aligned parallel to the face of the batting. This type of material was considered anisotropic, with the acoustic propagation constant depending on whether the direction of sound propagation was parallel or normal to the fiber arrangement. Normal incidence sound absorption measurements were taken for both fiber orientations over the frequency range 140 to 1500 Hz and with bulk densities ranging from 4.6 to 67 kg/cu m. When the sound propagated in a direction normal to the fiber alignment, the measured sound absorption showed the occurrence of a strong resonance, which increased absorption above that attributed to viscous and thermal effects. When the sound propagated in a direction parallel to the fiber alignment, indications of strong resonances in the data were not present. The resonance in the data for fibers normal to the direction of sound propagation is attributed to fiber motion. An analytical model was developed for the acoustic behavior of the material displaying the same fiber motion characteristics shown in the measurements.
Airborne Acoustic Perception by a Jumping Spider.
Shamble, Paul S; Menda, Gil; Golden, James R; Nitzany, Eyal I; Walden, Katherine; Beatus, Tsevi; Elias, Damian O; Cohen, Itai; Miles, Ronald N; Hoy, Ronald R
2016-11-07
Jumping spiders (Salticidae) are famous for their visually driven behaviors [1]. Here, however, we present behavioral and neurophysiological evidence that these animals also perceive and respond to airborne acoustic stimuli, even when the distance between the animal and the sound source is relatively large (∼3 m) and with stimulus amplitudes at the position of the spider of ∼65 dB sound pressure level (SPL). Behavioral experiments with the jumping spider Phidippus audax reveal that these animals respond to low-frequency sounds (80 Hz; 65 dB SPL) by freezing-a common anti-predatory behavior characteristic of an acoustic startle response. Neurophysiological recordings from auditory-sensitive neural units in the brains of these jumping spiders showed responses to low-frequency tones (80 Hz at ∼65 dB SPL)-recordings that also represent the first record of acoustically responsive neural units in the jumping spider brain. Responses persisted even when the distances between spider and stimulus source exceeded 3 m and under anechoic conditions. Thus, these spiders appear able to detect airborne sound at distances in the acoustic far-field region, beyond the near-field range often thought to bound acoustic perception in arthropods that lack tympanic ears (e.g., spiders) [2]. Furthermore, direct mechanical stimulation of hairs on the patella of the foreleg was sufficient to generate responses in neural units that also responded to airborne acoustic stimuli-evidence that these hairs likely play a role in the detection of acoustic cues. We suggest that these auditory responses enable the detection of predators and facilitate an acoustic startle response. VIDEO ABSTRACT. Copyright © 2016 Elsevier Ltd. All rights reserved.
Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H
2016-08-01
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing, using the MED-EL or Cochlear device, were evaluated with 2 tests designed to tax binaural hearing: sound-source localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time differences or interaural level differences, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
Material sound source localization through headphones
NASA Astrophysics Data System (ADS)
Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada
2012-09-01
In the present paper a study of sound localization is carried out, considering two sounds emitted by striking objects of different materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with a view to the future use of acoustic sounds with better localization features in navigation aid systems or training audio-games for blind people. The wood and bongo sounds are recorded by hitting two objects made of these materials, and are then analysed and processed. The Delta sound (a click) is generated with the Adobe Audition software at a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual head-related transfer functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones, using a graphical user interface. The analyses of the recorded data reveal no significant differences, either with respect to the nature of the sounds (wood, bongo, Delta) or to their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that the localization accuracy does not significantly increase even when the reverberation effect is considered.
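The binaural rendering step described above, convolving a dry sound with a head-related impulse response for each ear, can be sketched as follows (the impulse responses here are toy placeholders, not measured HRTFs):

```python
def convolve(signal, impulse_response):
    """Direct-form discrete convolution of a mono signal with an impulse response."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# Toy 'HRIRs': the left ear receives the sound earlier and louder than the
# right, crudely simulating a source on the listener's left.
hrir_left = [1.0, 0.3]
hrir_right = [0.0, 0.0, 0.6, 0.2]
click = [1.0, 0.0, 0.0]
left = convolve(click, hrir_left)
right = convolve(click, hrir_right)
print(left, right)
```

Played over headphones, the interaural time and level differences baked into the two filtered channels are what give the listener the impression of a source direction; measured HRTFs simply replace these toy filters with real, direction-dependent ones.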
Aeroacoustic analysis of the human phonation process based on a hybrid acoustic PIV approach
NASA Astrophysics Data System (ADS)
Lodermeyer, Alexander; Tautz, Matthias; Becker, Stefan; Döllinger, Michael; Birk, Veronika; Kniesburges, Stefan
2018-01-01
The detailed analysis of sound generation in human phonation is severely limited as the accessibility to the laryngeal flow region is highly restricted. Consequently, the physical basis of the underlying fluid-structure-acoustic interaction that describes the primary mechanism of sound production is not yet fully understood. Therefore, we propose the implementation of a hybrid acoustic PIV procedure to evaluate aeroacoustic sound generation during voice production within a synthetic larynx model. Focusing on the flow field downstream of synthetic, aerodynamically driven vocal folds, we calculated acoustic source terms based on the velocity fields obtained by time-resolved high-speed PIV applied to the mid-coronal plane. The radiation of these sources into the acoustic far field was numerically simulated and the resulting acoustic pressure was finally compared with experimental microphone measurements. We identified the tonal sound to be generated downstream in a small region close to the vocal folds. The simulation of the sound propagation underestimated the tonal components, whereas the broadband sound was well reproduced. Our results demonstrate the feasibility to locate aeroacoustic sound sources inside a synthetic larynx using a hybrid acoustic PIV approach. Although the technique employs a 2D-limited flow field, it accurately reproduces the basic characteristics of the aeroacoustic field in our larynx model. In future studies, not only the aeroacoustic mechanisms of normal phonation will be assessable, but also the sound generation of voice disorders can be investigated more profoundly.
Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo
2008-06-01
Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.
A method for calculating strut and splitter plate noise in exit ducts: Theory and verification
NASA Technical Reports Server (NTRS)
Fink, M. R.
1978-01-01
Portions of a four-year analytical and experimental investigation relative to noise radiation from engine internal components in turbulent flow are summarized. Spectra measured for such airfoils over a range of chord, thickness ratio, flow velocity, and turbulence level were compared with predictions made by an available rigorous thin-airfoil analytical method. This analysis included the effects of flow compressibility and source noncompactness. Generally good agreement was obtained. This noise calculation method for isolated airfoils in turbulent flow was combined with a method for calculating transmission of sound through a subsonic exit duct and with an empirical far-field directivity shape. These three elements were checked separately and were individually shown to give close agreement with data. This combination provides a method for predicting engine internally generated aft-radiated noise from radial struts and stators, and annular splitter rings. Calculated sound power spectra, directivity, and acoustic pressure spectra were compared with the best available data. These data were for noise caused by a fan exit duct annular splitter ring, larger-chord stator blades, and turbine exit struts.
Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts
ERIC Educational Resources Information Center
Delalande, Francois; Cornara, Silvia
2010-01-01
One of the earliest forms of musical conduct is the exploration of sound sources. When young children produce sounds with an object, these sounds may surprise them, and so they make the sounds again--not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…
Monitoring the Ocean Using High Frequency Ambient Sound
2008-10-01
even identify specific groups within the resident killer whale type (Puget Sound Southern Resident pods J, K and L) because these groups have...particular, the different populations of killer whales in the NE Pacific Ocean. This has been accomplished by detecting transient sounds with short...high sea state (the sound of spray), general shipping - close and distant, clanking, and whale calls and clicking. These sound sources form the basis
Numerical analysis of flow induced noise propagation in supercavitating vehicles at subsonic speeds.
Ramesh, Sai Sudha; Lim, Kian Meng; Zheng, Jianguo; Khoo, Boo Cheong
2014-04-01
Flow supercavitation begins when fluid is accelerated over a sharp edge, usually at the nose of an underwater vehicle, where phase change occurs and a low-density gaseous cavity gradually envelops the whole object (a supercavity), thereby enabling higher speeds for underwater vehicles. The process of supercavity inception and development by means of "natural cavitation," and its sustainment through ventilated cavitation, results in turbulence and fluctuations at the water-vapor interface that manifest themselves as major sources of hydrodynamic noise. In the present work, three main sources are therefore investigated: (1) flow-generated noise due to turbulent pressure fluctuations around the supercavity, (2) small-scale pressure fluctuations at the vapor-water interface, and (3) pressure fluctuations due to direct impingement of ventilated gas jets on the supercavity wall. An understanding of their relative contributions to self-noise is crucial for the efficient operation of the high-frequency acoustic sensors that support the vehicle's guidance system. Qualitative comparisons of the acoustic pressure distributions resulting from these sound sources are presented by employing a recently developed boundary integral method. The boundary-element-based acoustic solver computes flow-generated sound using flow data from a specially developed unsteady computational fluid dynamics solver for simulating supercavitating flows.
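The boundary integral formulation itself is not reproduced in the abstract. As a hedged illustration of the building block such acoustic solvers rest on, the sketch below superimposes free-field Helmholtz monopoles (Green's function e^{ikr}/(4*pi*r)); the source positions and amplitudes are hypothetical stand-ins for the distributed hydrodynamic noise sources discussed above.

```python
import numpy as np

def monopole_field(src_pos, src_amp, k, obs):
    """Complex acoustic pressure at observer points due to a set of
    free-field monopole sources, via superposition of the 3-D
    Helmholtz Green's function e^{ikr} / (4 pi r).
    src_pos: list of 3-vectors, src_amp: list of complex amplitudes,
    k: acoustic wavenumber, obs: (N, 3) array of observer points."""
    obs = np.atleast_2d(np.asarray(obs, dtype=float))
    p = np.zeros(len(obs), dtype=complex)
    for x0, amp in zip(src_pos, src_amp):
        r = np.linalg.norm(obs - np.asarray(x0), axis=1)
        p += amp * np.exp(1j * k * r) / (4.0 * np.pi * r)
    return p
```

A full boundary-element solver replaces these discrete monopoles with surface integrals over the supercavity boundary, but the 1/r decay and phase accumulation of each contribution are the same.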
Combination sound and vibration isolation curb for rooftop air-handling systems
NASA Astrophysics Data System (ADS)
Paige, Thomas S.
2005-09-01
This paper introduces the new Model ESSR Sound and Vibration Isolation Curb manufactured by Kinetics Noise Control, Inc. This product was specially designed to address all of the common transmission paths associated with noise and vibration sources from roof-mounted air-handling equipment. These include: reduction of airborne fan noise in supply and return air ductwork, reduction of duct rumble and breakout noise, reduction of direct airborne sound transmission through the roof deck, and reduction of vibration and structure-borne noise transmission to the building structure. Upgrade options are available for increased seismic restraint and wind-load protection. The advantages of this new system over the conventional approach of installing separate duct silencers in the room ceiling space below the rooftop unit are discussed. Several case studies are presented with the emphasis on completed projects pertaining to classrooms and school auditorium applications. Some success has also been achieved by adding active noise control components to improve low-frequency attenuation. This is an innovative product designed for conformance with the new classroom acoustics standard ANSI S12.60.
The FOXSI sounding rocket: Latest analysis and results
NASA Astrophysics Data System (ADS)
Buitrago-Casas, Juan Camilo; Glesener, Lindsay; Christe, Steven; Krucker, Sam; Ishikawa, Shin-Nosuke; Takahashi, Tadayuki; Ramsey, Brian; Han, Raymond
2016-05-01
Hard X-ray (HXR) observations are a linchpin for studying particle acceleration and hot thermal plasma emission in the solar corona. Current and past indirect imaging instruments lack the sensitivity and dynamic range needed to observe faint HXR signatures, especially in the presence of brighter sources. These limitations can be overcome by using HXR direct focusing optics coupled with semiconductor detectors. The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket experiment is a state-of-the-art solar telescope that develops and applies these capabilities. The FOXSI sounding rocket has successfully flown twice, observing active regions, microflares, and areas of the quiet Sun. Thanks to its far superior imaging dynamic range, FOXSI performs cleaner hard X-ray imaging spectroscopy than previous instruments that use indirect imaging methods, such as RHESSI. We present a description of the FOXSI rocket payload, with attention to the calibration of the optics and semiconductor detectors, as well as the upgrades made for the second flight. We also introduce some of the latest FOXSI data analysis, including imaging spectroscopy of microflares and active regions observed during the two flights, and the differential emission measure distribution of the nonflaring corona.
Nonreciprocal Linear Transmission of Sound in a Viscous Environment with Broken P Symmetry.
Walker, E; Neogi, A; Bozhko, A; Zubov, Yu; Arriaga, J; Heo, H; Ju, J; Krokhin, A A
2018-05-18
Reciprocity is a fundamental property of the wave equation in a linear medium that originates from time-reversal symmetry, or T symmetry. For electromagnetic waves, reciprocity can be violated by an external magnetic field. It is much harder to realize nonreciprocity for acoustic waves. Here we report the first experimental observation of linear nonreciprocal transmission of ultrasound through a water-submerged phononic crystal consisting of asymmetric rods. Viscosity of water is the factor that breaks the T symmetry. Asymmetry, or broken P symmetry along the direction of sound propagation, is the second necessary factor for nonreciprocity. Experimental results are in agreement with numerical simulations based on the Navier-Stokes equation. Our study demonstrates that a medium with broken PT symmetry is acoustically nonreciprocal. The proposed passive nonreciprocal device is cheap, robust, and does not require an energy source.
NASA Astrophysics Data System (ADS)
Carlowicz, Michael
As scientists carefully study some aspects of the ocean environment, are they unintentionally distressing others? That is a question to be answered by Robert Benson and his colleagues in the Center for Bioacoustics at Texas A&M University.With help from a 3-year, $316,000 grant from the U.S. Office of Naval Research, Benson will study how underwater noise produced by naval operations and other sources may affect marine mammals. In Benson's study, researchers will generate random sequences of low-frequency, high-intensity (180-decibel) sounds in the Gulf of Mexico, working at an approximate distance of 1 km from sperm whale herds. Using an array of hydrophones, the scientists will listen to the characteristic clicks and whistles of the sperm whales to detect changes in the animals' direction, speed, and depth, as derived from fluctuations in their calls.
NASA Astrophysics Data System (ADS)
Seo, Jung-Hee; Bakhshaee, Hani; Zhu, Chi; Mittal, Rajat
2015-11-01
Patterns of blood flow associated with abnormal heart conditions generate characteristic sounds that can be measured on the chest surface using a stethoscope. This technique of "cardiac auscultation" has been used effectively for over a hundred years to diagnose heart conditions, but the mechanisms that generate heart sounds, as well as the physics of sound transmission through the thorax, are not well understood. Here we present a new computational method for simulating the physics of heart murmur generation and transmission and use it to simulate the murmurs associated with a modeled aortic stenosis. The flow in the model aorta is simulated via the incompressible Navier-Stokes equations, and the three-dimensional elastic wave generation and propagation in the surrounding viscoelastic structure are solved with a high-order finite difference method in the time domain. The simulation results are compared with experimental measurements and show good agreement. The present study confirms that the pressure fluctuations on the vessel wall are the source of these heart murmurs, and both compression and shear waves likely play an important role in cardiac auscultation. Supported by the NSF Grants IOS-1124804 and IIS-1344772; computational resources provided by XSEDE NSF grant TG-CTS100002.
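The abstract's solver couples 3-D viscoelastic wave propagation to an incompressible flow simulation; that machinery is far beyond a sketch, but the core idea of time-domain finite-difference wave propagation from a localized pressure-fluctuation source can be shown in one dimension. This is a minimal 1-D scalar-wave analogue under assumed grid, source, and CFL parameters, not the paper's high-order method.

```python
import numpy as np

def simulate_wave_1d(nx=200, nt=400, c=1.0, dx=1.0, cfl=0.5, src_pos=100):
    """Second-order finite-difference time-domain scheme for the 1-D
    scalar wave equation p_tt = c^2 p_xx, with a Ricker-like pulse
    injected at src_pos to mimic a localized pressure fluctuation."""
    dt = cfl * dx / c              # time step satisfying the CFL condition
    r2 = (c * dt / dx) ** 2
    p_prev = np.zeros(nx)          # field at time level n-1
    p = np.zeros(nx)               # field at time level n
    f0, t0 = 0.05, 40.0            # assumed source frequency and delay
    for n in range(nt):
        arg = np.pi * f0 * (n * dt - t0)
        src = (1.0 - 2.0 * arg**2) * np.exp(-arg**2)   # Ricker wavelet
        p_next = np.zeros(nx)      # Dirichlet (p = 0) boundaries
        p_next[1:-1] = (2.0 * p[1:-1] - p_prev[1:-1]
                        + r2 * (p[2:] - 2.0 * p[1:-1] + p[:-2]))
        p_next[src_pos] += dt**2 * src   # inject the source term
        p_prev, p = p, p_next
    return p
```

The update stencil is the standard leapfrog discretization of the wave equation; the paper's solver applies the same time-stepping idea to 3-D elastic (compression and shear) waves in a viscoelastic thorax model.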
Olechno, Joseph; Ellson, Richard; Browning, Brent; Stearns, Richard; Mutz, Mitchell; Travis, Michael; Qureshi, Shehrzad; Shieh, Jean
2005-08-01
Acoustic auditing is a non-destructive, non-invasive technique to monitor the composition and volume of fluids in open or sealed microplates and storage tubes. When acoustic energy encounters an interface between two materials, some of the energy passes through the interface, while the remainder is reflected. Acoustic energy applied to the bottom of a multi-well plate or a storage tube is reflected by the fluid contents of the microplate or tube. The amplitude of these reflections or echoes correlates directly with properties of the fluid, including the speed of sound and the concentration of water in the fluid. Once the speed of sound in the solution is known from the analysis of these echoes, it is easy to determine the depth of liquid and, thereby, the volume by monitoring how long it takes for sound energy to reflect off the fluid meniscus. This technique is rapid (>100,000 samples per day), precise (<1% coefficient of variation for hydration measurements, <4% coefficient of variation for volume measurements), and robust. It does not require uncapping tubes or unsealing or unlidding microplates. The sound energy is extremely gentle and has no deleterious impact upon the fluid or compounds dissolved in it.
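The time-of-flight arithmetic behind the volume measurement described above is simple: the one-way depth is half the round-trip echo delay times the speed of sound, and volume follows from the well geometry. A minimal sketch, assuming a straight-walled well of known cross-sectional area (the function name and parameters are illustrative, not from the paper):

```python
def fluid_depth_and_volume(echo_delay_s, speed_of_sound_m_s, well_area_m2):
    """Estimate fluid depth and volume from the round-trip time of an
    acoustic echo off the fluid meniscus. Assumes a straight-walled
    well and a known (previously calibrated) speed of sound."""
    depth_m = speed_of_sound_m_s * echo_delay_s / 2.0   # one-way distance
    volume_m3 = depth_m * well_area_m2                  # prism approximation
    return depth_m, volume_m3
```

For example, with water at roughly 1482 m/s, a round-trip delay of 13.5 microseconds corresponds to about 1 cm of fluid. In practice, as the abstract notes, the speed of sound is itself inferred first from echo analysis, which is what makes the hydration measurement possible.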
An integrated analysis-synthesis array system for spatial sound fields.
Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao
2015-03-01
An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks of discrete-time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Directions of arrival of the plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction, which suffers from spatial aliasing, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover the frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs best in overall preference. In addition, there is a trade-off between reproduction performance and external radiation.
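The truncated-SVD deconvolution mentioned above regularizes an ill-conditioned linear system G q = p (transfer matrix times source signals equals microphone pressures) by discarding the smallest singular values. A minimal sketch of the technique, not the authors' implementation:

```python
import numpy as np

def tsvd_solve(G, p, k):
    """Solve G q = p by truncated singular value decomposition,
    keeping only the k largest singular values. Small singular values
    amplify measurement noise in deconvolution, so truncating them
    trades a little bias for much lower variance."""
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]          # SVD returns s in descending order
    # q = V * diag(s_inv) * U^H * p  (conjugate transpose for complex G)
    return Vh.conj().T @ (s_inv * (U.conj().T @ p))
```

With k equal to the full rank this reduces to the ordinary pseudoinverse; lowering k is the regularization knob, directly analogous to the truncation level the abstract refers to.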
The Problems with "Noise Numbers" for Wind Farm Noise Assessment
ERIC Educational Resources Information Center
Thorne, Bob
2011-01-01
Human perception responds primarily to sound character rather than sound level. Wind farms are unique sound sources and exhibit special audible and inaudible characteristics that can be described as modulating sound or as a tonal complex. Wind farm compliance measures based on a specified noise number alone will fail to address problems with noise…
Acoustic guide for noise-transmission testing of aircraft
NASA Technical Reports Server (NTRS)
Vaicaitis, Rimas (Inventor)
1987-01-01
Selective testing of aircraft or other vehicular components without requiring disassembly of the vehicle or components was accomplished by using a portable guide apparatus. The device consists of a broadband noise source, a guide to direct the acoustic energy, soft sealing insulation to seal the guide to the noise source and to the vehicle component, and noise measurement microphones, both outside the vehicle at the acoustic guide output and inside the vehicle to receive attenuated sound. By directing acoustic energy only to selected components of a vehicle via the acoustic guide, it is possible to test a specific component, such as a door or window, without picking up extraneous noise which may be transmitted to the vehicle interior through other components or structure. This effect is achieved because no acoustic energy strikes the vehicle exterior except at the selected component. Also, since the test component remains attached to the vehicle, component dynamics with vehicle frame are not altered.
1982-12-01
Coppens showed great kindness by accepting supervision of this research when time was short. His concern, understanding and direction led to an...related to computer processing time and storage requirements. These factors will not be addressed directly in this research because the processing...computational efficiency. Disadvantages are a uniform mesh and periodic boundary conditions to satisfy the FFT, and filtering of the sound speed profile by