50 years of progress in microphone arrays for speech processing
NASA Astrophysics Data System (ADS)
Elko, Gary W.; Frisk, George V.
2004-10-01
In the early 1980s, Jim Flanagan had a dream of covering the walls of a room with microphones. He occasionally referred to this concept as acoustic wallpaper. As a new graduate in the field of acoustics and signal processing, I was fortunate that Bell Labs was looking for someone to investigate microphone arrays for telecommunication. The job interview was exciting, with all of the big names in speech signal processing and acoustics sitting in the audience, many of whom were the authors of books and articles that were seminal contributions to the fields of acoustics and signal processing. If there ever was an opportunity of a lifetime, this was it. Fortunately, some of the work had already begun, and Sessler and West had already laid the groundwork for directional electret microphones. This talk will describe some of the very early work done at Bell Labs on microphone arrays and reflect on some of the many systems, from large 400-element arrays to small two-microphone arrays. These microphone array systems were built under Jim Flanagan's leadership in an attempt to realize his vision of seamless hands-free speech communication between people and between people and machines.
A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.
2011-01-01
Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
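The core of the time-domain ANC step described above is an adaptive filter that predicts, from the reference microphone, the noise component present at an array microphone and subtracts it. A minimal sketch using a normalized LMS update follows; the filter length, step size, and signal names are illustrative assumptions rather than the parameters used in the paper.

```python
import numpy as np

def nlms_anc(primary, reference, filt_len=128, mu=0.1, eps=1e-8):
    """Normalized LMS adaptive noise canceller.

    primary   -- array-microphone signal (desired source + correlated noise)
    reference -- noise-only reference microphone signal
    Returns the error signal, i.e. the primary with the correlated
    noise component adaptively subtracted.
    """
    w = np.zeros(filt_len)                        # adaptive filter weights
    out = np.zeros(len(primary))
    for n in range(filt_len, len(primary)):
        x = reference[n - filt_len:n][::-1]       # most recent reference samples
        y = w @ x                                 # noise estimate at the array mic
        e = primary[n] - y                        # residual = cleaned sample
        w += (mu / (eps + x @ x)) * e * x         # NLMS weight update
        out[n] = e
    return out
```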
Arrays of Miniature Microphones for Aeroacoustic Testing
NASA Technical Reports Server (NTRS)
Shams, Qamar A.; Humphreys, William M.; Sealey, Bradley S.; Bartram, Scott M.; Zuckewar, Allan J.; Comeaux, Toby; Adams, James K.
2007-01-01
A phased-array system comprised of custom-made and commercially available microelectromechanical system (MEMS) silicon microphones and custom ancillary hardware has been developed for use in aeroacoustic testing in hard-walled and acoustically treated wind tunnels. Recent advances in the areas of multi-channel signal processing and beamforming have driven the construction of phased arrays containing ever-greater numbers of microphones. Traditional obstacles to this trend have been posed by (1) the high costs of conventional condenser microphones, associated cabling, and support electronics and (2) the difficulty of mounting conventional microphones in the precise locations required for high-density arrays. The present development overcomes these obstacles. One of the hallmarks of the new system is a series of fabricated platforms on which multiple microphones can be mounted. These mounting platforms, consisting of flexible polyimide circuit-board material (see left side of figure), include all the necessary microphone power and signal interconnects. A single bus line connects all microphones to a common power supply, while the signal lines terminate in one or more data buses on the sides of the circuit board. To minimize cross talk between array channels, ground lines are interposed as shields between all the data bus signal lines. The MEMS microphones are electrically connected to the boards via solder pads that are built into the printed wiring. These flexible circuit boards share many characteristics with their traditional rigid counterparts, but can be manufactured much thinner (as thin as 0.1 millimeter) and much lighter, with boards weighing up to 75 percent less than traditional rigid ones. For a typical hard-walled wind-tunnel installation, the flexible printed-circuit board is bonded to the tunnel wall and covered with a face sheet that contains precise cutouts for the microphones. Once the face sheet is mounted, a smooth surface is established over the entire array due to the flush mounting of all microphones (see right side of figure). The face sheet is made from a continuous glass-woven-fabric base impregnated with an epoxy resin binder. This material offers a combination of high mechanical strength and low dielectric loss, making it suitable for withstanding the harsh test section environment present in many wind tunnels, while at the same time protecting the underlying polyimide board. Customized signal-conditioning hardware consisting of line drivers and antialiasing filters is coupled with the array. The line drivers are constructed using low-supply-current, high-gain-bandwidth operational amplifiers designed to transmit the microphone signals several dozen feet from the array to external acquisition hardware. The antialiasing filters consist of individual Chebyshev low-pass filters (one for each microphone channel) housed on small printed-circuit boards mounted on one or more motherboards. The mother/daughter board design results in a modular system, which is easy to debug and service and which enables the filter characteristics to be changed by swapping daughter boards with ones containing different filter parameters. The filter outputs are passed to commercially available acquisition hardware to digitize and store the conditioned microphone signals. Wind-tunnel testing of the new MEMS microphone polyimide mounting system shows that the array performance is comparable to that of traditional arrays, but with significantly lower construction cost.
Factors affecting the performance of large-aperture microphone arrays.
Silverman, Harvey F; Patterson, William R; Sachar, Joshua
2002-05-01
Large arrays of microphones have been proposed and studied as a possible means of acquiring data in offices, conference rooms, and auditoria without requiring close-talking microphones. When such an array essentially surrounds all possible sources, it is said to have a large aperture. Large-aperture arrays have attractive properties of spatial resolution and signal-to-noise enhancement. This paper presents a careful comparison of theoretical and measured performance for an array of 256 microphones using simple delay-and-sum beamforming. This is the largest currently functional, all digital-signal-processing array that we know of. The array is wall-mounted in the moderately adverse environment of a general-purpose laboratory (8 m x 8 m x 3 m). The room has a T60 reverberation time of 550 ms. Reverberation effects in this room severely impact the array's performance. However, the width of the main lobe remains comparable to that of a simplified prediction. Broadband spatial resolution shows a single central peak with 10 dB gain about 0.4 m in diameter at the -3 dB level. Away from that peak, the response is approximately flat over most of the room. Optimal weighting for signal-to-noise enhancement degrades the spatial resolution minimally. Experimentally, we verify that signal-to-noise gain is less than proportional to the square root of the number of microphones, probably due to the partial correlation of the noise between channels, to variation of signal intensity with polar angle about the source, and to imperfect correlation of the signal over the array caused by reverberations. We show measurements of the relative importance of each effect in our environment.
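For reference, delay-and-sum beamforming of the kind used by this array amounts to time-aligning each microphone signal to a chosen focal point and averaging. A minimal sketch follows; it rounds delays to whole samples and assumes free-field propagation, which a practical large-aperture system would refine with fractional-delay filters and calibration.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def delay_and_sum(signals, mic_pos, focus, fs):
    """Simple time-domain delay-and-sum beamformer.

    signals -- (num_mics, num_samples) array of microphone signals
    mic_pos -- (num_mics, 3) microphone coordinates in metres
    focus   -- (3,) focal point coordinates in metres
    fs      -- sample rate in Hz
    """
    dists = np.linalg.norm(mic_pos - focus, axis=1)
    delays = (dists - dists.min()) / C                   # relative propagation delays
    shifts = np.round(delays * fs).astype(int)           # integer-sample alignment
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([signals[m, shifts[m]:shifts[m] + n]
                        for m in range(signals.shape[0])])
    return aligned.mean(axis=0)                          # beamformed output
```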
Removing Background Noise with Phased Array Signal Processing
NASA Technical Reports Server (NTRS)
Podboy, Gary; Stephens, David
2015-01-01
Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by OptiNav combined with cross spectral matrix subtraction. The test was conducted in the free jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
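Cross spectral matrix subtraction, one of the processing steps mentioned above, removes a background-only estimate of the cross-spectral matrix from the one measured with the source active before beamforming. A hedged sketch of the CSM estimation and subtraction is shown below; the block length, windowing, and variable names are illustrative choices, not those of the test.

```python
import numpy as np

def cross_spectral_matrix(signals, block=1024):
    """Estimate the cross-spectral matrix (CSM) by Welch-style averaging.

    signals -- (num_mics, num_samples) microphone time series
    Returns csm with shape (num_freqs, num_mics, num_mics).
    """
    m, n = signals.shape
    win = np.hanning(block)
    nblocks = n // block
    spectra = np.stack([
        np.fft.rfft(signals[:, k * block:(k + 1) * block] * win, axis=1)
        for k in range(nblocks)])                          # (nblocks, m, nfreq)
    return np.einsum('kif,kjf->fij', spectra, spectra.conj()) / nblocks

# Background subtraction: the CSM of a background-only run is removed from
# the CSM measured with the source active, and the difference is beamformed.
# csm_clean = cross_spectral_matrix(sig_total) - cross_spectral_matrix(sig_background)
```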
NASA Astrophysics Data System (ADS)
Sarradj, Ennes
2010-04-01
Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimations of absolute source levels and, in some cases, from low resolution. Deconvolution approaches such as DAMAS have better performance but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
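A simplified reading of the eigenvalue-based idea is sketched below: the cross-spectral matrix at one frequency is eigendecomposed, the largest eigenvalues are attributed to the signal subspace, and each is converted to a per-microphone source power. This is only an illustration of the principle, not the author's full estimation procedure.

```python
import numpy as np

def signal_subspace_levels(csm_f, num_sources):
    """Rough per-source power estimates from the eigenvalues of a CSM at one
    frequency (simplified illustration).

    csm_f       -- (num_mics, num_mics) Hermitian cross-spectral matrix
    num_sources -- assumed number of incoherent sources
    Each retained eigenvalue is attributed to one source; dividing by the
    number of microphones converts it to a per-microphone power, assuming
    unit-magnitude steering vector elements.
    """
    eigvals, eigvecs = np.linalg.eigh(csm_f)          # ascending eigenvalues
    sig_vals = eigvals[-num_sources:][::-1]           # signal-subspace eigenvalues
    num_mics = csm_f.shape[0]
    return sig_vals / num_mics, eigvecs[:, -num_sources:][:, ::-1]
```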
A four-element end-fire microphone array for acoustic measurements in wind tunnels
NASA Technical Reports Server (NTRS)
Soderman, P. T.; Noble, S. C.
1974-01-01
A prototype four-element end-fire microphone array was designed and built for evaluation as a directional acoustic receiver for use in large wind tunnels. The microphone signals were digitized, time delayed, summed, and reconverted to analog form in such a way as to create a directional response with the main lobe along the array axis. The measured array directivity agrees with theoretical predictions, confirming the circuit design of the electronic control module. The array with 0.15 m (0.5 ft) microphone spacing rejected reverberations and background noise in the Ames 40- by 80-foot wind tunnel by 5 to 12 dB for frequencies above 400 Hz.
Warren, Megan R; Sangiamo, Daniel T; Neunuebel, Joshua P
2018-03-01
An integral component in the assessment of vocal behavior in groups of freely interacting animals is the ability to determine which animal is producing each vocal signal. This process is facilitated by using microphone arrays with multiple channels. Here, we made important refinements to a state-of-the-art microphone-array-based system used to localize vocal signals produced by freely interacting laboratory mice. Key changes to the system included increasing the number of microphones as well as refining the methodology for localizing and assigning vocal signals to individual mice. We systematically demonstrate that the improvements in the methodology for localizing mouse vocal signals led to an increase in the number of signals detected as well as the number of signals accurately assigned to an animal. These changes facilitated the acquisition of larger and more comprehensive data sets that better represent the vocal activity within an experiment. Furthermore, this system will allow more thorough analyses of the role that vocal signals play in social communication. We expect that such advances will broaden our understanding of social communication deficits in mouse models of neurological disorders.
Acoustic Location of Lightning Using Interferometric Techniques
NASA Astrophysics Data System (ADS)
Erives, H.; Arechiga, R. O.; Stock, M.; Lapierre, J. L.; Edens, H. E.; Stringer, A.; Rison, W.; Thomas, R. J.
2013-12-01
Acoustic arrays have been used to accurately locate thunder sources in lightning flashes. The acoustic arrays located around the Magdalena mountains of central New Mexico produce locations which compare quite well with source locations provided by the New Mexico Tech Lightning Mapping Array. These arrays utilize three outer microphones surrounding a fourth microphone located at the center. The location is computed by band-passing the signal to remove noise and then cross-correlating the outer three microphones with respect to the center reference microphone. While this method works very well, it works best on signals with high signal-to-noise ratios; weaker signals are not as well located. Therefore, methods are being explored to improve the location accuracy and detection efficiency of the acoustic location systems. The signal received by acoustic arrays is strikingly similar to the signal received by radio frequency interferometers. Both acoustic location systems and radio frequency interferometers make coherent measurements of a signal arriving at a number of closely spaced antennas, and both then correlate these signals between pairs of receivers to determine the direction to the source of the received signal. The primary difference between the two systems is the velocity of propagation of the emission, which is much slower for sound. Therefore, the same frequency-based techniques that have been used quite successfully with radio interferometers should be applicable to acoustic measurements as well. The results presented here are comparisons between the location results obtained with the current cross-correlation method and techniques developed for radio frequency interferometers applied to acoustic signals. The data were obtained during the summer 2013 storm season using multiple arrays sensitive to both infrasonic-frequency and audio-frequency acoustic emissions from lightning. Preliminary results show that interferometric techniques have good potential for improving the lightning location accuracy and detection efficiency of acoustic arrays.
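The cross-correlation step described above can be illustrated as follows: each outer microphone signal is band-pass filtered and cross-correlated against the central reference to yield a time delay. The sketch below uses SciPy filtering and an illustrative pass band; the actual band and processing details of the deployed arrays may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def thunder_delay(outer, center, fs, band=(20.0, 200.0)):
    """Delay of the thunder signal at an outer microphone relative to the
    central reference microphone, via band-passed cross-correlation.
    The pass band is an illustrative choice, not the one used by the authors.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    xo = filtfilt(b, a, outer)
    xc = filtfilt(b, a, center)
    corr = correlate(xo, xc, mode='full')
    lag = np.argmax(corr) - (len(xc) - 1)   # samples; positive => outer lags center
    return lag / fs                         # delay in seconds
```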
NASA Astrophysics Data System (ADS)
Bader, Rolf
This chapter deals with microphone arrays. It is arranged according to the different methods available, proceeding through the various problems and the mathematical techniques used to address them. After discussing general properties of different array types, such as plane arrays, spherical arrays, or scanning arrays, it proceeds to the signal processing tools most used in speech processing. The third section discusses backpropagation methods based on the Helmholtz-Kirchhoff integral, which yield spatial radiation patterns of vibrating bodies or of the air.
DOT National Transportation Integrated Search
2006-05-08
This paper describes the integration of wavelet analysis and time-domain beamforming of microphone array output signals for analyzing the acoustic emissions from airplane-generated wake vortices. This integrated process provides visual and quanti...
Performance Analysis of a Cost-Effective Electret Condenser Microphone Directional Array
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Gerhold, Carl H.; Zuckerwar, Allan J.; Herring, Gregory C.; Bartram, Scott M.
2003-01-01
Microphone directional array technology continues to be a critical part of the overall instrumentation suite for experimental aeroacoustics. Unfortunately, high sensor cost remains one of the limiting factors in the construction of very high-density arrays (i.e., arrays containing several hundred channels or more) which could be used to implement advanced beamforming algorithms. In an effort to reduce the implementation cost of such arrays, the authors have undertaken a systematic performance analysis of a prototype 35-microphone array populated with commercial electret condenser microphones. An ensemble of microphones coupling commercially available electret cartridges with passive signal conditioning circuitry was fabricated for use with the Langley Large Aperture Directional Array (LADA). A performance analysis consisting of three phases was then performed: (1) characterize the acoustic response of the microphones via laboratory testing and calibration, (2) evaluate the beamforming capability of the electret-based LADA using a series of independently controlled point sources in an anechoic environment, and (3) demonstrate the utility of an electret-based directional array in a real-world application, in this case a cold flow jet operating at high subsonic velocities. The results of the investigation revealed a microphone frequency response suitable for directional array use over a range of 250 Hz - 40 kHz, a successful beamforming evaluation using the electret-populated LADA to measure simple point sources at frequencies up to 20 kHz, and a successful demonstration using the array to measure noise generated by the cold flow jet. This paper presents an overview of the tests conducted along with sample data obtained from those tests.
NASA Technical Reports Server (NTRS)
Lockard, David P.; Humphreys, William M.; Khorrami, Mehdi R.; Fares, Ehab; Casalino, Damiano; Ravetta, Patricio A.
2015-01-01
An 18%-scale, semi-span model is used as a platform for examining the efficacy of microphone array processing using synthetic data from numerical simulations. Two hybrid RANS/LES codes coupled with Ffowcs Williams-Hawkings solvers are used to calculate 97 microphone signals at the locations of an array employed in the NASA LaRC 14x22 tunnel. Conventional, DAMAS, and CLEAN-SC array processing is applied in an identical fashion to the experimental and computational results for three different configurations involving deploying and retracting the main landing gear and a part span flap. Despite the short time records of the numerical signals, the beamform maps are able to isolate the noise sources, and the appearance of the DAMAS synthetic array maps is generally better than those from the experimental data. The experimental CLEAN-SC maps are similar in quality to those from the simulations indicating that CLEAN-SC may have less sensitivity to background noise. The spectrum obtained from DAMAS processing of synthetic array data is nearly identical to the spectrum of the center microphone of the array, indicating that for this problem array processing of synthetic data does not improve spectral comparisons with experiment. However, the beamform maps do provide an additional means of comparison that can reveal differences that cannot be ascertained from spectra alone.
Assessment of a directional microphone array for hearing-impaired listeners.
Soede, W; Bilsen, F A; Berkhout, A J
1993-08-01
Hearing-impaired listeners often have great difficulty understanding speech in surroundings with background noise or reverberation. Based on array techniques, two microphone prototypes (broadside and endfire) have been developed with strongly directional characteristics [Soede et al., "Development of a new directional hearing instrument based on array technology," J. Acoust. Soc. Am. 94, 785-798 (1993)]. Physical measurements show that the arrays attenuate reverberant sound by 6 dB (free-field) and can improve the signal-to-noise ratio by 7 dB in a diffuse noise field (measured with a KEMAR manikin). For the clinical assessment of these microphones an experimental setup was made in a sound-insulated listening room with one loudspeaker in front of the listener simulating the partner in a discussion and eight loudspeakers placed on the edges of a cube producing a diffuse background noise. The hearing-impaired subject wearing his own (familiar) hearing aid is placed in the center of the cube. The speech-reception threshold in noise for simple Dutch sentences was determined with a normal single omnidirectional microphone and with one of the microphone arrays. The results of monaural listening tests with hearing impaired subjects show that in comparison with an omnidirectional hearing-aid microphone the broadside and endfire microphone array gives a mean improvement of the speech reception threshold in noise of 7.0 dB (26 subjects) and 6.8 dB (27 subjects), respectively. Binaural listening with two endfire microphone arrays gives a binaural improvement which is comparable to the binaural improvement obtained by listening with two normal ears or two conventional hearing aids.
Parallel Processing of Large Scale Microphone Arrays for Sound Capture
NASA Astrophysics Data System (ADS)
Jan, Ea-Ee.
1995-01-01
Performance of microphone sound pick up is degraded by deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise. The degradation becomes more prominent in a teleconferencing environment in which the microphone is positioned far away from the speaker. Moreover, the ideal teleconference should feel as easy and natural as face-to-face communication with another person. This suggests hands-free sound capture with no tether or encumbrance by hand-held or body-worn sound equipment. Microphone arrays for this application represent an appropriate approach. This research develops new microphone array and signal processing techniques for high quality hands-free sound capture in noisy, reverberant enclosures. The new techniques combine matched-filtering of individual sensors and parallel processing to provide acute spatial volume selectivity which is capable of mitigating the deleterious effects of noise interference and multipath distortion. The new method outperforms traditional delay-and-sum beamformers which provide only directional spatial selectivity. The research additionally explores truncated matched-filtering and random distribution of transducers to reduce complexity and improve sound capture quality. All designs are first established by computer simulation of array performance in reverberant enclosures. The simulation is achieved by a room model which can efficiently calculate the acoustic multipath in a rectangular enclosure up to a prescribed order of images. It also calculates the incident angle of the arriving signal. Experimental arrays were constructed and their performance was measured in real rooms. Real room data were collected in a hard-walled laboratory and a controllable variable-acoustics enclosure of similar size, approximately 6 x 6 x 3 m. An extensive speech database was also collected in these two enclosures for future research on microphone arrays. The simulation results are shown to be consistent with the real room data. Localization of sound sources has been explored using cross-power spectrum time delay estimation and has been evaluated using real room data under slightly, moderately, and highly reverberant conditions. To improve the accuracy and reliability of the source localization, an outlier detector that removes incorrect time delay estimates has been developed. To provide speaker selectivity for microphone array systems, a hands-free speaker identification system has been studied. A recently invented feature using selected spectrum information outperforms traditional recognition methods. Measured results demonstrate the capabilities of speaker selectivity from a matched-filtered array. In addition, simulation utilities, including matched-filtering processing of the array and hands-free speaker identification, have been implemented on the massively parallel nCube supercomputer. This parallel computation highlights the requirements for real-time processing of array signals.
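The room simulation referred to above is commonly implemented with an image-source model for a rectangular enclosure. A crude sketch is given below; it assumes a rigid shoebox geometry with a single frequency-independent reflection coefficient and sample-rounded delays, which is far simpler than the simulation described in the thesis.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def image_source_ir(src, mic, room, fs, order=2, beta=0.8, length=8192):
    """Crude image-source impulse response for a rectangular ("shoebox") room.

    src, mic -- (3,) source and microphone positions in metres
    room     -- (3,) room dimensions in metres
    beta     -- frequency-independent reflection coefficient per wall bounce
    """
    src, mic, room = (np.asarray(v, dtype=float) for v in (src, mic, room))
    h = np.zeros(length)
    for idx in np.ndindex(*(2 * order + 1,) * 3):    # image lattice indices
        n = np.array(idx) - order
        for f in np.ndindex(2, 2, 2):                # mirror or not, per axis
            flip = np.array(f)
            img = (1 - 2 * flip) * src + 2 * n * room    # image source position
            bounces = np.sum(np.abs(2 * n - flip))       # wall hits along each axis
            d = np.linalg.norm(img - mic)
            k = int(round(d / C * fs))                   # arrival sample
            if d > 0 and k < length:
                h[k] += beta ** bounces / (4 * np.pi * d)
    return h
```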
NASA Technical Reports Server (NTRS)
Greenwood, Eric II; Schmitz, Fredric H.
2009-01-01
A new method of separating the contributions of helicopter main and tail rotor noise sources is presented, making use of ground-based acoustic measurements. The method employs time-domain de-Dopplerization to transform the acoustic pressure time-history data collected from an array of ground-based microphones to the equivalent time-history signals observed by an array of virtual in-flight microphones traveling with the helicopter. The now-stationary signals observed by the virtual microphones are then periodically averaged using the main and tail rotor once-per-revolution triggers. The averaging process suppresses noise which is not periodic with the respective rotor, allowing for the separation of main and tail rotor pressure time-histories. The averaged measurements are then interpolated across the range of directivity angles captured by the microphone array in order to generate separate acoustic hemispheres for the main and tail rotor noise sources. The new method is successfully applied to ground-based microphone measurements of a Bell 206B3 helicopter and demonstrates the strong directivity characteristics of harmonic noise radiation from both the main and tail rotors of that helicopter.
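The averaging step can be illustrated with a short sketch: segments of the de-Dopplerized time history, one per revolution as marked by the 1/rev trigger, are averaged so that noise not periodic with that rotor cancels. Variable names and the fixed-period assumption are illustrative.

```python
import numpy as np

def rev_synchronous_average(signal, trigger_idx, period_samples):
    """Average a de-Dopplerized pressure time history synchronously with a
    rotor's once-per-revolution trigger, suppressing noise that is not
    periodic with that rotor.

    signal         -- de-Dopplerized pressure samples
    trigger_idx    -- sample indices of the 1/rev trigger pulses
    period_samples -- nominal samples per revolution
    """
    segments = [signal[t:t + period_samples]
                for t in trigger_idx
                if t + period_samples <= len(signal)]
    return np.mean(segments, axis=0)       # one averaged revolution
```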
Advanced flow noise reducing acoustic sensor arrays
NASA Astrophysics Data System (ADS)
Fine, Kevin; Drzymkowski, Mark; Cleckler, Jay
2009-05-01
SARA, Inc. has developed microphone arrays that are as effective at reducing flow noise as foam windscreens and sufficiently rugged for tough battlefield environments. These flow noise reducing (FNR) sensors have a metal body and are flat and conformally mounted, so they can be attached to the roofs of land vehicles and are resistant to scrapes from branches. Flow noise at low Mach numbers is created by turbulent eddies moving with the fluid flow and inducing pressure variations on microphones. Our FNR sensors average the pressure over the diameter (~20 cm) of their apertures, reducing the noise created by all but the very largest eddies. This is in contrast to the acoustic wave, which has negligible variation over the aperture at the frequencies of interest (f ≤ 400 Hz). We have also post-processed the signals to further reduce the flow noise. Two microphones separated along the flow direction exhibit highly correlated noise. The time shift of the correlation corresponds to the time for the eddies in the flow to travel between the microphones. We have created linear microphone arrays parallel to the flow and have reduced flow noise by as much as 10 to 15 dB by subtracting time-shifted signals.
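The post-processing idea described above, subtracting a time-shifted upstream signal so that convecting eddies cancel while the much faster acoustic wave largely survives, can be sketched as follows. The convection-speed ratio and equal-length-signal assumption are illustrative, not the values used in the SARA hardware.

```python
import numpy as np

def subtract_convected_noise(mic_up, mic_down, spacing, flow_speed, fs, uc_ratio=0.8):
    """Subtract the upstream microphone signal, delayed by the eddy
    convection time, from the downstream microphone signal.

    mic_up, mic_down -- equal-length signals from two mics aligned with the flow
    spacing          -- microphone separation along the flow, metres
    flow_speed       -- free-stream speed, m/s
    uc_ratio         -- assumed eddy convection speed / flow speed (typical
                        textbook value, not the paper's number)
    """
    delay = int(round(spacing / (uc_ratio * flow_speed) * fs))  # samples
    out = mic_down.astype(float).copy()
    out[delay:] -= mic_up[:len(mic_up) - delay]                 # cancel convected noise
    return out
```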
Spatial sound field synthesis and upmixing based on the equivalent source method.
Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang
2014-01-01
Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more channels of loudspeakers than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness in the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
Noise Reduction with Microphone Arrays for Speaker Identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, Z
Reducing acoustic noise in audio recordings is an ongoing problem that plagues many applications. This noise is hard to reduce because of interfering sources and the non-stationary behavior of the overall background noise. Many single-channel noise reduction algorithms exist but are limited in that the more the noise is reduced, the more the signal of interest is distorted, because the signal and noise overlap in frequency. Specifically, acoustic background noise causes problems in the area of speaker identification. Recording a speaker in the presence of acoustic noise ultimately limits the performance and confidence of speaker identification algorithms. In situations where it is impossible to control the environment where the speech sample is taken, noise reduction filtering algorithms need to be developed to clean the recorded speech of background noise. Because single-channel noise reduction algorithms would distort the speech signal, the overall challenge of this project was to see if spatial information provided by microphone arrays could be exploited to aid in speaker identification. The goals are: (1) test the feasibility of using microphone arrays to reduce background noise in speech recordings; (2) characterize and compare different multichannel noise reduction algorithms; (3) provide recommendations for using these multichannel algorithms; and (4) ultimately answer the question: can the use of microphone arrays aid in speaker identification?
Design of Small MEMS Microphone Array Systems for Direction Finding of Outdoors Moving Vehicles
Zhang, Xin; Huang, Jingchang; Song, Enliang; Liu, Huawei; Li, Baoqing; Yuan, Xiaobing
2014-01-01
In this paper, a MEMS microphone array system scheme is proposed which implements real-time direction of arrival (DOA) estimation for moving vehicles. Wind noise is the primary source of unwanted noise on microphones outdoors. A multiple signal classification (MUSIC) algorithm is used for direction finding, combined with a spatial coherence measure to discriminate between the wind noise and the acoustic signal of a vehicle. The method is implemented on a SHARC DSP processor and the real-time estimated DOA is uploaded through Bluetooth or a UART module. Experimental results in different places show the validity of the system, and the deviation is no greater than 6° in the presence of wind noise.
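A generic narrowband MUSIC sketch is shown below for a uniform linear array; the authors' MEMS array geometry, wideband handling, and spatial-coherence wind-noise test are not reproduced.

```python
import numpy as np

def music_spectrum(snapshots, num_sources, d_over_lambda=0.5, angles_deg=None):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.

    snapshots     -- (num_mics, num_snapshots) complex narrowband samples
    num_sources   -- assumed number of sources
    d_over_lambda -- element spacing in wavelengths
    Peaks of the returned pseudo-spectrum indicate directions of arrival.
    """
    if angles_deg is None:
        angles_deg = np.arange(-90.0, 90.5, 0.5)
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, vecs = np.linalg.eigh(R)                               # ascending eigenvalues
    En = vecs[:, :m - num_sources]                            # noise subspace
    k = np.arange(m)
    spectrum = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(th))  # ULA steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles_deg, np.array(spectrum)
```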
Multi-microphone adaptive array augmented with visual cueing.
Gibson, Paul L; Hedin, Dan S; Davies-Venn, Evelyn E; Nelson, Peggy; Kramer, Kevin
2012-01-01
We present the development of an audiovisual array that enables hearing aid users to converse with multiple speakers in reverberant environments with significant speech babble noise, where their hearing aids do not function well. The system concept consists of a smartphone, a smartphone accessory, and a smartphone software application. The smartphone accessory concept is a multi-microphone audiovisual array in a form factor that allows attachment to the back of the smartphone. The accessory will also contain a low-power radio by which it can transmit audio signals to compatible hearing aids. The smartphone software application concept will use the smartphone's built-in camera to acquire images and perform real-time face detection using the built-in face detection support of the smartphone. The audiovisual beamforming algorithm uses the location of talking targets to improve the signal-to-noise ratio and consequently improve the user's speech intelligibility. Since the proposed array system leverages a handheld consumer electronic device, it will be portable and low cost. A PC-based experimental system was developed to demonstrate the feasibility of an audiovisual multi-microphone array, and these results are presented.
Chung, King
2004-01-01
This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized.
NASA Astrophysics Data System (ADS)
Liu, Xingchen; Hu, Zhiyong; He, Qingbo; Zhang, Shangbin; Zhu, Jun
2017-10-01
Doppler distortion and background noise can reduce the effectiveness of wayside acoustic train bearing monitoring and fault diagnosis. This paper proposes a method of combining a microphone array and a matching pursuit algorithm to overcome these difficulties. First, a dictionary is constructed based on the characteristics and mechanism of a far-field assumption. Then, the angle of arrival of the train bearing is acquired by applying matching pursuit to analyze the acoustic array signals. Finally, after obtaining the resampled time series, the Doppler distortion can be corrected, which is convenient for further diagnostic work. Compared with traditional single-microphone Doppler correction methods, the advantages of the presented array method are its robustness to background noise and its requiring almost no pre-measured parameters. Simulation and experimental study show that the proposed method is effective in performing wayside acoustic bearing fault diagnosis.
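Matching pursuit itself is a greedy decomposition of the array signal over a dictionary of candidate atoms. The sketch below shows the generic algorithm, assuming a real-valued dictionary with unit-norm columns; building the Doppler-distorted far-field atoms used in the paper is not shown.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=50):
    """Generic matching pursuit over unit-norm dictionary columns.

    signal     -- (n_samples,) observed signal
    dictionary -- (n_samples, n_atoms) matrix of unit-norm atoms
    Returns the coefficient vector and the final residual.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual          # inner products with all atoms
        k = np.argmax(np.abs(corr))             # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]  # remove its contribution
    return coeffs, residual
```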
Chen, Hanchi; Abhayapala, Thushara D; Zhang, Wen
2015-11-01
Soundfield analysis based on spherical harmonic decomposition has been widely used in various applications; however, a drawback is the three-dimensional geometry of the microphone arrays. In this paper, a method to design two-dimensional planar microphone arrays that are capable of capturing three-dimensional (3D) spatial soundfields is proposed. Through the utilization of both omni-directional and first order microphones, the proposed microphone array is capable of measuring soundfield components that are undetectable to conventional planar omni-directional microphone arrays, thus providing the same functionality as 3D arrays designed for the same purpose. Simulations show that the accuracy of the planar microphone array is comparable to traditional spherical microphone arrays. Due to its compact shape, the proposed microphone array greatly increases the feasibility of 3D soundfield analysis techniques in real-world applications.
Noise-Canceling Helmet Audio System
NASA Technical Reports Server (NTRS)
Seibert, Marc A.; Culotta, Anthony J.
2007-01-01
A prototype helmet audio system has been developed to improve voice communication for the wearer in a noisy environment. The system was originally intended to be used in a space suit, wherein noise generated by airflow of the spacesuit life-support system can make it difficult for remote listeners to understand the astronaut's speech and can interfere with the astronaut's attempt to issue vocal commands to a voice-controlled robot. The system could be adapted to terrestrial use in helmets of protective suits that are typically worn in noisy settings: examples include biohazard, fire, rescue, and diving suits. The system (see figure) includes an array of microphones and small loudspeakers mounted at fixed positions in a helmet, amplifiers and signal-routing circuitry, and a commercial digital signal processor (DSP). Notwithstanding the fixed positions of the microphones and loudspeakers, the system can accommodate itself to any normal motion of the wearer's head within the helmet. The system operates in conjunction with a radio transceiver. An audio signal arriving via the transceiver and intended to be heard by the wearer is adjusted in volume and otherwise conditioned and sent to the loudspeakers. The wearer's speech is collected by the microphones, the outputs of which are logically combined (phased) so as to form a microphone-array directional sensitivity pattern that discriminates in favor of sounds coming from the vicinity of the wearer's mouth and against sounds coming from elsewhere. In the DSP, digitized samples of the microphone outputs are processed to filter out airflow noise and to eliminate feedback from the loudspeakers to the microphones. The resulting conditioned version of the wearer's speech signal is sent to the transceiver.
A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS
NASA Astrophysics Data System (ADS)
Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto
At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus, developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for imperceptible sound in loud noise environments. Two loudspeakers simultaneously played generator noise and a voice attenuated by 20 dB (1/100 of the power) relative to the generator noise, at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed, and the sound of the voice was extracted and played back as an audible sound by array signal processing.
Sound source tracking device for telematic spatial sound field reproduction
NASA Astrophysics Data System (ADS)
Cardenas, Bruno
This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the various channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, which were recorded at close distance with lavalier microphones, spatially corrected using a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Lockard, David P.; Khorrami, Mehdi R.; Culliton, William G.; McSwain, Robert G.; Ravetta, Patricio A.; Johns, Zachary
2016-01-01
A new aeroacoustic measurement capability has been developed, consisting of a large channel-count, field-deployable microphone phased array suitable for airframe noise flyover measurements for a range of aircraft types and scales. The array incorporates up to 185 hardened, weather-resistant sensors suitable for outdoor use. A custom 4-mA current loop receiver circuit with temperature compensation was developed to power the sensors over extended cable lengths with minimal degradation of the signal-to-noise ratio and frequency response. Extensive laboratory calibrations and environmental testing of the sensors were conducted to verify the design's performance specifications. A compact data system combining sensor power, signal conditioning, and digitization was assembled for use with the array. Complementing the data system is a robust analysis system capable of near real-time presentation of beamformed and deconvolved contour plots and integrated spectra obtained from array data acquired during flyover passes. Additional instrumentation systems needed to process the array data were also assembled, including a commercial weather station and a video monitoring/recording system. A detailed mock-up of the instrumentation suite (phased array, weather station, and data processor) was exercised in the NASA Langley Acoustic Development Laboratory to vet the system performance. The first deployment of the system occurred at Finnegan Airfield at Fort A.P. Hill, where the array was utilized to measure the vehicle noise from a number of sUAS (small Unmanned Aerial System) aircraft. A unique in-situ calibration method for the array microphones using a hovering aerial sound source was attempted for the first time during the deployment.
Optimum sensor placement for microphone arrays
NASA Astrophysics Data System (ADS)
Rabinkin, Daniel V.
Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single-microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single-microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. The shape and gain advantage of the capture region for these techniques is described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
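The cross-power spectrum phase step can be illustrated with a GCC-PHAT time-delay estimator, one common realization of that idea (the modified Omologo-Svaizer expression used in the thesis is not reproduced here).

```python
import numpy as np

def gcc_phat_tdoa(x, y, fs, max_tau=None):
    """Time delay of arrival of x relative to y via generalized
    cross-correlation with phase transform (PHAT) weighting.
    Positive tau means x arrives later than y.
    """
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    Gxy = X * np.conj(Y)                                    # cross-power spectrum
    cc = np.fft.irfft(Gxy / (np.abs(Gxy) + 1e-12), n=n)     # PHAT-weighted correlation
    max_shift = n // 2 if max_tau is None else min(int(max_tau * fs), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```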
Population density estimated from locations of individuals on a passive detector array
Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.
2009-01-01
The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
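As a toy illustration of the signal-strength detection model described above, one can let the expected received level decay with distance and declare a detection when the noisy level exceeds the threshold. All parameter values below are made up for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def detection_prob(dist, beta0=90.0, beta1=0.3, threshold=60.0, sd=5.0):
    """Probability that a vocalization at distance `dist` (m) is detected:
    expected received level (dB) falls off linearly with distance, and a
    detection is recorded when the level plus Gaussian noise exceeds the
    threshold. All parameters are illustrative.
    """
    mu = beta0 - beta1 * np.asarray(dist)        # expected signal strength
    return 1.0 - norm.cdf(threshold, loc=mu, scale=sd)
```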
Llamas: Large-area microphone arrays and sensing systems
NASA Astrophysics Data System (ADS)
Sanz-Robinson, Josue
Large-area electronics (LAE) provides a platform to build sensing systems, based on distributing large numbers of densely spaced sensors over a physically expansive space. Due to their flexible, "wallpaper-like" form factor, these systems can be seamlessly deployed in everyday spaces. They go beyond merely supplying sensor readings, aiming instead to transform the wealth of data from these sensors into actionable inferences about our physical environment. This requires vertically integrated systems that span the entirety of the signal processing chain, including transducers and devices, circuits, and signal processing algorithms. To this end we develop hybrid LAE/CMOS systems, which exploit the complementary strengths of LAE, enabling spatially distributed sensors, and CMOS ICs, providing computational capacity for signal processing. To explore the development of hybrid sensing systems based on vertical integration across the signal processing chain, we focus on two main drivers: (1) thin-film diodes, and (2) microphone arrays for blind source separation. 1) Thin-film diodes are a key building block for many applications, such as RFID tags or power transfer over non-contact inductive links, which require rectifiers for AC-to-DC conversion. We developed hybrid amorphous/nanocrystalline silicon diodes, which are fabricated at low temperatures (<200 °C) to be compatible with processing on plastic, and have high current densities (5 A/cm2 at 1 V) and high-frequency operation (cutoff frequency of 110 MHz). 2) We designed a system for separating the voices of multiple simultaneous speakers, which can ultimately be fed to a voice-command recognition engine for controlling electronic systems. On the device level, we developed flexible PVDF microphones, which were used to create a large-area microphone array. On the circuit level, we developed localized a-Si TFT amplifiers and a custom CMOS IC for system control, sensor readout, and digitization. On the signal processing level, we developed an algorithm for blind source separation in a real, reverberant room, based on beamforming and binary masking. It requires no knowledge about the location of the speakers or microphones. Instead, it uses cluster analysis techniques to determine the time delays for beamforming, thus adapting to the unique acoustic environment of the room.
Motorcycle detection and counting using stereo camera, IR camera, and microphone array
NASA Astrophysics Data System (ADS)
Ling, Bo; Gibson, David R. P.; Middleton, Dan
2013-03-01
Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations, and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera, and a unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show up as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interference of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.
Lightning Location Using Acoustic Signals
NASA Astrophysics Data System (ADS)
Badillo, E.; Arechiga, R. O.; Thomas, R. J.
2013-05-01
In the summer of 2011 and 2012, a network of acoustic arrays was deployed in the Magdalena mountains of central New Mexico to locate lightning flashes. A Times-Correlation (TC) ray-tracing-based technique was developed in order to obtain the location of lightning flashes near the network. The TC technique locates acoustic sources from lightning. It was developed to complement the lightning location of RF sources detected by the Lightning Mapping Array (LMA) developed at Langmuir Laboratory, New Mexico Tech. The network consisted of four arrays with four microphones each. The microphones on each array were placed in a triangular configuration with one of the microphones in the center of the array. The distance between the central microphone and the rest of them was about 30 m. The distance between centers of the arrays ranged from 500 m to 1500 m. The TC technique uses times of arrival (TOA) of acoustic waves to trace back the location of thunder sources. In order to obtain the times of arrival, the signals were filtered in a frequency band of 2 to 20 Hz and cross-correlated. Once the times of arrival were obtained, the Levenberg-Marquardt algorithm was applied to locate the spatial coordinates (x, y, and z) of thunder sources. Two techniques were used and contrasted to compute the accuracy of the TC method: nearest neighbors (NN) between acoustic and LMA located sources, and the standard deviation from the curvature matrix of the system as a measure of dispersion of the results. For the best case scenario, a triggered lightning event, the TC method applied with four microphones located sources with a median error of 152 m and 142.9 m using nearest neighbors and standard deviation, respectively. [Figure: altitude versus longitude map for the lightning event recorded at 18:47:35 UTC, August 6, 2012, obtained with the four-channel MGTM station; black dots are the TC-computed sources and lighter dots are the LMA data for the same event.]
Signal Restoration of Non-stationary Acoustic Signals in the Time Domain
NASA Technical Reports Server (NTRS)
Babkin, Alexander S.
1988-01-01
Signal restoration is a method of transforming a nonstationary signal acquired by a ground-based microphone into an equivalent stationary signal. The benefit of signal restoration is a simplification of the flight test requirements, because it could dispense with the need to acquire acoustic data with another aircraft flying in concert with the rotorcraft. The data quality is also generally improved because contamination of the signal by propeller and wind noise is not present. The restoration methodology can also be combined with other data acquisition methods, such as a multiple linear microphone array, for further improvement of the test results. The methodology and software are presented for performing the signal restoration in the time domain. The method has no restrictions on flight path geometry or flight regimes. The only requirement is that the aircraft's spatial position be known relative to the microphone location and synchronized with the acoustic data. The restoration process assumes that the moving source radiates a stationary signal, which is then transformed into a nonstationary signal by various modulation processes. The restoration corrects only for the modulation due to the source motion.
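The restoration resamples the ground-microphone recording at the reception times corresponding to uniformly spaced emission times along the known flight path. A minimal sketch is given below; amplitude corrections for spherical spreading and convective amplification, which a full implementation would include, are omitted, and the names are illustrative.

```python
import numpy as np

def dedopplerize(p_mic, fs, src_traj, mic_pos, c=343.0):
    """De-Dopplerize a ground-microphone recording by resampling it at the
    reception times that correspond to uniformly spaced emission times.

    p_mic    -- recorded pressure samples at the fixed microphone
    src_traj -- (num_samples, 3) source position at each emission time step
    mic_pos  -- (3,) microphone position
    """
    src_traj = np.asarray(src_traj, dtype=float)
    mic_pos = np.asarray(mic_pos, dtype=float)
    t_emit = np.arange(len(src_traj)) / fs
    ranges = np.linalg.norm(src_traj - mic_pos, axis=1)
    t_recv = t_emit + ranges / c                  # when each emission arrives
    t_axis = np.arange(len(p_mic)) / fs           # recording time axis
    # Sample the recording at the arrival times -> signal versus emission time.
    return np.interp(t_recv, t_axis, p_mic)
```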
Bai, Mingsian R; Lai, Chang-Sheng; Wu, Po-Chen
2017-07-01
Circular microphone arrays (CMAs) are sufficient for many immersive audio applications because the azimuthal angles of sources are considered more important than the elevation angles in those settings. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for applications that involve three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively, and the direction of arrival (DOA) is estimated from the product of the two array output signals. In the separation stage, Tikhonov regularization and convex optimization are employed to extract the source amplitudes on the basis of the estimated DOA. The extracted signals from the two arrays are further processed by the normalized least-mean-square algorithm with internal iteration to yield the source signal with improved quality. To validate the 2.5-D CMA experimentally, a three-dimensionally printed circular array comprised of a 24-element CMA and an eight-element LLA is constructed. An objective perceptual evaluation of speech quality test and a subjective listening test are also undertaken.
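As a rough illustration of the localization stage described above, the sketch below applies a narrowband delay-and-sum beamformer to an assumed 24-element circular array and an assumed 8-element logarithmically spaced vertical array, and takes the DOA from the peak of the product of the two power maps. Geometry, frequency, and grid resolution are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: azimuth/elevation DOA from the product of two delay-and-sum
# beamformer power maps. Inputs are single-frequency complex snapshot vectors.
import numpy as np

C, FREQ = 343.0, 2000.0
K = 2 * np.pi * FREQ / C

def cma_positions(n=24, radius=0.1):
    az = 2 * np.pi * np.arange(n) / n
    return radius * np.stack([np.cos(az), np.sin(az), np.zeros(n)], axis=1)

def lla_positions(n=8, base=0.02, ratio=1.4):
    z = base * (ratio ** np.arange(n)).cumsum()     # logarithmic-like spacing
    return np.stack([np.zeros(n), np.zeros(n), z], axis=1)

def steering(positions, az, el):
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    return np.exp(1j * K * positions @ u)

def das_power(snapshot, positions, az, el):
    w = steering(positions, az, el) / len(positions)
    return np.abs(np.vdot(w, snapshot)) ** 2

def localize(x_cma, x_lla, n_az=180, n_el=90):
    pos_c, pos_l = cma_positions(), lla_positions()
    azs = np.linspace(-np.pi, np.pi, n_az)
    els = np.linspace(0.0, np.pi / 2, n_el)
    p_cma = np.array([[das_power(x_cma, pos_c, a, e) for e in els] for a in azs])
    p_lla = np.array([[das_power(x_lla, pos_l, a, e) for e in els] for a in azs])
    prod = p_cma * p_lla                    # product of the two array outputs
    i, j = np.unravel_index(np.argmax(prod), prod.shape)
    return azs[i], els[j]
```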
Passive Wake Acoustics Measurements at Denver International Airport
NASA Technical Reports Server (NTRS)
Wang, Frank Y.; Wassaf, Hadi; Dougherty, Robert P.; Clark, Kevin; Gulsrud, Andrew; Fenichel, Neil; Bryant, Wayne H.
2004-01-01
From August to September 2003, NASA conducted an extensive measurement campaign to characterize the acoustic signal of wake vortices. A phased microphone array, large both spatially and in number of elements, was deployed at Denver International Airport for this effort. This paper briefly describes the program background, the microphone array, and the supporting ground-truth and meteorological sensor suite. Sample results to date are then presented and discussed. In the frequency range processed so far, wake noise is seen to be generated predominantly from a very confined area around the cores.
Measurement Of Trailing Edge Noise using Directional Array and Coherent Output Power Methods
NASA Technical Reports Server (NTRS)
Hutcheson, Florence V.; Brooks, Thomas F.
2002-01-01
The use of a directional array of microphones for the measurement of trailing edge (TE) noise is described. The capabilities of this method are evaluated via measurements of TE noise from a NACA 63-215 airfoil model and from a cylindrical rod. This TE noise measurement approach is compared to one that is based on the cross-spectral analysis of the output signals from a pair of microphones (COP method). Advantages and limitations of both methods are examined. It is shown that the microphone array can accurately measure TE noise and capture its two-dimensional characteristic over a large frequency range for any TE configuration, as long as noise contamination from extraneous sources is within bounds. The COP method is shown to also accurately measure TE noise, but over a more limited frequency range that narrows as TE thickness increases. Finally, the applicability and generality of an airfoil self-noise prediction method were evaluated via comparison with the experimental data obtained using the COP and array measurement methods. The predicted and experimental results agree over large frequency ranges.
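For readers unfamiliar with the COP idea, the following minimal sketch computes a generic two-microphone coherent-output-power estimate with scipy: the cross-spectrum between the two microphones is used to retain only the portion of one microphone's auto-spectrum that is coherent with the other. This is a textbook estimate, not the specific processing used in the paper, and the window parameters are assumptions.

```python
# Hedged sketch of a coherent-output-power style estimate: only the part of the
# signal common to both microphones (e.g., trailing-edge noise) is retained.
import numpy as np
from scipy.signal import csd, welch

def coherent_output_power(x1, x2, fs, nperseg=4096):
    f, g12 = csd(x1, x2, fs=fs, nperseg=nperseg)      # cross-spectrum G12
    _, g11 = welch(x1, fs=fs, nperseg=nperseg)        # auto-spectrum G11
    cop = np.abs(g12) ** 2 / g11                      # equals coherence * G22
    return f, cop
```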
Improved methods for fan sound field determination
NASA Technical Reports Server (NTRS)
Cicon, D. E.; Sofrin, T. G.; Mathews, D. C.
1981-01-01
Several methods for determining acoustic mode structure in aircraft turbofan engines using wall microphone data were studied. A method for reducing data was devised and implemented which makes the definition of discrete coherent sound fields measured in the presence of engine speed fluctuation more accurate. For the analytical methods, algorithms were developed to define the dominant circumferential modes from full and partial circumferential arrays of microphones. Axial arrays were explored to define mode structure as a function of cutoff ratio, and the use of data taken at several constant speeds was also evaluated in an attempt to reduce instrumentation requirements. Sensitivities of the various methods to microphone density, array size and measurement error were evaluated and results of these studies showed these new methods to be impractical. The data reduction method used to reduce the effects of engine speed variation consisted of an electronic circuit which windowed the data so that signal enhancement could occur only when the speed was within a narrow range.
Broadband implementation of coprime linear microphone arrays for direction of arrival estimation.
Bush, Dane; Xiang, Ning
2015-07-01
Coprime arrays represent a form of sparse sensing that can achieve narrow beams using relatively few elements, exceeding the spatial Nyquist sampling limit. The purpose of this paper is to expand on and experimentally validate coprime array theory in an acoustic implementation. Two nested sparse uniform linear subarrays with coprime numbers of elements (M and N) each produce grating lobes that overlap with one another completely in just one direction. When the subarray outputs are combined, it is possible to retain the shared beam while mostly canceling the other superfluous grating lobes. In this way a small number of microphones (M+N-1) creates a narrow beam at higher frequencies, comparable to that of a densely populated uniform linear array of MN microphones. In this work beampatterns are simulated for a range of single frequencies, as well as for bands of frequencies. Narrowband experimental beampatterns are shown to correspond with simulated results even at frequencies other than the array's design frequency, and narrowband side lobe locations are shown to correspond to the theoretical values. Side lobes in the directional pattern are mitigated by increasing the bandwidth of the analyzed signals. Direction of arrival estimation is also implemented for two simultaneous noise sources in a free-field condition.
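The grating-lobe cancellation argument can be reproduced numerically. The sketch below, with assumed coprime counts M = 4 and N = 5 and an assumed design frequency, forms the beampatterns of the two sparse subarrays and multiplies them so that only the shared main lobe survives; it is an illustration of the principle rather than the paper's exact configuration.

```python
# Hedged sketch of the coprime idea: two sparse uniform subarrays with M and N
# elements (spacings N*d and M*d) each produce grating lobes, but the product
# of their beampatterns keeps only the beam they share.
import numpy as np

M, N = 4, 5               # coprime element counts (assumed)
C, FREQ = 343.0, 8000.0
D = C / FREQ / 2          # half-wavelength unit spacing at the design frequency

def beampattern(element_positions, angles_deg, freq=FREQ):
    k = 2 * np.pi * freq / C
    u = np.sin(np.radians(angles_deg))
    # steer to broadside (0 deg) and evaluate the array factor over all angles
    af = np.exp(1j * k * np.outer(element_positions, u)).mean(axis=0)
    return np.abs(af)

angles = np.linspace(-90, 90, 1801)
sub1 = beampattern(np.arange(M) * N * D, angles)   # M elements, spacing N*d
sub2 = beampattern(np.arange(N) * M * D, angles)   # N elements, spacing M*d
coprime = sub1 * sub2                              # shared main lobe survives
```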
Acoustic Localization with Infrasonic Signals
NASA Astrophysics Data System (ADS)
Threatt, Arnesha; Elbing, Brian
2015-11-01
Numerous geophysical and anthropogenic events emit infrasonic frequencies (<20 Hz), including volcanoes, hurricanes, wind turbines, and tornadoes. These sounds, which cannot be heard by the human ear, can be detected from large distances (in excess of 100 miles) because low-frequency acoustic signals have a very low decay rate in the atmosphere. Thus infrasound could be used for long-range, passive monitoring and detection of these events. An array of microphones separated by known distances can be used to locate a given source, a process known as acoustic localization. However, acoustic localization with infrasound is particularly challenging due to contamination from other signals, sensitivity to wind noise, and the difficulty of producing a trusted source for system development. The objective of the current work is to create an infrasonic source using a propane torch wand or a subwoofer and to locate the source using multiple infrasonic microphones. Preliminary results from various microphone configurations used to locate the source are presented.
The Effects of Linear Microphone Array Changes on Computed Sound Exposure Level Footprints
NASA Technical Reports Server (NTRS)
Mueller, Arnold W.; Wilson, Mark R.
1997-01-01
Airport land planning commissions are often faced with determining how much area around an airport is affected by the sound exposure levels (SELs) associated with helicopter operations. This paper presents a study of the effects that changing the size and composition of a microphone array has on the computed SEL contour (ground footprint) areas used by such commissions. Descent flight acoustic data measured by a fifteen-microphone array were reprocessed for five different combinations of microphones within this array, resulting in data for six different arrays for which SEL contours were computed. The fifteen-microphone array was defined as the 'baseline' array since it contained the greatest amount of data. The computations used a newly developed technique, the Acoustic Re-propagation Technique (ART), which uses parts of the NASA noise prediction program ROTONET. After the areas of the SEL contours were calculated, the differences between the areas were determined. The area differences for the six arrays show that a five-microphone and a three-microphone array (with spacing typical of that required by the FAA FAR Part 36 noise certification procedure) compare well with the fifteen-microphone array. All data were obtained from a database resulting from a joint project conducted by NASA and U.S. Army researchers at Langley and Ames Research Centers. A brief description of the joint project test design, microphone array set-up, and data reduction methodology associated with the database is given.
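For context, the quantity being contoured is the sound exposure level, which integrates the squared pressure over an event and normalizes it to one second and 20 μPa. The snippet below is a minimal sketch of that textbook definition applied to a measured pressure time history; it is not the ART/ROTONET re-propagation processing itself.

```python
# Hedged sketch of the textbook SEL definition: squared pressure integrated
# over the event, normalized to 1 s and the 20 uPa reference pressure.
import numpy as np

P_REF = 20e-6   # reference pressure, Pa

def sound_exposure_level(pressure, fs):
    """SEL in dB for a pressure time history (Pa) sampled at fs Hz."""
    exposure = np.sum(pressure ** 2) / fs          # integral of p^2 dt, Pa^2*s
    return 10 * np.log10(exposure / (P_REF ** 2 * 1.0))
```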
Morgenstern, Hai; Rafaely, Boaz
2018-02-01
Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses with an important objective being the estimation of the direction-of-arrival (DOA) of direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.
A directional microphone array for acoustic studies of wind tunnel models
NASA Technical Reports Server (NTRS)
Soderman, P. T.; Noble, S. C.
1974-01-01
An end-fire microphone array that utilizes a digital time delay system has been designed and evaluated for measuring noise in wind tunnels. The directional response of both a four- and eight-element linear array of microphones has enabled substantial rejection of background noise and reverberations in the NASA Ames 40- by 80-foot wind tunnel. In addition, it is estimated that four- and eight-element arrays reject 6 and 9 dB, respectively, of microphone wind noise, as compared with a conventional omnidirectional microphone with nose cone. Array response to two types of jet engine models in the wind tunnel is presented. Comparisons of array response to loudspeakers in the wind tunnel and in free field are made.
Spatial acoustic signal processing for immersive communication
NASA Astrophysics Data System (ADS)
Atkins, Joshua
Computing is rapidly becoming ubiquitous as users expect devices that can augment and interact naturally with the world around them. In these systems it is necessary to have an acoustic front-end that is able to capture and reproduce natural human communication. Whether the end point is a speech recognizer or another human listener, reducing noise, reverberation, and acoustic echoes are all necessary and complex challenges. The focus of this dissertation is to provide a general method for approaching these problems using spherical microphone and loudspeaker arrays. In this work, a theory of capturing and reproducing three-dimensional acoustic fields is introduced from a signal processing perspective. In particular, the decomposition of the spatial part of the acoustic field into an orthogonal basis of spherical harmonics provides not only a general framework for analysis, but also many processing advantages. The spatial sampling error limits the upper frequency range with which a sound field can be accurately captured or reproduced. In broadband arrays, the cost and complexity of using multiple transducers are an issue. This work provides a flexible optimization method for determining the location of array elements to minimize the spatial aliasing error. The low-frequency array processing ability is also limited by the SNR, mismatch, and placement error of transducers. To address this, a robust processing method is introduced and used to design a reproduction system for rendering over arbitrary loudspeaker arrays or binaurally over headphones. In addition to the beamforming problem, the multichannel acoustic echo cancellation (MCAEC) issue is also addressed. An MCAEC must adaptively estimate and track the constantly changing loudspeaker-room-microphone response to remove the sound field presented over the loudspeakers from that captured by the microphones. In the multichannel case, the system is overdetermined and many adaptive schemes fail to converge to the true impulse response, forcing the need to track both the near- and far-end room responses. A transform-domain method that mitigates this problem is derived and implemented. Results with a real system using a 16-channel loudspeaker array and 32-channel microphone array are presented.
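The spherical-harmonic decomposition at the core of this framework can be sketched compactly: pressures sampled at the microphone directions are projected onto the harmonics by a least-squares fit. Microphone angles, the chosen order, and the use of scipy.special.sph_harm are illustrative assumptions.

```python
# Hedged sketch of a spherical-harmonic spatial decomposition: the sampling
# matrix Y (one column per harmonic) is fit to the measured pressures.
import numpy as np
from scipy.special import sph_harm

def sh_matrix(order, azimuth, colatitude):
    """Columns are Y_nm evaluated at the microphone directions."""
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            cols.append(sph_harm(m, n, azimuth, colatitude))
    return np.stack(cols, axis=1)

def sh_coefficients(pressures, azimuth, colatitude, order):
    """Least-squares decomposition p ~ Y @ p_nm for one frequency bin."""
    Y = sh_matrix(order, azimuth, colatitude)
    p_nm, *_ = np.linalg.lstsq(Y, pressures, rcond=None)
    return p_nm
```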
Experimental investigation into infrasonic emissions from atmospheric turbulence.
Shams, Qamar A; Zuckerwar, Allan J; Burkett, Cecil G; Weistroffer, George R; Hugo, Derek R
2013-03-01
Clear air turbulence (CAT) is the leading cause of in-flight injuries and in severe cases can result in fatalities. The purpose of this work is to design and develop an infrasonic array network for early warning of clear air turbulence. The infrasonic system consists of a three-microphone infrasonic array, compact windscreens, and a data management system. Past experimental efforts to detect acoustic emissions from CAT have been limited. An array of three infrasonic microphones, operating in the field at NASA Langley Research Center, on several occasions received signals interpreted as infrasonic emissions from CAT. Following comparison with current lidar and other past methods, the principle of operation, the experimental methods, and experimental data are presented for case studies and confirmed by pilot reports. The power spectral density of the received signals was found to fit a power law with an exponent of -6 to -7, which is found to be characteristic of infrasonic emissions from CAT, in contrast to past findings.
Acoustic source localization in mixed field using spherical microphone arrays
NASA Astrophysics Data System (ADS)
Huang, Qinghua; Wang, Tong
2014-12-01
Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrence relation of spherical harmonics is independent of the far-field and near-field mode strengths; it is therefore used to develop a spherical estimation-of-signal-parameters-via-rotational-invariance-techniques (ESPRIT)-like approach to estimate the directions of arrival (DOAs) of both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance in localizing far-field, near-field, and mixed-field sources.
Hydrogel microphones for stealthy underwater listening
Gao, Yang; Song, Jingfeng; Li, Shumin; Elowsky, Christian; Zhou, You; Ducharme, Stephen; Chen, Yong Mei; Zhou, Qin; Tan, Li
2016-01-01
Exploring the abundant resources in the ocean requires underwater acoustic detectors with a high-sensitivity reception of low-frequency sound from greater distances and zero reflections. Here we address both challenges by integrating an easily deformable network of metal nanoparticles in a hydrogel matrix for use as a cavity-free microphone. Since metal nanoparticles can be densely implanted as inclusions, and can even be arranged in coherent arrays, this microphone can detect static loads and air breezes from different angles, as well as underwater acoustic signals from 20 Hz to 3 kHz at amplitudes as low as 4 Pa. Unlike dielectric capacitors or cavity-based microphones that respond to stimuli by deforming the device in thickness directions, this hydrogel device responds with a transient modulation of electric double layers, resulting in an extraordinary sensitivity (217 nF kPa−1 or 24 μC N−1 at a bias of 1.0 V) without using any signal amplification tools. PMID:27554792
Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.
Gauthier, P-A; Lecomte, P; Berry, A
2017-04-01
Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
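A simplified, real-valued sketch of the comparison follows: loudspeaker gains are fit to a target pressure vector through an assumed plant matrix using scikit-learn's Lasso and ElasticNet estimators. The true problem is complex-valued and frequency-dependent, so this is only meant to show how the two penalties are swapped; all matrices and regularization weights are stand-ins.

```python
# Hedged sketch of sparsity control in sound field reproduction: choose
# loudspeaker gains q so that G @ q approximates a target pressure vector p,
# comparing the lasso (L1) and elastic-net (L1 + L2) penalties.
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n_mics, n_sources = 64, 128
G = rng.standard_normal((n_mics, n_sources))        # assumed plant matrix
q_true = np.zeros(n_sources)
q_true[[10, 11, 70]] = [1.0, 0.8, -0.5]             # a few active sources
p_target = G @ q_true

q_lasso = Lasso(alpha=0.05, max_iter=10000).fit(G, p_target).coef_
q_enet = ElasticNet(alpha=0.05, l1_ratio=0.7, max_iter=10000).fit(G, p_target).coef_
# The elastic-net keeps sparsity but tends to activate correlated sources as
# clusters instead of picking one of them arbitrarily, as the abstract notes.
```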
Measurement of Trailing Edge Noise Using Directional Array and Coherent Output Power Methods
NASA Technical Reports Server (NTRS)
Hutcheson, Florence V.; Brooks, Thomas F.
2002-01-01
The use of a directional (or phased) array of microphones for the measurement of trailing edge (TE) noise is described and tested. The capabilities of this method are evaluated via measurements of TE noise from a NACA 63-215 airfoil model and from a cylindrical rod. This TE noise measurement approach is compared to one that is based on the cross-spectral analysis of the output signals from a pair of microphones placed on opposite sides of an airframe model (COP method). Advantages and limitations of both methods are examined. It is shown that the microphone array can accurately measure TE noise and capture its two-dimensional characteristic over a large frequency range for any TE configuration, as long as noise contamination from extraneous sources is within bounds. The COP method is shown to also accurately measure TE noise, but over a more limited frequency range that narrows as TE thickness increases. Finally, the applicability and generality of an airfoil self-noise prediction method were evaluated via comparison with the experimental data obtained using the COP and array measurement methods. The predicted and experimental results agree over large frequency ranges.
Optimization of Microphone Locations for Acoustic Liner Impedance Eduction
NASA Technical Reports Server (NTRS)
Jones, M. G.; Watson, W. R.; June, J. C.
2015-01-01
Two impedance eduction methods are explored for use with data acquired in the NASA Langley Grazing Flow Impedance Tube. The first is an indirect method based on the convected Helmholtz equation, and the second is a direct method based on the Kumaresan and Tufts algorithm. Synthesized no-flow data, with random jitter to represent measurement error, are used to evaluate a number of possible microphone locations. Statistical approaches are used to evaluate the suitability of each set of microphone locations. Given the computational resources required, small sample statistics are employed for the indirect method. Since the direct method is much less computationally intensive, a Monte Carlo approach is employed to gather its statistics. A comparison of results achieved with full and reduced sets of microphone locations is used to determine which sets of microphone locations are acceptable. For the indirect method, each array that includes microphones in all three regions (upstream and downstream hard wall sections, and liner test section) provides acceptable results, even when as few as eight microphones are employed. The best arrays employ microphones well away from the leading and trailing edges of the liner. The direct method is constrained to use microphones opposite the liner. Although a number of arrays are acceptable, the optimum set employs 14 microphones positioned well away from the leading and trailing edges of the liner. The selected sets of microphone locations are also evaluated with data measured for ceramic tubular and perforate-over-honeycomb liners at three flow conditions (Mach 0.0, 0.3, and 0.5). They compare favorably with results attained using all 53 microphone locations. Although different optimum microphone locations are selected for the two impedance eduction methods, there is significant overlap. Thus, the union of these two microphone arrays is preferred, as it supports usage of both methods. This array contains 3 microphones in the upstream hard wall section, 14 microphones opposite the liner, and 3 microphones in the downstream hard wall section.
Robust speaker's location detection in a vehicle environment using GMM models.
Hu, Jwu-Sheng; Cheng, Chieh-Cheng; Liu, Wei-Han
2006-04-01
Human-computer interaction (HCI) using speech communication is becoming increasingly important, especially in driving, where safety is the primary concern. Knowing the speaker's location (i.e., speaker localization) not only improves the enhancement of a corrupted signal but also assists speaker identification. Since conventional speech localization algorithms suffer from the uncertainties of environmental complexity and noise, as well as from the microphone mismatch problem, they are frequently not robust in practice; without high reliability, speech-based HCI will never gain acceptance. This work presents a novel speaker location detection method and demonstrates high accuracy within a vehicle cabin using a single linear microphone array. The proposed approach utilizes Gaussian mixture models (GMMs) to model the distributions of the phase differences among the microphones caused by the complex characteristics of room acoustics and microphone mismatch. The model can be applied in both near-field and far-field situations in a noisy environment. The individual Gaussian components of a GMM represent general phase-difference distributions that are location-dependent but content- and speaker-independent. Moreover, the scheme performs well not only in non-line-of-sight cases, but also when speakers are aligned toward the microphone array at different distances from it. This strong performance is achieved by exploiting the fact that the phase-difference distributions at different locations are distinguishable in the environment of a car. The experimental results also show that the proposed method outperforms the conventional multiple signal classification (MUSIC) technique at various SNRs.
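The detection scheme as described lends itself to a short sketch: one Gaussian mixture model per candidate location is trained on inter-microphone phase-difference features, and a new frame is assigned to the location whose model scores it highest. Feature details (FFT handling, reference microphone, mixture order) are assumptions, not those of the paper.

```python
# Hedged sketch: per-location GMMs over inter-microphone phase differences,
# with classification by maximum average log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def phase_difference_features(frames):
    """frames: (n_frames, n_mics, n_samples) time-domain snapshots."""
    spectra = np.fft.rfft(frames, axis=-1)
    ref = spectra[:, :1, :]                                   # first microphone
    phase_diff = np.angle(spectra[:, 1:, :] * np.conj(ref))
    return phase_diff.reshape(len(frames), -1)

def train_location_models(training_data, n_components=8):
    """training_data: dict location -> (n_frames, n_mics, n_samples) array."""
    return {loc: GaussianMixture(n_components).fit(phase_difference_features(x))
            for loc, x in training_data.items()}

def detect_location(models, frame):
    feats = phase_difference_features(frame[np.newaxis])
    return max(models, key=lambda loc: models[loc].score(feats))
```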
Ground effects on aircraft noise. [near grazing incidence
NASA Technical Reports Server (NTRS)
Willshire, W. L., Jr.; Hilton, D. A.
1979-01-01
A flight experiment was conducted to investigate air-to-ground propagation of sound near grazing incidence. A turbojet-powered aircraft was flown at low altitudes over the ends of two microphone arrays. An eight-microphone array was positioned along a 1850 m concrete runway; the second array consisted of 12 microphones positioned parallel to the runway over grass. Twenty-eight flights were flown at altitudes ranging from 10 m to 160 m. The acoustic data recorded in the field were reduced to one-third-octave band spectra and time-correlated with the flight and weather information. A small portion of the data was further reduced to values of ground attenuation as a function of frequency and incidence angle by two different methods; in both methods, the acoustic signals compared originated from identical sources. Attenuation results obtained using the two methods were in general agreement. The measured ground attenuation was largest in the frequency range of 200 to 400 Hz. A strong dependence was found between ground attenuation and incidence angle, with little attenuation measured for angles of incidence greater than 10 to 15 degrees.
Localization of sound sources in a room with one microphone
NASA Astrophysics Data System (ADS)
Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre
2017-08-01
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment in which we know the differences between the received signals at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. Design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels of the room that are occupied by a source. What is especially interesting about our solution is that it localizes sound sources not only in the horizontal plane, but in terms of their 3D coordinates inside the room.
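A rough sketch of the sparse-recovery step is given below, with a random matrix standing in for the dictionary of simulated room transfer functions (one column per candidate voxel) and scikit-learn's orthogonal matching pursuit standing in for the sparse solver. Building the true sensing matrix from the known room geometry is the part the paper actually addresses and is not reproduced here.

```python
# Hedged sketch: recover the occupied voxel from a single-microphone
# measurement by sparse regression against a voxel dictionary (stand-in data).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n_freqs, n_voxels = 400, 1000
A = rng.standard_normal((n_freqs, n_voxels))    # stand-in sensing matrix
x_true = np.zeros(n_voxels)
x_true[237] = 1.0                               # one occupied voxel
y = A @ x_true + 0.01 * rng.standard_normal(n_freqs)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=1).fit(A, y)
estimated_voxel = int(np.argmax(np.abs(omp.coef_)))
```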
Real time aircraft fly-over noise discrimination
NASA Astrophysics Data System (ADS)
Genescà, M.; Romeu, J.; Pàmies, T.; Sánchez, A.
2009-06-01
A method for measuring aircraft noise time histories with automatic elimination of simultaneous urban noise is presented in this paper. A 3 m-long, 12-microphone sparse array has been shown to give good performance in a wide range of urban placements. At present, urban placements have to be avoided because their background noise has a great influence on measurements made by sound level meters or single microphones. Because of the small device size and low number of microphones (which make it easy to set up), the resolution of the device is not high enough to provide a clean aircraft noise time history by only applying frequency-domain beamforming to the spatial cross-correlations of the microphones' signals. Therefore, a new step has been added to the processing algorithm to overcome this limitation.
Application of a New Infrasound Sensor Technology in a Long Range Infrasound Propagation Experiment
NASA Astrophysics Data System (ADS)
Talmadge, C. L.; Waxler, R.; Hetzer, C. H.; Kleniert, D. E., Jr.; Dillion, K.; Assink, J.; Aydin, A.
2009-12-01
A low-cost ruggedized infrasound sensor has been developed at the NCPA laboratory of the University of Mississippi for outdoor infrasound measurements. This sensor has performance characteristics similar to other "standard" infrasound sensors, such as the Chaparral 50. A total of 50 sensors were constructed for this experiment, of which 42 were deployed in the Nevada and Utah desert for a period of four months. A long-range infrasound propagation experiment using these sensors was performed during the summer and fall of 2009. Sources were 4, 20, and 80 equivalent tons of TNT. The blasts were typically carried out in the afternoon on the Monday of each week and were part of a scheduled demolition of first, second, and third stages of Trident missiles. In addition to a source-capture location 23 km south of the blast site, a series of eight 5-element arrays was located to the west of the blast location, at approximate ranges of 180 through 250 km in 10-km steps. Each array consisted of elements at -150 m, -50 m, 0 m, 50 m, and 150 m relative to the center of the array along an east-west direction, and all microphones were equipped with four 50-ft porous hoses connected to the microphone manifold for wind noise suppression. The signals from the microphones were digitized using GPS-synchronized, 24-bit DAQ systems. A westerly direction for the deployment of the microphones was motivated by the presence of a strong stratospheric duct that persists through the summer months in the northern hemisphere at these latitudes. In this paper, we discuss feasibility issues related to the design of the NCPA microphone that make deployments on these large scales possible. Signal-to-noise issues related to temperature and wind fluctuations are also discussed. Future plans include a larger-scale deployment of several hundred microphones during 2010; we discuss how the lessons learned from this series of measurements impact that future deployment.
A combined microphone and camera calibration technique with application to acoustic imaging.
Legg, Mathew; Bradley, Stuart
2013-10-01
We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.
Studying Room Acoustics using a Monopole-Dipole Microphone Array
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Abel, Jonathan S.; Gills, Stephen R. (Technical Monitor)
1997-01-01
The use of a soundfield microphone for examining the directional nature of a room impulse response was recently reported. By cross-correlating monopole and co-located dipole microphone signals aligned with left-right, up-down, and front-back axes, a sense of the signal's direction of arrival is revealed. The current study is concerned with the array's ability to detect individual reflections and their directions of arrival as a function of the cross-correlation window duration. If the window is too long, weak reflections are overlooked; if too short, spurious detections result. Guidelines are presented for setting the window width according to perceptual criteria. Formulas are presented describing the accuracy with which direction of arrival can be estimated as a function of room specifics and measurement noise. The direction of arrival of early reflections is more accurately determined than that of later reflections, which are quieter and more numerous. The transition from a fairly directional sound field at the beginning of the room impulse response to a uni-directional diffuse field is examined. Finally, it is shown that measurements from additional dipole orientations can significantly improve the ability to detect reflections and estimate their directions of arrival.
Oreinos, Chris; Buchholz, Jörg M
2015-06-01
Recently, an increased interest has been demonstrated in evaluating hearing aids (HAs) inside controlled, but at the same time, realistic sound environments. A promising candidate that employs loudspeakers for realizing such sound environments is the listener-centered method of higher-order ambisonics (HOA). Although the accuracy of HOA has been widely studied, it remains unclear to what extent the results can be generalized when (1) a listener wearing HAs that may feature multi-microphone directional algorithms is considered inside the reconstructed sound field and (2) reverberant scenes are recorded and reconstructed. For the purpose of objectively validating HOA for listening tests involving HAs, a framework was developed to simulate the entire path of sounds presented in a modeled room, recorded by a HOA microphone array, decoded to a loudspeaker array, and finally received at the ears and HA microphones of a dummy listener fitted with HAs. Reproduction errors at the ear signals and at the output of a cardioid HA microphone were analyzed for different anechoic and reverberant scenes. It was found that the diffuse reverberation reduces the considered time-averaged HOA reconstruction errors which, depending on the considered application, suggests that reverberation can increase the usable frequency range of a HOA system.
Keidser, Gitte; Rohrseitz, Kristin; Dillon, Harvey; Hamacher, Volkmar; Carter, Lyndal; Rass, Uwe; Convery, Elizabeth
2006-10-01
This study examined the effect that signal processing strategies used in modern hearing aids, such as multi-channel WDRC, noise reduction, and directional microphones, have on interaural difference cues and horizontal localization performance relative to linear, time-invariant amplification. Twelve participants were bilaterally fitted with BTE devices. Horizontal localization testing using a 360-degree loudspeaker array and broadband pulsed pink noise was performed two weeks and two months post-fitting. The effect of noise reduction was measured with a constant noise present at 80 degrees azimuth. Data were analysed independently in the left/right and front/back dimensions and showed that, of the three signal processing strategies, directional microphones had the most significant effect on horizontal localization performance, and this effect changed over time. Specifically, a cardioid microphone could decrease front/back errors over time, whereas left/right errors increased when different microphones were fitted to the left and right ears. Front/back confusions were generally prominent. Objective measurements of interaural differences on KEMAR explained significant shifts in left/right errors. In conclusion, there is scope for improving the sense of localization in hearing aid users.
Passive wake acoustics measurements at Denver International Airport
DOT National Transportation Integrated Search
2004-04-26
From August to September 2003, NASA conducted an extensive measurement campaign to characterize the acoustic signal of wake vortices. A large, both spatially as well as in number of elements, phased microphone array was deployed at Denver Internation...
Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.
2017-01-01
In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790
Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G
2017-11-03
In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.
Wake Vortex Avoidance System and Method
NASA Technical Reports Server (NTRS)
Shams, Qamar A. (Inventor); Zuckerwar, Allan J. (Inventor); Knight, Howard K. (Inventor)
2017-01-01
A wake vortex avoidance system includes a microphone array configured to detect low frequency sounds. A signal processor determines a geometric mean coherence based on the detected low frequency sounds. A display displays wake vortices based on the determined geometric mean coherence.
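The patent abstract does not specify the processing details, but a geometric mean of pairwise magnitude-squared coherences is a natural reading of "geometric mean coherence"; the hedged sketch below computes that quantity over a low-frequency band with scipy. Band limits and window length are assumptions.

```python
# Hedged sketch of a geometric-mean-coherence detector: coherence is computed
# for every microphone pair and combined as a geometric mean, which stays high
# only when a common low-frequency signal is present on all microphones.
import numpy as np
from itertools import combinations
from scipy.signal import coherence

def geometric_mean_coherence(signals, fs, fmax=100.0, nperseg=4096):
    """signals: (n_mics, n_samples) array; returns (freqs, gmc) up to fmax."""
    logs = []
    for i, j in combinations(range(len(signals)), 2):
        f, cxy = coherence(signals[i], signals[j], fs=fs, nperseg=nperseg)
        logs.append(np.log(np.maximum(cxy, 1e-12)))
    gmc = np.exp(np.mean(logs, axis=0))
    band = f <= fmax
    return f[band], gmc[band]
```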
Measurement of Model Noise in a Hard-Wall Wind Tunnel
NASA Technical Reports Server (NTRS)
Soderman, Paul T.
2006-01-01
Identification, analysis, and control of fluid-mechanically-generated sound from models of aircraft and automobiles in special low-noise, semi-anechoic wind tunnels are an important research endeavor. Such studies can also be done in aerodynamic wind tunnels that have hard walls if phased microphone arrays are used to focus on the noise-source regions and reject unwanted reflections or background noise. Although it may be difficult to simulate the total flyover or drive-by noise in a closed wind tunnel, individual noise sources can be isolated and analyzed. An acoustic and aerodynamic study was made of a 7-percent-scale aircraft model in a NASA Ames 7-by-10-ft (about 2-by-3-m) wind tunnel for the purpose of identifying and attenuating airframe noise sources. Simulated landing, takeoff, and approach configurations were evaluated at Mach 0.26. Using a phased microphone array mounted in the ceiling over the inverted model, various noise sources in the high-lift system, landing gear, fins, and miscellaneous other components were located and compared for sound level and frequency at one flyover location. Numerous noise-alleviation devices and modifications of the model were evaluated. Simultaneously with acoustic measurements, aerodynamic forces were recorded to document aircraft conditions and any performance changes caused by geometric modifications. Most modern microphone-array systems function in the frequency domain in the sense that spectra of the microphone outputs are computed, then operations are performed on the matrices of microphone-signal cross-spectra. The entire acoustic field at one station in such a system is acquired quickly and interrogated during postprocessing. Beam-forming algorithms are employed to scan a plane near the model surface and locate noise sources while rejecting most background noise and spurious reflections. In the case of the system used in this study, previous studies in the wind tunnel have identified noise sources up to 19 dB below the normal background noise of the wind tunnel. Theoretical predictions of array performance are used to minimize the width and the side lobes of the beam pattern of the microphone array for a given test arrangement. To capture flyover noise of the inverted model, a 104-element microphone array in a 622-mm-diameter cluster was installed in a 19-mm-thick poly(methyl methacrylate) plate in the ceiling of the test section of the wind tunnel above the aircraft model (see Figure 1). The microphones were of the condenser type, and their diaphragms were mounted flush in the array plate, which was recessed 12.7 mm into the ceiling and covered by a porous aromatic polyamide cloth (not shown in the figure) to minimize boundary-layer noise. This design caused the level of flow noise to be much less than that of flush-mount designs. The drawback of this design was that the cloth attenuated sound somewhat and created acoustic resonances that could grow to several dB at a frequency of 10 kHz.
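The frequency-domain array processing summarized above (cross-spectral matrix plus beamforming over a scan plane) can be sketched as follows. Geometry, the single analysis frequency, and the simple diagonal removal are illustrative assumptions rather than the facility's actual algorithm.

```python
# Hedged sketch of conventional frequency-domain beamforming: build the
# cross-spectral matrix (CSM) from FFT snapshots, then sweep a scan grid with
# spherical-wave steering vectors to map source power.
import numpy as np

C = 343.0

def csm(spectra):
    """spectra: (n_snapshots, n_mics) complex FFT bins at one frequency."""
    return spectra.conj().T @ spectra / len(spectra)

def steering_vector(mic_xyz, grid_point, freq):
    r = np.linalg.norm(mic_xyz - grid_point, axis=1)
    v = np.exp(-2j * np.pi * freq * r / C) / r      # spherical spreading
    return v / np.linalg.norm(v)

def beamform_map(spectra, mic_xyz, grid, freq):
    G = csm(spectra)
    np.fill_diagonal(G, 0.0)                        # crude self-noise rejection
    power = []
    for g in grid:
        w = steering_vector(mic_xyz, g, freq)
        power.append(np.real(w.conj() @ G @ w))
    return np.array(power)
```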
Design and Use of Microphone Directional Arrays for Aeroacoustic Measurements
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Brooks, Thomas F.; Hunter, William W., Jr.; Meadows, Kristine R.
1998-01-01
An overview of the development of two microphone directional arrays for aeroacoustic testing is presented. These arrays were specifically developed to measure airframe noise in the NASA Langley Quiet Flow Facility. A large aperture directional array using 35 flush-mounted microphones was constructed to obtain high resolution noise localization maps around airframe models. This array possesses a maximum diagonal aperture size of 34 inches. A unique logarithmic spiral layout design was chosen for the targeted frequency range of 2-30 kHz. Complementing the large array is a small aperture directional array, constructed to obtain spectra and directivity information from regions on the model. This array, possessing 33 microphones with a maximum diagonal aperture size of 7.76 inches, is easily moved about the model in elevation and azimuth. Custom microphone shading algorithms have been developed to provide a frequency- and position-invariant sensing area from 10-40 kHz with an overall targeted frequency range for the array of 5-60 kHz. Both arrays are employed in acoustic measurements of a 6-percent-scale airframe model consisting of a main element NACA 632-215 wing section with a 30 percent chord half-span flap. Representative data obtained from these measurements are presented, along with details of the array calibration and data post-processing procedures.
Micromachined microphone array on a chip for turbulent boundary layer measurements
NASA Astrophysics Data System (ADS)
Krause, Joshua Steven
A surface-micromachined microphone array on a single chip has been successfully designed, fabricated, characterized, and tested for aeroacoustic purposes. The microphone was designed with venting through the diaphragm, 64 elements (8 x 8) on the chip, and a capacitive transduction scheme. The microphone was fabricated using the MEMSCAP PolyMUMPs process (a foundry polysilicon surface micromachining process) along with facilities at the Tufts Micro and Nano Fabrication Facility (TMNF), where a Parylene-C passivation layer deposition and release of the microstructures were performed. The devices are packaged with low-profile interconnects, presenting a maximum of 100 μm of surface topology. The design of an individual microphone was completed through the use of a lumped element model (LEM) to determine the theoretical performance of the microphone. Off-chip electronics were created to allow the microphone array outputs to be redirected to one of two channels, allowing dynamic reconfiguration of the effective transducer shape in software, and to provide 80 dB of off-isolation. Characterization was completed through the use of laser Doppler vibrometry (LDV), acoustic plane wave tube and free-field calibration, and electrical noise floor testing in a Faraday cage. Measured microphone sensitivity is 0.15 mV/Pa for an individual microphone and 8.7 mV/Pa for the entire array, in close agreement with model predictions. The microphones and electronics operate over the 200 Hz to 40,000 Hz band. The dynamic range extends from 60 dB SPL in a 1 Hz band to greater than 150 dB SPL. Element variability was ±0.05 mV/Pa in sensitivity with an array yield of 95%. Wind tunnel testing at flow speeds of up to 205.8 m/s indicates that the devices continue to operate in flow without damage and can be successfully reconfigured on the fly. Care was taken to systematically remove contaminating signals (acoustic, vibration, and noise floor) from the wind tunnel data to determine the actual turbulent pressure fluctuations beneath the turbulent boundary layer to an uncertainty level of 1 dB. Analysis of measured boundary layer pressure spectra at six flow speeds from 34.3 m/s to 205.8 m/s indicates single-point wall spectral measurements in close agreement with the empirical models of Goody, Chase-Howe, and Efimtsov above Mach 0.4. The MEMS data more closely resemble the magnitude of the Efimtsov model at higher frequencies (25% higher above 3 kHz for the Mach 0.6 case); however, the shape of the spectrum is closer to the model of Goody (50% lower for the Mach 0.6 case at all frequencies). The Chase-Howe model falls directly on the MEMS data starting at 6 kHz, but has a sharper slope and does not resemble the data below 6 kHz.
NASA Technical Reports Server (NTRS)
Sheplak, Mark (Inventor); Nishida, Toshikaza (Inventor); Humphreys, William M. (Inventor); Arnold, David P. (Inventor)
2006-01-01
Embodiments of the present invention described and shown in the specification and drawings include a combination responsive to an acoustic wave that can be utilized as a dynamic pressure sensor. In one embodiment of the present invention, the combination has a substrate having a first surface and an opposite second surface, and a microphone positioned on the first surface of the substrate and having an input, a first output, and a second output, wherein the input receives a biased voltage and the microphone generates an output signal responsive to the acoustic wave between the first output and the second output. The combination further has an amplifier positioned on the first surface of the substrate and having a first input, a second input, and an output, wherein the first input of the amplifier is electrically coupled to the first output of the microphone and the second input of the amplifier is electrically coupled to the second output of the microphone for receiving the output signal from the microphone. The amplifier is spaced from the microphone with a separation smaller than 0.5 mm.
Phase Calibration of Microphones by Measurement in the Free-field
NASA Technical Reports Server (NTRS)
Shams, Qamar A.; Bartram, Scott M.; Humphreys, William M.; Zuckewar, Allan J.
2006-01-01
Over the past several years, significant effort has been expended at NASA Langley developing new Micro-Electro-Mechanical System (MEMS)-based microphone directional array instrumentation for high-frequency aeroacoustic measurements in wind tunnels. This new type of array construction solves two challenges which have limited the widespread use of large channel-count arrays, namely by providing a lower cost per channel and a simpler method for mounting microphones in wind tunnels and in field-deployable arrays. The current generation of array instrumentation is capable of extracting accurate noise source location and directivity on a variety of airframe components using sophisticated data reduction algorithms [1-2]. Commercially available MEMS microphones are condenser-type devices and have some desirable characteristics when compared with conventional condenser microphones. The most important advantages of MEMS microphones are their size, price, and power consumption. However, the commercially available units suffer from certain important shortcomings. Based on experiments with array prototypes, it was found that both the bandwidth and the sound pressure limit of the microphones should be increased significantly to improve the performance and flexibility of the microphone array [3]. It was also desired to modify the packaging to eliminate unwanted Helmholtz resonances exhibited by the commercial devices. Thus, new requirements were defined: a frequency response of 100 Hz to 100 kHz (+/-3 dB); an upper sound pressure limit of 130 dB SPL for Design 1 and 150-160 dB SPL for Design 2 (THD less than 5% in both cases); and packaging in a 3.73 x 6.13 x 1.3 mm can with a laser-etched lid. In collaboration with Novusonic Acoustic Innovation, NASA modified a Knowles SiSonic MEMS design to meet these new requirements. Coupled with the design of the enhanced MEMS microphones was the development of a new calibration method for simultaneously obtaining the sensitivity and phase response of the devices over their entire broadband frequency range. Traditionally, electrostatic actuators (EA) have been used to characterize air-condenser microphones; however, MEMS microphones are not adaptable to the EA method due to their construction and very small diaphragm size [4]. Hence a substitution-based, free-field method was developed to calibrate these microphones at frequencies up to 80 kHz. The technique relied on the use of a random, ultrasonic, broadband centrifugal sound source located in a small anechoic chamber. The free-field sensitivity (voltage per unit sound pressure) was obtained using the procedure outlined in reference 4. Phase calibrations of the MEMS microphones were derived from cross-spectral phase comparisons between the reference and test substitution microphones and an adjacent, invariant, grazing-incidence 1/8-inch standard microphone. The free-field calibration procedure, along with representative sensitivity and phase responses for the new high-frequency MEMS microphones, is presented here.
Blind source separation and localization using microphone arrays
NASA Astrophysics Data System (ADS)
Sun, Longji
The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals, typically encountered in outdoor environments, are considered. Our proposed approach utilizes subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at the frequencies with the largest sums of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short-time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While subspace methods have been studied for localizing radio frequency signals, audio signals have their own special properties: they are nonstationary, naturally broadband, and analog. All of these make the separation and localization of audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and recovers only the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions are discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
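As an illustration of the narrowband subspace step on which the approach builds, the sketch below evaluates a MUSIC pseudo-spectrum for a uniform linear array at one frequency bin; the broadband algorithm described above repeats this per bin and fuses the estimates. Array spacing, source count, and scan grid are assumptions.

```python
# Hedged sketch of narrowband MUSIC for a uniform linear array: project
# candidate steering vectors onto the noise subspace of the spatial covariance.
import numpy as np

def music_spectrum(snapshots, n_sources, spacing, freq, c=343.0, n_angles=361):
    """snapshots: (n_snapshots, n_mics) complex bins at one frequency."""
    n_mics = snapshots.shape[1]
    R = snapshots.conj().T @ snapshots / len(snapshots)      # spatial covariance
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, : n_mics - n_sources]                     # noise subspace
    angles = np.linspace(-90, 90, n_angles)
    k = 2 * np.pi * freq / c
    m = np.arange(n_mics)
    spectrum = []
    for a in angles:
        sv = np.exp(1j * k * spacing * m * np.sin(np.radians(a)))
        denom = np.linalg.norm(En.conj().T @ sv) ** 2
        spectrum.append(1.0 / denom)
    return angles, np.array(spectrum)
```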
Methods for Room Acoustic Analysis and Synthesis using a Monopole-Dipole Microphone Array
NASA Technical Reports Server (NTRS)
Abel, J. S.; Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1998-01-01
In recent work, a microphone array consisting of an omnidirectional microphone and colocated dipole microphones having orthogonally aligned dipole axes was used to examine the directional nature of a room impulse response. The arrival of significant reflections was indicated by peaks in the power of the omnidirectional microphone response; reflection direction of arrival was revealed by comparing zero-lag crosscorrelations between the omnidirectional response and the dipole responses to the omnidirectional response power to estimate arrival direction cosines with respect to the dipole axes.
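A minimal sketch of that direction-cosine estimate follows: within a short window around a detected reflection, each dipole signal is cross-correlated at zero lag with the omnidirectional signal and normalized by the omnidirectional power. Matched, calibrated channel gains are assumed.

```python
# Hedged sketch: direction cosines from zero-lag cross-correlations of the
# three dipole signals with the omnidirectional signal, assuming a plane-wave
# reflection and matched channel gains.
import numpy as np

def direction_cosines(omni, dipole_xyz):
    """omni: (n,) window; dipole_xyz: (3, n) co-located dipole windows."""
    p_power = np.dot(omni, omni)
    cosines = dipole_xyz @ omni / p_power
    u = cosines / np.linalg.norm(cosines)      # unit direction-of-arrival vector
    return u
```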
Calibration of High Frequency MEMS Microphones
NASA Technical Reports Server (NTRS)
Shams, Qamar A.; Humphreys, William M.; Bartram, Scott M.; Zuckewar, Allan J.
2007-01-01
Understanding and controlling aircraft noise is one of the major research topics of the NASA Fundamental Aeronautics Program. One of the measurement technologies used to acquire noise data is the microphone directional array (DA). Traditional directional array hardware, consisting of commercially available condenser microphones and preamplifiers, can be too expensive, and its installation in hard-walled wind tunnel test sections too complicated. An emerging micro-machining technology, coupled with the latest cutting-edge technologies for smaller and faster systems, has opened the way for the development of MEMS microphones. MEMS microphone devices are available on the market but suffer from certain important shortcomings. Based on early experiments with array prototypes, it was found that both the bandwidth and the sound pressure level dynamic range of the microphones should be increased significantly to improve the performance and flexibility of the overall array. Thus, in collaboration with an outside MEMS design vendor, NASA Langley modified a commercially available MEMS microphone, as shown in Figure 1, to meet the new requirements. Coupled with the design of the enhanced MEMS microphones was the development of a new calibration method for simultaneously obtaining the sensitivity and phase response of the devices over their entire broadband frequency range. Over the years, several methods have been used for microphone calibration; common methods include coupler calibration (reciprocity, substitution, and simultaneous), pistonphone, electrostatic actuator, and free-field calibration (reciprocity, substitution, and simultaneous). Traditionally, electrostatic actuators (EA) have been used to characterize air-condenser microphones over wideband frequency ranges; however, MEMS microphones are not adaptable to the EA method due to their construction and very small diaphragm size. Hence a substitution-based, free-field method was developed to calibrate these microphones at frequencies up to 80 kHz. The technique relied on the use of a random, ultrasonic, broadband centrifugal sound source located in a small anechoic chamber. Phase calibrations of the MEMS microphones were derived from cross-spectral phase comparisons between the reference and test substitution microphones and an adjacent, invariant, grazing-incidence 1/8-inch standard microphone.
Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images
Izquierdo, Alberto; Suárez, Luis; Suárez, David
2017-01-01
Using arrays of digital MEMS (Micro-Electro-Mechanical System) microphones and FPGA-based (Field Programmable Gate Array) acquisition/processing systems allows systems with hundreds of sensors to be built at a reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. The virtual array is implemented by moving a physical array of 64 (8 × 8) microphones over a grid of 10 × 10 positions using a 2D positioning system, giving the virtual array a spatial aperture of 1 m × 1 m. Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array have been analyzed; because the array dimensions are large in comparison with the distance between the target (a mannequin) and the array, the beamforming algorithms must assume spherical waves. Finally, acoustic images of the mannequin, obtained for different frequency and range values, show high angular resolution and the possibility of identifying different parts of the mannequin's body. PMID:29295485
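Because the mannequin sits in the near field of the 1 m × 1 m aperture, each focal point needs its own spherical-wave delays rather than a single plane-wave steering angle. The sketch below is a generic time-domain focused delay-and-sum under that assumption; sample rate, geometry, and the simple amplitude compensation are illustrative choices, not the authors' implementation.

```python
# Hedged sketch of near-field (spherical-wave) focusing: per-microphone
# propagation delays and spreading losses are computed for a specific 3-D
# focal point and the time signals are aligned and summed.
import numpy as np

C = 343.0

def focus(signals, mic_xyz, focal_point, fs):
    """Delay-and-sum focused at a 3-D point; signals: (n_mics, n_samples)."""
    r = np.linalg.norm(mic_xyz - focal_point, axis=1)
    delays = (r - r.min()) / C                      # relative propagation delays
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, d, dist in zip(signals, delays, r):
        shift = int(round(d * fs))
        out[: n - shift] += dist * sig[shift:]      # advance and re-spread
    return out / len(signals)
```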
Background Acoustics Levels in the 9x15 Wind Tunnel and Linear Array Testing
NASA Technical Reports Server (NTRS)
Stephens, David
2011-01-01
The background noise level in the 9x15 foot wind tunnel at NASA Glenn has been documented, and the results compare favorably with historical measurements. A study of recessed microphone mounting techniques was also conducted; a recessed cavity with a micronic wire mesh screen was found to reduce hydrodynamic noise by around 10 dB. A three-microphone signal processing technique can provide additional benefit, rejecting up to 15 dB of noise contamination at some frequencies. The screen-and-cavity system offers considerable benefit to test efficiency, although there are additional calibration requirements.
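The three-microphone technique mentioned above is commonly implemented by combining cross-spectra so that uncorrelated noise at each sensor cancels; a minimal sketch of that general idea (not necessarily the exact processing used in the report) follows.

```python
# Hedged sketch of a three-microphone coherence technique: with uncorrelated noise at
# each microphone, the auto-spectrum of the common (acoustic) signal at mic 1 can be
# estimated from cross-spectra as |G12||G13|/|G23|. Variable names are illustrative.
import numpy as np
from scipy.signal import csd

def coherent_signal_spectrum(p1, p2, p3, fs, nperseg=4096):
    f, G12 = csd(p1, p2, fs=fs, nperseg=nperseg)
    _, G13 = csd(p1, p3, fs=fs, nperseg=nperseg)
    _, G23 = csd(p2, p3, fs=fs, nperseg=nperseg)
    S11 = np.abs(G12) * np.abs(G13) / np.maximum(np.abs(G23), 1e-20)
    return f, S11   # estimate of the noise-free signal auto-spectrum at mic 1
```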
NASA Technical Reports Server (NTRS)
Radcliffe, Eliott (Inventor); Naguib, Ahmed (Inventor); Humphreys, Jr., William M. (Inventor)
2014-01-01
A feedback-controlled microphone includes a microphone body and a membrane operatively connected to the body. The membrane is configured to be initially deflected by acoustic pressure such that the initial deflection is characterized by a frequency response. The microphone also includes a sensor configured to detect the frequency response of the initial deflection and generate an output voltage indicative thereof. The microphone additionally includes a compensator in electric communication with the sensor and configured to establish a regulated voltage in response to the output voltage. Furthermore, the microphone includes an actuator in electric communication with the compensator, wherein the actuator is configured to secondarily deflect the membrane in opposition to the initial deflection such that the frequency response is adjusted. An acoustic beam forming microphone array including a plurality of the above feedback-controlled microphones is also disclosed.
NASA Technical Reports Server (NTRS)
Brown, William (Inventor); Yu, Zhenhong (Inventor); Kebabian, Paul L. (Inventor); Assif, James (Inventor)
2017-01-01
In one embodiment, a photoacoustic effect measurement instrument for measuring a species (e.g., a species of PM) in a gas employs a pair of differential acoustic cells including a sample cell that receives sample gas including the species, and a reference cell that receives a filtered version of the sample gas from which the species has been substantially removed. An excitation light source provides an amplitude modulated beam to each of the acoustic cells. An array of multiple microphones is mounted to each of the differential acoustic cells, and measures an acoustic wave generated in the respective acoustic cell by absorption of light by sample gas therein to produce a respective signal. The microphones are isolated from sample gas internal to the acoustic cell by a film. A preamplifier determines a differential signal and a controller calculates concentration of the species based on the differential signal.
Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere
2013-01-01
[...] measurements include assessment of the time delays in the electronic circuits and mechanical hardware (e.g., drivers and microphones) of a tomography array [...] hardware and electronic circuits of the tomography array and errors in synchronization of the transmitted and recorded signals. For example, [...] coordinates can be as large as 30 cm; these errors are equivalent to systematic errors in the travel times of 0.9 ms. [...]
The capture and recreation of 3D auditory scenes
NASA Astrophysics Data System (ADS)
Li, Zhiyun
The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered digitally into any 3D direction with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation, and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the scenes by exploiting the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field with a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
Evaluation of Methods for In-Situ Calibration of Field-Deployable Microphone Phased Arrays
NASA Technical Reports Server (NTRS)
Humphreys, William M.; Lockard, David P.; Khorrami, Mehdi R.; Culliton, William G.; McSwain, Robert G.
2017-01-01
Current field-deployable microphone phased arrays for aeroacoustic flight testing require the placement of hundreds of individual sensors over a large area. Depending on the duration of the test campaign, the microphones may be required to stay deployed at the testing site for weeks or even months. This presents a challenge in regards to tracking the response (i.e., sensitivity) of the individual sensors as a function of time in order to evaluate the health of the array. To address this challenge, two different methods for in-situ tracking of microphone responses are described. The first relies on the use of an aerial sound source attached as a payload on a hovering small Unmanned Aerial System (sUAS) vehicle. The second relies on the use of individually excited ground-based sound sources strategically placed throughout the array pattern. Testing of the two methods was performed in microphone array deployments conducted at Fort A.P. Hill in 2015 and at Edwards Air Force Base in 2016. The results indicate that the drift in individual sensor responses can be tracked reasonably well using both methods. Thus, in-situ response tracking methods are useful as a diagnostic tool for monitoring the health of a phased array during long duration deployments.
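One simple way such response tracking could be summarized is sketched below: band-limited levels from a repeated reference-source run are compared with levels recorded at deployment, and channels whose drift exceeds a tolerance are flagged. The band edges and tolerance are illustrative assumptions, not values from the test campaigns.

```python
# Hedged sketch of in-situ response tracking: compare each microphone's band-limited
# level for a reference-source run against its level at deployment and flag sensors
# whose drift exceeds a tolerance. Thresholds and band edges are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def band_level_db(x, fs, band=(500.0, 5000.0)):
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)
    return 10.0 * np.log10(np.mean(y**2) + 1e-20)

def flag_drift(baseline_runs, current_runs, fs, tol_db=1.5):
    """baseline_runs / current_runs: arrays of shape (n_mics, n_samples)."""
    drift = np.array([band_level_db(c, fs) - band_level_db(b, fs)
                      for b, c in zip(baseline_runs, current_runs)])
    return drift, np.where(np.abs(drift) > tol_db)[0]   # per-mic drift (dB), flagged indices
```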
Wake Vortex Detection: Phased Microphone vs. Linear Infrasonic Array
NASA Technical Reports Server (NTRS)
Shams, Qamar A.; Zuckerwar, Allan J.; Sullivan, Nicholas T.; Knight, Howard K.
2014-01-01
Sensor technologies can make a significant impact on the detection of aircraft-generated vortices in an air space of interest, typically in the approach or departure corridor. Current state-of-the-art sensor technologies do not provide the three-dimensional measurements needed for an operational system, or even for wake vortex modeling to advance the understanding of vortex behavior. Most wake vortex sensor systems used today have been developed only for research applications and lack the reliability needed for continuous operation. The main challenges for the development of an operational sensor system are reliability, all-weather operation, and spatial coverage. Such a sensor has been sought for the last forty years. Acoustic sensors were first proposed and tested by the National Oceanic and Atmospheric Administration (NOAA) in the early 1970s for tracking wake vortices, but these acoustic sensors suffered from high levels of ambient noise. Over the last fifteen years, there has been renewed interest in studying noise generated by aircraft wake vortices, both numerically and experimentally. The German Aerospace Center (DLR) was the first to propose the application of a phased microphone array for the investigation of the noise sources of wake vortices. The concept was first demonstrated at Berlin's Schoenefeld Airport in 2000. A second test was conducted in Tarbes, France, in 2002, where phased microphone arrays were applied to study the wake vortex noise of an Airbus 340. Similarly, microphone phased arrays and other opto-acoustic microphones were evaluated in a field test at the Denver International Airport in 2003. For the Tarbes and Denver tests, the wake trajectories measured by the phased microphone arrays and by lidar, which were installed side by side, were compared. Due to a built-in pressure equalization vent, these microphones were not suitable for capturing acoustic noise below 20 Hz. Our group at NASA Langley Research Center developed and installed an infrasonic array at the Newport News-Williamsburg International Airport early in 2013. A pattern of pressure bursts, high-coherence intervals, and diminishing-coherence intervals was observed for all takeoff and landing events without exception. The results of a phased microphone vs. linear infrasonic array comparison will be presented.
NASA Technical Reports Server (NTRS)
Miles, J. H.
1975-01-01
Ground reflection effects on the propagation of jet noise over an asphalt surface are discussed for data obtained using a 33.02-cm diameter nozzle with microphones at several heights and distances from the nozzle axis. Ground reflection effects are analyzed using the concept of a reflected signal transfer function which represents the influence of both the reflecting surface and the atmosphere on the propagation of the reflected signal in a mathematical model. The mathematical model used as a basis for the computer program was successful in significantly reducing the ground reflection effects. The range of values of the single complex number used to define the reflected signal transfer function was larger than expected when determined only by the asphalt surface. This may indicate that the atmosphere is affecting the propagation of the reflected signal more than the asphalt surface. The selective placement of the reinforcements and cancellations in the design of an experiment to minimize ground reflection effects is also discussed.
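A minimal sketch of the kind of two-path model implied by the reflected-signal transfer function concept is given below; the geometry and the complex coefficient Q are illustrative, not the values identified in the report.

```python
# Hedged sketch of a two-path ground reflection model: the spectrum at a microphone is
# the direct wave plus a reflected wave scaled by a single complex reflected-signal
# transfer function Q. Geometry and Q are illustrative assumptions.
import numpy as np

c = 343.0  # speed of sound (m/s)

def two_path_factor(freq, src, mic, Q):
    """Ratio of received to free-field spectrum; src/mic are [range, height] in meters."""
    r_direct = np.linalg.norm(mic - src)
    image = np.array([src[0], -src[1]])             # image source below the ground plane
    r_reflect = np.linalg.norm(mic - image)
    k = 2 * np.pi * np.asarray(freq) / c
    return 1.0 + Q * (r_direct / r_reflect) * np.exp(-1j * k * (r_reflect - r_direct))

freq = np.linspace(50, 2000, 400)
factor = two_path_factor(freq, src=np.array([0.0, 1.5]), mic=np.array([30.0, 3.0]), Q=0.7)
excess_dB = 20 * np.log10(np.abs(factor))           # reinforcements (>0 dB) and cancellations (<0 dB)
```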
Plane-wave decomposition by spherical-convolution microphone array
NASA Astrophysics Data System (ADS)
Rafaely, Boaz; Park, Munhum
2004-05-01
Reverberant sound fields are widely studied, as they have a significant influence on the acoustic performance of enclosures in a variety of applications. For example, the intelligibility of speech in lecture rooms, the quality of music in auditoria, the noise level in offices, and the production of 3D sound in living rooms are all affected by the enclosed sound field. These sound fields are typically studied through frequency response measurements or statistical measures such as reverberation time, which do not provide detailed spatial information. The aim of the work presented in this seminar is the detailed analysis of reverberant sound fields. A measurement and analysis system based on acoustic theory and signal processing, designed around a spherical microphone array, is presented. Detailed analysis is achieved by decomposition of the sound field into waves, using spherical Fourier transform and spherical convolution. The presentation will include theoretical review, simulation studies, and initial experimental results.
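A minimal sketch of plane-wave decomposition for an open spherical array is shown below, assuming least-squares estimation of the spherical-harmonic coefficients and the standard open-sphere modal strength; the order, radius, and sampling are illustrative, and the division by b_n requires regularization near its zeros.

```python
# Hedged sketch of plane-wave decomposition for an open spherical microphone array:
# estimate spherical-harmonic coefficients of the measured pressure by least squares,
# then divide by the modal strength b_n(kr) = 4*pi*(1j**n)*j_n(kr).
import numpy as np
from scipy.special import sph_harm, spherical_jn

def sh_matrix(order, theta, phi):
    """Matrix of Y_nm(theta, phi); theta = azimuth, phi = polar angle (scipy convention)."""
    cols = [sph_harm(m, n, theta, phi) for n in range(order + 1) for m in range(-n, n + 1)]
    return np.column_stack(cols)

def plane_wave_coeffs(p, theta, phi, order, k, radius):
    """p: complex pressures at the microphones for one frequency bin."""
    Y = sh_matrix(order, theta, phi)                  # (n_mics, (order+1)^2)
    p_nm, *_ = np.linalg.lstsq(Y, p, rcond=None)      # pressure SH coefficients
    n_idx = np.concatenate([[n] * (2 * n + 1) for n in range(order + 1)])
    b_n = 4 * np.pi * (1j ** n_idx) * spherical_jn(n_idx, k * radius)
    return p_nm / b_n                                 # plane-wave (directional) amplitude density
```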
Dorman, Michael F; Natale, Sarah; Loiselle, Louise
2018-03-01
Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet.
Assessment of ground effects on the propagation of aircraft noise: The T-38A flight experiment
NASA Technical Reports Server (NTRS)
Willshire, W. L., Jr.
1980-01-01
A flight experiment was conducted to investigate air-to-ground propagation of sound at grazing angles of incidence. A turbojet-powered airplane was flown at altitudes ranging from 10 to 160 m over a 20-microphone array positioned over grass and concrete. The dependence of ground effects on frequency, incidence angle, and slant range was determined using two analysis methods. In one method, a microphone close to the flight path is compared to down-range microphones. In the other method, comparisons are made between two microphones which were equidistant from the flight path but positioned over the two surfaces. In both methods, source directivity angle was the criterion by which portions of the microphone signals were compared. The ground effects were largest in the frequency range of 200 to 400 Hz and were found to be dependent on incidence angle and slant range. Ground effects measured for angles of incidence greater than 10 deg to 15 deg were near zero. Measured attenuation increased with increasing slant range for slant ranges less than 750 m. Theoretical predictions were found to be in good agreement with the major details of the measured results.
Imaging of heart acoustic based on the sub-space methods using a microphone array.
Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo
2017-07-01
Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG using a microphone array, so that the internal sound sources of the heart can be localized as well. In this paper, we propose a modality by which the locations of the active sources in the heart can be investigated during a cardiac cycle. A microphone array with six microphones is employed as the recording setup placed on the human chest. The Group Delay MUSIC algorithm, a sub-space-based localization method, is then used to estimate the location of the heart sources in different phases of the PCG. We achieved a 0.14 cm mean error for the sources of a first heart sound (S1) simulator and a 0.21 cm mean error for the sources of a second heart sound (S2) simulator with the Group Delay MUSIC algorithm. The acoustical diagrams created for human subjects show distinct patterns in various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart acoustic map presents a new outlook on the acoustic properties of the cardiovascular system and disorders of the valves and thereby, in the future, could be used as a new diagnostic tool.
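For orientation, the sketch below implements a standard narrowband MUSIC scan over a grid of candidate source locations; it is not the Group Delay MUSIC variant used in the paper, and the array geometry, frequency, and grid are illustrative.

```python
# Hedged sketch of a standard narrowband MUSIC scan for a small microphone array.
import numpy as np

c = 343.0

def steering_vector(mics, point, freq):
    tau = np.linalg.norm(mics - point, axis=1) / c
    return np.exp(-2j * np.pi * freq * tau)

def music_spectrum(snapshots, mics, grid, freq, n_sources=1):
    """snapshots: (n_mics, n_snapshots) narrowband data; grid: (n_points, 3) candidate locations."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # spatial covariance
    _, vecs = np.linalg.eigh(R)                               # eigenvalues ascending
    En = vecs[:, :-n_sources]                                 # noise subspace
    P = np.empty(len(grid))
    for i, g in enumerate(grid):
        a = steering_vector(mics, g, freq)
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P                                                  # peaks indicate likely source locations
```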
Infrasound array observation at Sakurajima volcano
NASA Astrophysics Data System (ADS)
Yokoo, A.; Suzuki, Y. J.; Iguchi, M.
2012-12-01
Showa crater, on the southeastern flank of Sakurajima volcano, has been erupting since 2006, with intermittent Vulcanian eruptions accompanied by small-scale ash emissions. We conducted an array observation in the last half of 2011 in order to locate the infrasound sources generated by the eruptions. The array, located 3.5 km from the crater, was composed of 5 microphones (1 kHz sampling) aligned in the radial direction from the crater at 100 m intervals; 4 additional microphones (200 Hz sampling) were added in December 2011 along a line tangential to the first. Two peaks, around 2 Hz and 0.5 Hz, were identified in the power spectrum of the infrasound; the former peak would be related to the eigenfrequency of the vent of Showa crater, while the latter would be related to the ejection of eruption clouds. These interpretations should be checked by experimental studies. The first 10 s of the infrasound signal were produced directly by the explosion, and the following small-amplitude infrasound tremor, lasting about 2 min, was mostly composed of diffracted and reflected waves from the topography around the volcano, mainly the wall of the Aira Caldera. This shows that the propagation direction of the infrasound tremor following the explosion signal should be carefully examined. No clear change in the height of the infrasound source was identified while the volcanic cloud grew. Strong eddies of the growing volcanic cloud would not be the main sources of such weak infrasound signals; thus, the infrasound waves are emitted mainly from (or through) the vent itself.
Acoustic Source Localization in Aircraft Interiors Using Microphone Array Technologies
NASA Technical Reports Server (NTRS)
Sklanka, Bernard J.; Tuss, Joel R.; Buehrle, Ralph D.; Klos, Jacob; Williams, Earl G.; Valdivia, Nicolas
2006-01-01
Using three microphone array configurations at two aircraft body stations on a Boeing 777-300ER flight test, the acoustic radiation characteristics of the sidewall and outboard floor system are investigated by experimental measurement. Analysis of the experimental data is performed using sound intensity calculations for closely spaced microphones, PATCH Inverse Boundary Element Nearfield Acoustic Holography, and Spherical Nearfield Acoustic Holography. The methods are compared by assessing their strengths and weaknesses, evaluating source identification capability for both broadband and narrowband sources, evaluating sources during transient and steady-state conditions, and quantifying field reconstruction continuity using multiple array positions.
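The closely spaced microphone (p-p) intensity calculation mentioned above is typically based on the imaginary part of the cross-spectrum between the two microphones; a minimal sketch follows, with the usual caveat that the sign depends on the cross-spectrum convention.

```python
# Hedged sketch of the two-microphone (p-p) estimate of active sound intensity along
# the probe axis: I(f) ~ -Im{G12(f)} / (rho * 2*pi*f * dr). Spacing and names illustrative.
import numpy as np
from scipy.signal import csd

def pp_intensity(p1, p2, fs, spacing, rho=1.21, nperseg=4096):
    f, G12 = csd(p1, p2, fs=fs, nperseg=nperseg)
    omega = 2 * np.pi * np.maximum(f, 1e-6)
    return f, -np.imag(G12) / (rho * omega * spacing)   # intensity spectrum along the probe axis
```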
Dynamically Reconfigurable Microphone Arrays
2011-05-01
[...] from a number of different positions. In the second set of tests, the 2 wireless microphones were combined with a rigid binaural array on top of the B21r [...] Using only a standard computer sound card, a robot is limited to binaural inputs. Even when using wireless microphones, the audio [...]
Design and Implementation of Sound Searching Robots in Wireless Sensor Networks
Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao
2016-01-01
A sound target-searching robot system is described which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has embedded sound signal enhancement, recognition, and location methods, and a sound-searching strategy based on a digital signal processor (DSP). Three robots, acting as wireless network nodes, form the WSN together with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. Mel-frequency cepstral coefficients (MFCCs) are extracted as the audio signal features. Based on the K-nearest-neighbor classification method, the trained feature templates are matched to recognize the sound signal type. The paper utilizes an improved generalized cross-correlation method to estimate the time difference of arrival (TDOA), and then employs spherical interpolation to locate the sound according to the TDOA and the geometrical positions of the microphone array. A new mapping has been proposed to direct the motors to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound-target-searching function without collisions and performs well.
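The generalized cross-correlation family referred to above is most often implemented with the phase transform (GCC-PHAT); a minimal sketch of that baseline (not the paper's improved variant) is given below.

```python
# Hedged sketch of GCC-PHAT time-delay estimation between two microphone channels.
import numpy as np

def gcc_phat_delay(x, y, fs, max_tau=None):
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.maximum(np.abs(R), 1e-12)                 # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                                 # delay of x relative to y (s), positive if x lags
```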
Spatial acoustic radiation of respiratory sounds for sleep evaluation.
Shabtai, Noam R; Zigel, Yaniv
2017-09-01
Body posture affects sleeping quality and breathing disorders, and therefore it is important to recognize it as part of the sleep evaluation process. Since humans have a directional acoustic radiation pattern, it is hypothesized that microphone arrays can be used to recognize different body postures, which is highly practical for sleep evaluation applications that already measure respiratory sounds using distant microphones. Furthermore, body posture may affect the distant-microphone measurement itself; hence, the measurement can be compensated if the body posture is correctly recognized. A spherical harmonics decomposition approach to the spatial acoustic radiation is presented, assuming an array of eight microphones in a medium-sized audiology booth. The spatial sampling and reconstruction of the radiation pattern are discussed, and a final setup for the microphone array is recommended. A case study is shown using recorded segments of snoring and breathing sounds of three human subjects in three body postures in a silent but not anechoic audiology booth.
Mellow, Tim; Kärkkäinen, Leo
2014-03-01
An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.
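For reference, a minimal statement of the Kirchhoff-Helmholtz integral underlying the monopole/dipole curtain discussion is given below; sign and normal-direction conventions vary between texts, so this should be read as a generic form rather than the exact expression solved in the paper.

```latex
% Kirchhoff--Helmholtz integral for a field point r enclosed by the surface S:
p(\mathbf{r}) \;=\; \oint_{S}\!\left[
  G(\mathbf{r},\mathbf{r}_s)\,\frac{\partial p(\mathbf{r}_s)}{\partial n}
  \;-\; p(\mathbf{r}_s)\,\frac{\partial G(\mathbf{r},\mathbf{r}_s)}{\partial n}
\right] \mathrm{d}S,
\qquad
G(\mathbf{r},\mathbf{r}_s) \;=\; \frac{e^{ik|\mathbf{r}-\mathbf{r}_s|}}{4\pi\,|\mathbf{r}-\mathbf{r}_s|}.
```

Loosely, the first (monopole) term corresponds to a curtain driven by the normal pressure gradient (particle velocity), and the second (dipole) term to one driven by the pressure itself, which is the distinction drawn in the abstract between monopole and dipole curtains.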
Speaker Localisation Using Time Difference of Arrival
2008-04-01
[...] by steering microphone arrays to improve the quality of audio pickup for recording, communication and transcription; enhancing the separation [...]
Chemical and explosive detections using photo-acoustic effect and quantum cascade lasers
NASA Astrophysics Data System (ADS)
Choa, Fow-Sen
2013-12-01
The photoacoustic (PA) effect is a sensitive spectroscopic technique for chemical sensing. In recent years, with the development of quantum cascade lasers (QCLs), significant progress has been achieved in PA sensing applications. Using high-power, tunable mid-IR QCLs as laser sources, PA chemical sensor systems have demonstrated parts-per-trillion-level detection sensitivity. Many of these high-sensitivity measurements were demonstrated locally in PA cells. Recently, we have demonstrated standoff PA detection of isopropanol vapor at a distance of more than 41 feet using a quantum cascade laser and a microphone with acoustic reflectors. We further demonstrated solid-phase TNT detection at a standoff distance of 8 feet. To further calibrate the detection sensitivity, we used nerve gas simulants that were generated and calibrated by a commercial vapor generator. Standoff detection of gas samples with a calibrated concentration of 2.3 ppm was achieved at a detection distance of more than 2 feet. An extended detection distance of up to 14 feet was observed for a higher gas concentration of 13.9 ppm. For field operations, arrays of microphones and microphone-reflector pairs can be utilized to achieve noise rejection and signal enhancement. We have experimentally demonstrated a 4-microphone/4-reflector system with a combined SNR of 12.48 dB; for the 16-microphone, one-reflector case, an SNR of 17.82 was achieved. These successful chemical sensing demonstrations will likely create new demands for widely tunable QCLs with ultralow-threshold (for local, fire-alarm-size detection systems) and high-power (for standoff detection systems) performance.
NASA Astrophysics Data System (ADS)
Wu, Bo; Yang, Minglei; Li, Kehuang; Huang, Zhen; Siniscalchi, Sabato Marco; Wang, Tong; Lee, Chin-Hui
2017-12-01
A reverberation-time-aware deep-neural-network (DNN)-based multi-channel speech dereverberation framework is proposed to handle a wide range of reverberation times (RT60s). There are three key steps in designing a robust system. First, to accomplish simultaneous speech dereverberation and beamforming, we propose a framework, namely DNNSpatial, that selectively concatenates log-power spectral (LPS) input features of reverberant speech from multiple microphones in an array and maps them into the expected output LPS features of anechoic reference speech based on a single deep neural network (DNN). Next, the temporal auto-correlation function of received signals at different RT60s is investigated to show that RT60-dependent temporal-spatial contexts in feature selection are needed in the DNNSpatial training stage in order to optimize the system performance in diverse reverberant environments. Finally, the RT60 is estimated to select the proper temporal and spatial contexts before feeding the log-power spectrum features to the trained DNNs for speech dereverberation. The experimental evidence gathered in this study indicates that the proposed framework outperforms the state-of-the-art signal-processing dereverberation algorithm weighted prediction error (WPE) and conventional DNNSpatial systems that do not take the reverberation time into account, even for extremely weak and severe reverberant conditions. The proposed technique generalizes well to unseen room sizes, array geometries and loudspeaker positions, and is robust to reverberation time estimation error.
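A minimal sketch of the reverberation-time-aware context-selection idea is shown below; the mapping from estimated RT60 to context width is an illustrative assumption, not the authors' trained configuration.

```python
# Hedged sketch: pick the temporal context width from an estimated RT60, then stack
# log-power-spectrum (LPS) frames from all microphones as the DNN input vector.
import numpy as np

def choose_context(rt60_s):
    """More reverberation -> wider temporal context (illustrative thresholds)."""
    if rt60_s < 0.3:
        return 3          # frames on each side of the current frame
    if rt60_s < 0.6:
        return 5
    return 7

def stack_features(lps, n_context):
    """lps: (n_mics, n_frames, n_bins). Returns (n_frames, n_mics*(2*n_context+1)*n_bins)."""
    n_mics, n_frames, n_bins = lps.shape
    padded = np.pad(lps, ((0, 0), (n_context, n_context), (0, 0)), mode="edge")
    frames = [padded[:, t:t + 2 * n_context + 1, :].ravel() for t in range(n_frames)]
    return np.stack(frames)
```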
NASA Astrophysics Data System (ADS)
Montazeri, Allahyar; Taylor, C. James
2017-10-01
This article addresses the coupling of acoustic secondary sources in a confined space within a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior is solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of the loudspeakers through the acoustic modes of the enclosure changes the source strengths and hence the driving voltage signals applied to the secondary loudspeakers. The practical significance of this model is to provide better insight into the performance of sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones is placed within a fraction of a wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance, depending on the electromechanical properties of the loudspeakers.
Deconvolution Methods and Systems for the Mapping of Acoustic Sources from Phased Microphone Arrays
NASA Technical Reports Server (NTRS)
Humphreys, Jr., William M. (Inventor); Brooks, Thomas F. (Inventor)
2012-01-01
Mapping of coherent/incoherent acoustic sources as determined from a phased microphone array. A linear configuration of equations and unknowns is formed by accounting for a reciprocal influence of one or more cross-beamforming characteristics thereof at varying grid locations among the plurality of grid locations. An equation derived from the linear configuration of equations and unknowns can then be iteratively determined. The equation can be attained by the solution requirement of a constraint equivalent to the physical assumption that the coherent sources have only in-phase coherence. The size of the problem may then be reduced using zoning methods. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with a phased microphone array (microphones arranged in an optimized grid pattern including a plurality of grid locations) in order to compile an output presentation thereof, thereby removing beamforming characteristics from the resulting output presentation.
Deconvolution methods and systems for the mapping of acoustic sources from phased microphone arrays
NASA Technical Reports Server (NTRS)
Brooks, Thomas F. (Inventor); Humphreys, Jr., William M. (Inventor)
2010-01-01
A method and system for mapping acoustic sources determined from a phased microphone array. A plurality of microphones are arranged in an optimized grid pattern including a plurality of grid locations thereof. A linear configuration of N equations and N unknowns can be formed by accounting for a reciprocal influence of one or more beamforming characteristics thereof at varying grid locations among the plurality of grid locations. A full-rank equation derived from the linear configuration of N equations and N unknowns can then be iteratively determined. Full rank can be attained by the solution requirement of the positivity constraint, equivalent to the physical assumption of statistically independent noise sources at each of the N locations. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with the phased microphone array in order to compile an output presentation thereof, thereby removing the beamforming characteristics from the resulting output presentation.
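The iterative, positivity-constrained solution described above is the core of DAMAS-style deconvolution; a minimal Gauss-Seidel-type sketch (not NASA's production implementation) follows.

```python
# Hedged sketch of a DAMAS-style iteration: solve A x = y for source strengths x >= 0,
# where A is the array point-spread (cross-beamforming) matrix over the grid and y is
# the conventional beamform map.
import numpy as np

def damas_like(A, y, n_iter=100):
    n = len(y)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):                            # sweep grid points in place
            residual = y[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = max(residual / A[i, i], 0.0)       # positivity constraint
    return x
```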
Infrasound research at Kola Regional Seismological Centre, Russia
NASA Astrophysics Data System (ADS)
Asming, Vladimir; Kremenetskaya, Elena
2013-04-01
A small-aperture infrasound array was installed on the Kola Peninsula, Russia, 17 km from the town of Apatity, in 2000. It comprises 3 Chaparral V microbarographs placed close to the APA seismic array sensors and equipped with wind-reducing pipe filters. The data are digitized at the array site and transmitted in real time to a processing center in Apatity. To search for infrasound events (arrivals of coherent signals), a beamforming-style detector has been developed; it now works in near real time. We analyzed the detection statistics for different frequency bands. Most man-made events are detected in the 1-5 Hz band, and microbaroms are typically detected in the 0.2-1 Hz band. At lower frequencies we record mostly wind noise. A database of samples of infrasound signals of different natures has been collected. It contains recordings of microbaroms, industrial and military explosions, airplane shock waves, infrasound of airplanes, thunder, rocket launches and reentries, bolides, etc. The most distant signals we have detected are associated with Kursk Magnetic Anomaly explosions (1700 km from Apatity). We implemented an algorithm for the association of infrasound signals and preliminary location of infrasound events by several arrays. It was tested with Apatity data together with data from the Swedish-Finnish infrasound network operated by the Institute of Space Physics in Umea (Sweden). By agreement with NORSAR we have real-time access to the data of the Norwegian experimental infrasound installation situated in Karasjok (North Norway). Currently our detection and location programs work with both the Apatity and Norwegian data, and the results are available on the Internet. The Finnish military routinely destroys out-of-date weapons each autumn at the same compact site in northern Finland. This is a great source of repeating infrasound signals of the same magnitude and origin; we have recorded several hundred such explosions, and the signals have been used for testing our location routines. Some factors were observed that enable or disable first (tropospheric) arrivals of such signals depending on weather conditions. Systematic backazimuth deviations for stratospheric arrivals have been observed, caused by strong stratospheric winds. In 2009 mobile infrasound arrays were developed at KRSC. Each array comprises 3 low-frequency microphones, GPS, a digitizer and a PC with a data acquisition system. The aperture of such arrays is about 250 m, and the deployment time is less than 1 hour. These arrays are used in experimental work with the Roskosmos space agency to locate the reentry sites of space debris. In 2012 a wireless version of the mobile array was created. Each acquisition point comprises a microphone, GPS and ADC chips, a microcontroller and a radio modem to send data to a central unit. This enabled us to increase the aperture (up to 500 m) and decrease the deployment time.
Application of MEMS Microphone Array Technology to Airframe Noise Measurements
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Shams, Qamar A.; Graves, Sharon S.; Sealey, Bradley S.; Bartram, Scott M.; Comeaux, Toby
2005-01-01
Current generation microphone directional array instrumentation is capable of extracting accurate noise source location and directivity data on a variety of aircraft components, resulting in significant gains in test productivity. However, with this gain in productivity has come the desire to install larger and more complex arrays in a variety of ground test facilities, creating new challenges for the designers of array systems. To overcome these challenges, a research study was initiated to identify and develop hardware and fabrication technologies which could be used to construct an array system exhibiting acceptable measurement performance but at much lower cost and with much simpler installation requirements. This paper describes an effort to fabricate a 128-sensor array using commercially available Micro-Electro-Mechanical System (MEMS) microphones. The MEMS array was used to acquire noise data for an isolated 26%-scale high-fidelity Boeing 777 landing gear in the Virginia Polytechnic Institute and State University Stability Tunnel across a range of Mach numbers. The overall performance of the array was excellent, and major noise sources were successfully identified from the measurements.
NASA Astrophysics Data System (ADS)
Johnson, J. B.; Marcillo, O. E.; Arechiga, R. O.; Johnson, R.; Edens, H. E.; Marshall, H.; Havens, S.; Waite, G. P.
2010-12-01
Volcanoes, lightning, and mass wasting events generate substantial infrasonic energy that propagates for long distances through the atmosphere with generally low intrinsic attenuation. Although such sources are often studied with regional infrasound arrays that provide important records of their occurrence, position, and relative magnitudes, signals recorded at tens to hundreds of kilometers are often significantly affected by propagation effects. Complex atmospheric structure, due to heterogeneous winds and temperatures, and intervening topography can be responsible for multi-pathing, signal attenuation, and focusing or, alternatively, information loss (i.e., a shadow zone). At far offsets, geometric spreading diminishes signal amplitude, requiring low-noise recording sites and high-fidelity microphones. In contrast, recorded excess pressures at local distances are much higher in amplitude and the waveforms are more representative of source phenomena. We report on recent studies of volcanoes, thunder, and avalanches made with networks and arrays of infrasound sensors deployed locally (within a few km of the source). At Kilauea Volcano (Hawaii) we deployed a network of ~50 infrasound-sensitive sensors (flat from 50 s to 50 Hz) to track the coherence of persistent infrasonic tremor signals in the near field (out to a few tens of kilometers). During periods of high winds (> 5-10 m/s) we found significant atmospheric influence on signals recorded at stations only a few kilometers from the source. Such observations have encouraged us to conduct a range of volcano, thunder, and snow avalanche studies with networks of small infrasound arrays (~30 m aperture) deployed close to the source region. We present results from local microphone deployments (12 sensors) at Santiaguito Volcano (Guatemala), where we are able to precisely (~10 m resolution) locate acoustic sources from explosions and rock falls. We also present results from our thunder-mapping acoustic arrays (15 sensors) in the Magdalena Mountains of New Mexico, which are capable of mapping lightning channels more than 10 km in extent and whose positions are corroborated by the radio-wave-detecting lightning mapping array. We also discuss the recent implementation of a network of snow avalanche detection arrays (12 sensors) in Idaho that are used to monitor, track, and map infrasound produced by moving sources. We contend that local infrasound deployment is analogous to local seismic monitoring in that it enables precision localization and interpretation of source phenomena.
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. The source locations and strengths can then be estimated using a variant of the EM algorithm known as the Evidential EM (E2M) algorithm. Finally, both simulations and real experiments are presented to illustrate the advantage of using the EM in the case without uncertainty and the E2M in the case of uncertain measurements.
Active room compensation for sound reinforcement using sound field separation techniques.
Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena
2018-03-01
This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
Fiber optic microphone with large dynamic range based on bi-fiber Fabry-Perot cavity
NASA Astrophysics Data System (ADS)
Cheng, Jin; Lu, Dan-feng; Gao, Ran; Qi, Zhi-mei
2017-10-01
In this paper, we report a fiber optic microphone with a large dynamic range. The microphone probe consists of a bi-fiber Fabry-Perot cavity architecture. The wavelength of the working laser is about 1552.05 nm. At this wavelength, the interference spectra of the two fiber Fabry-Perot cavities are shifted into quadrature, so the outputs of the two fiber Fabry-Perot sensors are orthogonal signals. By using an orthogonal-signal demodulation method, the microphone can output the acoustic waveform. Because the output signal is not restricted to the linear region of the interference spectrum, the microphone has a large maximum acoustic pressure, above 125 dB.
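A minimal sketch of quadrature (orthogonal-signal) demodulation of the two cavity outputs is given below, assuming the two signals are 90 degrees apart and zero-mean; conversion to pascals is omitted.

```python
# Hedged sketch of quadrature demodulation for two interferometer outputs that are in
# quadrature: the optical phase is recovered with arctan2 and unwrapped, so the result
# is not confined to the linear part of one interference fringe.
import numpy as np

def quadrature_demodulate(i_signal, q_signal):
    """i_signal, q_signal: the two quadrature-shifted Fabry-Perot outputs (zero-mean)."""
    phase = np.unwrap(np.arctan2(q_signal, i_signal))   # optical phase versus time
    return phase - phase.mean()                         # acoustic waveform up to a scale factor
```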
Directionality of dog vocalizations
NASA Astrophysics Data System (ADS)
Frommolt, Karl-Heinz; Gebler, Alban
2004-07-01
The directionality patterns of sound emission in domestic dogs were measured in an anechoic environment using a microphone array. Mainly long-distance signals from four dogs were investigated. The radiation pattern of the signals differed clearly from an omnidirectional one, with average differences in sound-pressure level between the frontal and rear positions of 3-7 dB depending on the individual. Frequency dependence of directionality was shown for the range from 250 to 3200 Hz. The results indicate that when studying acoustic communication in mammals, more attention should be paid to the directionality pattern of sound emission.
Mens, Lucas H M
2011-01-01
To test speech understanding in noise using array microphones integrated in an eyeglass device and to test if microphones placed anteriorly at the temple provide better directivity than above the pinna. Sentences were presented from the front and uncorrelated noise from 45, 135, 225 and 315°. Fifteen hearing impaired participants with a significant speech discrimination loss were included, as well as 5 normal hearing listeners. The device (Varibel) improved speech understanding in noise compared to most conventional directional devices with a directional benefit of 5.3 dB in the asymmetric fit mode, which was not significantly different from the bilateral fully directional mode (6.3 dB). Anterior microphones outperformed microphones at a conventional position above the pinna by 2.6 dB. By integrating microphones in an eyeglass frame, a long array can be used resulting in a higher directionality index and improved speech understanding in noise. An asymmetric fit did not significantly reduce performance and can be considered to increase acceptance and environmental awareness. Directional microphones at the temple seemed to profit more from the head shadow than above the pinna, better suppressing noise from behind the listener.
Sub-Surface Windscreen for the Measurement of Outdoor Infrasound
NASA Technical Reports Server (NTRS)
Shams, Qamar A.; Burkett, Cecil G., Jr.; Comeaux, Toby; Zuckerwar, Allan J.; Weistroffer, George R.
2008-01-01
A windscreen has been developed that features two advantages favorable for the measurement of outdoor infrasound. First, the sub-surface location, with the top of the windscreen flush with the ground surface, minimizes the mean velocity of the impinging wind. Secondly, the windscreen material (closed-cell polyurethane foam) has a sufficiently low acoustic impedance (222 times that of air) and wall thickness (0.0127 m) to provide a transmission coefficient of nearly unity over the infrasonic frequency range (0-20 Hz). The windscreen, a tightly sealed box having internal dimensions of 0.3048 x 0.3048 x 0.3556 m, contains a microphone, a preamplifier, and a cable feed-through to an external power supply. Provisions are made for rain drainage and seismic isolation. A three-element array, configured as an equilateral triangle with 30.48 m spacing and operating continuously in the field, periodically receives highly coherent signals attributed to emissions from atmospheric turbulence. The time delays between the infrasonic signals received at the microphones permit determination of the bearing and elevation of the sources, which correlate well with the locations of pilot reports (PIREPS) within a 320 km radius about the array. The test results are interpreted to yield spectral information on infrasonic emissions from clear air turbulence.
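A minimal sketch of how bearing and elevation could be estimated from inter-element time delays of a small horizontal array under a plane-wave assumption is shown below; the coordinate convention (x east, y north) and the layout are illustrative, not the deployed configuration.

```python
# Hedged sketch: solve for the horizontal slowness vector from inter-element delays by
# least squares; the back-azimuth points toward the source and cos(elevation) = c*|s|.
import numpy as np

c = 343.0

def bearing_elevation(xy, delays_to_ref, ref=0):
    """xy: (n, 2) element positions (m, x east / y north); delays_to_ref[i] = t_i - t_ref (s)."""
    d = np.delete(np.arange(len(xy)), ref)
    B = xy[d] - xy[ref]                                # baselines relative to the reference element
    s, *_ = np.linalg.lstsq(B, np.asarray(delays_to_ref)[d], rcond=None)  # horizontal slowness
    back_azimuth = np.degrees(np.arctan2(-s[0], -s[1])) % 360.0   # toward the source, CW from north
    elevation = np.degrees(np.arccos(np.clip(c * np.linalg.norm(s), 0.0, 1.0)))
    return back_azimuth, elevation
```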
Acoustical Direction Finding with Time-Modulated Arrays
Clark, Ben; Flint, James A.
2016-01-01
Time-Modulated Linear Arrays (TMLAs) offer useful efficiency savings over conventional phased arrays when applied in parameter estimation applications. The present paper considers the application of TMLAs to acoustic systems and proposes an algorithm for efficiently deriving the arrival angle of a signal. The proposed technique is applied in the frequency domain, where the signal and harmonic content is captured. Using a weighted average method on harmonic amplitudes and their respective main beam angles, it is possible to determine an estimate for the signal's direction of arrival. The method is demonstrated and evaluated using results from both numerical and practical implementations and performance data is provided. The use of Micro-Electromechanical Systems (MEMS) sensors allows time-modulation techniques to be applied at ultrasonic frequencies. Theoretical predictions for an array of five isotropic elements with half-wavelength spacing and 1000 data samples suggest an accuracy of ±1° within an angular range of approximately ±50°. In experiments of a 40 kHz five-element microphone array, a Direction of Arrival (DoA) estimation within ±2.5° of the target signal is readily achieved inside a ±45° range using a single switched input stage and a simple hardware setup.
Adaptive Noise Reduction Techniques for Airborne Acoustic Sensors
2012-01-01
[...] consuming less energy than active systems such as radar, lidar, or sonar [5]. Ground and marine-based acoustic arrays are currently employed in a variety of [...] factors for the performance of an airborne acoustic array. An audio microphone is a transducer that converts [...]
Vasta, Robert; Crandell, Ian; Millican, Anthony; House, Leanna; Smith, Eric
2017-10-13
Microphone sensor systems provide information that may be used for a variety of applications. Such systems generate large amounts of data. One concern is with microphone failure and unusual values that may be generated as part of the information collection process. This paper describes methods and a MATLAB graphical interface that provides rapid evaluation of microphone performance and identifies irregularities. The approach and interface are described. An application to a microphone array used in a wind tunnel is used to illustrate the methodology.
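A rough sketch of the kind of screening such a tool might perform is given below (written in Python rather than the paper's MATLAB); the thresholds and the assumption that the data are scaled to ±1 full scale are illustrative.

```python
# Hedged sketch of a simple screening pass for microphone irregularities: flag channels
# whose broadband level deviates strongly from the array median, plus channels with
# clipped or dead (constant) samples. Assumes waveforms scaled to +/-1 full scale.
import numpy as np

def screen_channels(data, clip_level=0.99, level_tol_db=6.0):
    """data: (n_channels, n_samples) array of simultaneously acquired waveforms."""
    levels = 10 * np.log10(np.mean(data**2, axis=1) + 1e-20)
    level_flags = np.abs(levels - np.median(levels)) > level_tol_db
    clip_flags = np.mean(np.abs(data) >= clip_level, axis=1) > 0.001   # >0.1% samples at full scale
    dead_flags = np.std(data, axis=1) < 1e-12
    return np.where(level_flags | clip_flags | dead_flags)[0]          # suspect channel indices
```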
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern-matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
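The basic relationship exploited above is that a pure time delay appears as a linear trend in the cross-power-spectrum phase; the sketch below fits that trend by least squares over well-correlated bins, without the iterative adaptive-delay refinement described in the paper.

```python
# Hedged sketch: estimate the delay between two coherent signals from the slope of the
# cross-power-spectrum phase, keeping only frequency bins with adequate coherence.
import numpy as np
from scipy.signal import csd, coherence

def delay_from_phase_slope(x, y, fs, nperseg=4096, coh_min=0.5):
    f, Gxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    use = (Cxy > coh_min) & (f > 0)                      # keep well-correlated bins only
    phase = np.unwrap(np.angle(Gxy[use]))
    slope = np.polyfit(f[use], phase, 1)[0]              # radians per Hz
    return -slope / (2 * np.pi)                          # delay of y relative to x (s)
```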
Field-Deployable Acoustic Digital Systems for Noise Measurement
NASA Technical Reports Server (NTRS)
Shams, Qamar A.; Wright, Kenneth D.; Lunsford, Charles B.; Smith, Charlie D.
2000-01-01
Langley Research Center (LaRC) has for years been a leader in field acoustic array measurement techniques. Two field-deployable digital measurement systems have been developed to support acoustic research programs at LaRC. For several years, LaRC has used the Digital Acoustic Measurement System (DAMS) for measuring the acoustic noise levels from rotorcraft and tiltrotor aircraft. Recently, a second system called the Remote Acquisition and Storage System (RASS) was developed and deployed in the field for the first time, along with the DAMS system, for the Community Noise Flight Test using the NASA LaRC 757 aircraft during April 2000. The test was performed at Airborne Airport in Wilmington, OH to validate predicted noise reduction benefits from alternative operational procedures. The test matrix was composed of various combinations of altitude, cutback power, and aircraft weight. The DAMS digitizes the acoustic inputs at the microphone site and can be located up to 2000 feet from the van which houses the acquisition, storage and analysis equipment. Digitized data from up to 10 microphones are recorded on a Jaz disk and analyzed post-test by a microcomputer system. The RASS digitizes and stores acoustic inputs at the microphone site, which can be located up to three miles from the base station, and can compose a 3 mile by 3 mile array of microphones. 16-bit digitized data from the microphones are stored on a removable Jaz disk and transferred through a high-speed array to a very large high-speed permanent storage device. Up to 30 microphones can be utilized in the array. System control and monitoring is accomplished via a Radio Frequency (RF) link. This paper will present a detailed description of both systems, along with acoustic data analysis from both.
Development of a Microphone Phased Array Capability for the Langley 14- by 22-Foot Subsonic Tunnel
NASA Technical Reports Server (NTRS)
Humphreys, William M.; Brooks, Thomas F.; Bahr, Christopher J.; Spalt, Taylor B.; Bartram, Scott M.; Culliton, William G.; Becker, Lawrence E.
2014-01-01
A new aeroacoustic measurement capability has been developed for use in open-jet testing in the NASA Langley 14- by 22-Foot Subsonic Tunnel (14x22 tunnel). A suite of instruments has been developed to characterize noise source strengths, locations, and directivity for both semi-span and full-span test articles in the facility. The primary instrument of the suite is a fully traversable microphone phased array for identification of noise source locations and strengths on models. The array can be mounted in the ceiling or on either side of the facility test section to accommodate various test article configurations. Complementing the phased array is an ensemble of streamwise traversing microphones that can be placed around the test section at defined locations to conduct noise source directivity studies along both flyover and sideline axes. A customized data acquisition system has been developed for the instrumentation suite that allows for command and control of all aspects of the array and microphone hardware, and is coupled with a comprehensive data reduction system to generate information in near real time. This information includes such items as time histories and spectral data for individual microphones and groups of microphones, contour presentations of noise source locations and strengths, and hemispherical directivity data. The data acquisition system integrates with the 14x22 tunnel data system to allow real time capture of facility parameters during acquisition of microphone data. The design of the phased array system has been vetted via a theoretical performance analysis based on conventional monopole beamforming and DAMAS deconvolution. The performance analysis provides the ability to compute figures of merit for the array as well as characterize factors such as beamwidths, sidelobe levels, and source discrimination for the types of noise sources anticipated in the 14x22 tunnel. The full paper will summarize in detail the design of the instrumentation suite, the construction of the hardware system, and the results of the performance analysis. Although the instrumentation suite is designed to characterize noise for a variety of test articles in the 14x22 tunnel, this paper will concentrate on description of the instruments for two specific test campaigns in the facility, namely a full-span NASA Hybrid Wing Body (HWB) model entry and a semi-span Gulfstream aircraft model entry, tested in the facility in the winter of 2012 and spring of 2013, respectively.
Uncovering Spatial Variation in Acoustic Environments Using Sound Mapping.
Job, Jacob R; Myers, Kyle; Naghshineh, Koorosh; Gill, Sharon A
2016-01-01
Animals select and use habitats based on environmental features relevant to their ecology and behavior. For animals that use acoustic communication, the sound environment itself may be a critical feature, yet acoustic characteristics are not commonly measured when describing habitats; as a result, how habitats vary acoustically over space and time is poorly known. Such considerations are timely, given worldwide increases in anthropogenic noise combined with rapidly accumulating evidence that noise hampers the ability of animals to detect and interpret natural sounds. Here, we used microphone arrays to record the sound environment in three terrestrial habitats (forest, prairie, and urban) under ambient conditions and during experimental noise introductions. We mapped sound pressure levels (SPLs) over spatial scales relevant to diverse taxa to explore spatial variation in acoustic habitats and to evaluate the number of microphones needed within arrays to capture this variation under both ambient and noisy conditions. Even at small spatial scales and over relatively short time spans, SPLs varied considerably, especially in forest and urban habitats, suggesting that quantifying and mapping acoustic features could improve habitat descriptions. Subset maps based on input from 4, 8, 12 and 16 microphones differed slightly (< 2 dBA/pixel) from those based on full arrays of 24 microphones under ambient conditions across habitats. Map differences were more pronounced with noise introductions, particularly in forests; maps made from only 4 microphones differed more (> 4 dBA/pixel) from full maps than the remaining subset maps did, but maps with input from eight microphones resulted in smaller differences. Thus, acoustic environments varied over small spatial scales, and this variation could be mapped with input from 4-8 microphones. Mapping sound in different environments will improve understanding of acoustic environments and allow us to explore the influence of spatial variation in sound on animal ecology and behavior.
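The per-pixel comparison between full-array and subset sound maps described above can be sketched as a simple interpolation exercise. The microphone coordinates, SPL values, grid, and choice of linear interpolation below are all invented for illustration and do not reproduce the study's mapping procedure.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# hypothetical 24-microphone array positions (m) and measured SPLs (dBA)
mic_xy = rng.uniform(0, 50, size=(24, 2))
spl = 55 + 10 * rng.random(24)

# common mapping grid
gx, gy = np.meshgrid(np.linspace(0, 50, 100), np.linspace(0, 50, 100))

def spl_map(points, values):
    # interpolate SPL onto the grid; nearest-neighbour fill outside the convex hull
    m = griddata(points, values, (gx, gy), method='linear')
    fill = griddata(points, values, (gx, gy), method='nearest')
    return np.where(np.isnan(m), fill, m)

full_map = spl_map(mic_xy, spl)
for n in (4, 8, 12, 16):
    idx = rng.choice(24, n, replace=False)
    sub_map = spl_map(mic_xy[idx], spl[idx])
    diff = float(np.mean(np.abs(sub_map - full_map)))
    print(n, 'mics: mean |difference| =', round(diff, 2), 'dBA/pixel')
```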
Detection of infrasound generated by Clear Air Turbulence (CAT)
NASA Technical Reports Server (NTRS)
Meredith, R.; Badavi, F.; Becher, J.
1981-01-01
A unified data acquisition system, which is an AM carrier system consisting of a converter, signal conditioning electronics, and peripheral equipment, was checked, calibrated, and installed in a mobile van. A microphone array in the shape of a 244-m equilateral triangle was connected to the data acquisition system using 457-m long cables. Development of techniques for signal processing for interpreting the infrasonic signature (differentiating between CAT and other sources of infrasound) is summarized. Once patches of CAT are located in the atmosphere, corroboration can be achieved through test flights of aircraft into the suspected region.
NASA Astrophysics Data System (ADS)
Revelle, Douglas O.; Whitaker, Rodney W.
1996-10-01
During the early morning of November 21, 1995, a fireball as bright as the full moon entered the atmosphere over southeastern Colorado and initially produced audible sonic boom reports from Texas to Wyoming. The event was detected locally by a security video camera which showed the reflection of the fireball event on the hood of a truck. The camera also recorded tree shadows cast by the light of the fireball. This recording includes the audio signal of a strong double boom as well. Subsequent investigation of the array near Los Alamos, New Mexico, operated by the Los Alamos National Laboratory as part of its commitment to the Comprehensive Test Ban Treaty negotiations, showed the presence of an infrasonic signal from the proper direction at about the correct time for this fireball. The Los Alamos array is a four-element infrasonic system in near-continuous operation on the laboratory property. The nominal spacing between the array elements is 212 m. The basic sensor is a Globe Universal Sciences Model 100C microphone whose response is flat from about 0.1 to 300 Hz (which we filter at the high frequency end to be limited to 20 Hz). Each low frequency microphone is connected to a set of twelve porous hoses to reduce wind noise. The characteristics of the observed signal include: an onset arrival time of 0939:20 UT (0239:20 MST), with a maximum timing uncertainty of plus or minus 2 minutes; a signal onset delay of 21 minutes, 20 seconds from the appearance of the fireball; a total signal duration of 2 minutes 10 seconds; a source location toward 31 degrees from true north; a horizontal trace velocity of 429 m/sec; a signal velocity of 0.29 plus or minus 0.03 km/sec, assuming a 375 km horizontal range to the fireball; dominant signal frequency content of 0.25 to 0.84 Hz (analyzed in the frequency interval from 0.2 to 2.0 Hz); a maximum signal cross-correlation of 0.97; and a maximum signal amplitude of 2.0 plus or minus 0.1 microbars. Also, on the basis of the signal period at maximum amplitude, we estimate a probable source energy for this event of between 10 and 100 tons of TNT (53.0 tons nominal).
Optical Fiber Infrasound Sensor Arrays: An Improved Alternative to Arrays of Rosette Wind Filters
NASA Astrophysics Data System (ADS)
Walker, Kristoffer; Zumberge, Mark; Dewolf, Scott; Berger, Jon; Hedlin, Michael
2010-05-01
A key difficulty in infrasound signal detection is the noise created by spatially incoherent turbulence that is usually present in wind. Increasing wind speeds correlate with increasing noise levels across the entire infrasound band. Optical fiber infrasound sensors (OFIS) are line microphones that instantaneously integrate pressure along their lengths with laser interferometry. Although the sensor has a very low noise floor, the promise of the sensor is in its effectiveness at reducing wind noise without the need for a network of interconnected pipes. We have previously shown that a single 90 m OFIS (spanning a line) is just as effective at reducing wind noise as a 70 m diameter rosette (covering a circular area). We have also empirically measured the infrasound response of the OFIS as a function of back azimuth, showing that it is well predicted by an analytical solution; the response is flat for broadside signals and similar to the rosette response for endfire signals. Using that analytical solution, we have developed beamforming techniques that permit the estimation of back azimuth using an array of OFIS arms, as well as an array deconvolution technique that accurately stacks weighted versions of the recordings to obtain the original infrasound signal. We show how a slight modification to traditional array processing techniques can also be used with OFIS arrays to determine back azimuth, even for signal-to-noise ratios much less than 1. Recently several improvements to the OFIS instrumentation have been achieved. We have made an important modification to our interferometric technique that makes the interferometer insensitive to ambient temperature fluctuation. We are also developing a continuous real-time calibration system, which may eliminate the need for periodic array calibration efforts. We also report progress in comparing a newly installed 270 m long OFIS at Piñon Flat Observatory (PFO) to a collocated 70 m rosette of the I57US array. Specifically, we compare hundreds of wind noise spectra and two Vandenberg rocket launch infrasound signals that were recorded by both systems. The 70-m diameter rosette (L2) was connected to a Chaparral Physics Model 50 microphone, which is usually more sensitive than the MB2000 microbarometer in the 1-10 Hz band. The data show that in low wind, the noise floor of the OFIS is the same as that of the Chaparral. However, in moderate wind (5 m/s) the OFIS attenuates wind noise at 1 Hz by 10 dB better than L2. Similarly, the two rocket launch signals that were recorded in the presence of 3-4 m/s wind confirm that the signal-to-noise ratio improvement is 10 dB at 1 Hz. This confirms that for each signal frequency and direction, there exists an OFIS length threshold above which an OFIS wind filter will always outperform a rosette in terms of recorded signal-to-noise ratio. The OFIS technology is proven and mature for observatory installations. Work is underway to make the technology more portable for remote, DC-powered deployments. A DC-powered, ruggedized OFIS array will be installed for microbarom research in Northern California during the spring of 2010. We seek collaborations with other researchers who are interested in evaluating or assisting in the further development of the OFIS technology.
An integrated analysis-synthesis array system for spatial sound fields.
Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao
2015-03-01
An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete-time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using plane-wave decomposition. The directions of arrival of the plane-wave components that comprise the sound field of interest are estimated by multiple signal classification. Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction, which suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and external radiation.
Effectiveness of the Directional Microphone in the Baha® Divino™
Oeding, Kristi; Valente, Michael; Kerckhoff, Jessica
2010-01-01
Background Patients with unilateral sensorineural hearing loss (USNHL) experience great difficulty listening to speech in noisy environments. A directional microphone (DM) could potentially improve speech recognition in this difficult listening environment. It is well known that DMs in behind-the-ear (BTE) and custom hearing aids can provide a greater signal-to-noise ratio (SNR) in comparison to an omnidirectional microphone (OM) to improve speech recognition in noise for persons with hearing impairment. Studies examining the DM in bone anchored auditory osseointegrated implants (Baha), however, have been mixed, with little to no benefit reported for the DM compared to an OM. Purpose The primary purpose of this study was to determine if there are statistically significant differences in the mean reception threshold for sentences (RTS in dB) in noise between the OM and DM in the Baha® Divino™. The RTS of these two microphone modes was measured utilizing two loudspeaker arrays (speech from 0° and noise from 180° or a diffuse eight-loudspeaker array) and with the better ear open or closed with an earmold impression and noise attenuating earmuff. Subjective benefit was assessed using the Abbreviated Profile of Hearing Aid Benefit (APHAB) to compare unaided and aided (Divino OM and DM combined) problem scores. Research Design A repeated measures design was utilized, with each subject counterbalanced to each of the eight treatment levels for three independent variables: (1) microphone (OM and DM), (2) loudspeaker array (180° and diffuse), and (3) better ear (open and closed). Study Sample Sixteen subjects with USNHL currently utilizing the Baha were recruited from Washington University’s Center for Advanced Medicine and the surrounding area. Data Collection and Analysis Subjects were tested at the initial visit if they entered the study wearing the Divino or after at least four weeks of acclimatization to a loaner Divino. The RTS was determined utilizing Hearing in Noise Test (HINT) sentences in the R-Space™ system, and subjective benefit was determined utilizing the APHAB. A three-way repeated measures analysis of variance (ANOVA) and a paired samples t-test were utilized to analyze results of the HINT and APHAB, respectively. Results Results revealed statistically significant differences within microphone (p < 0.001; directional advantage of 3.2 dB), loudspeaker array (p = 0.046; 180° advantage of 1.1 dB), and better ear conditions (p < 0.001; open ear advantage of 4.9 dB). Results from the APHAB revealed statistically and clinically significant benefit for the Divino relative to unaided on the subscales of Ease of Communication (EC) (p = 0.037), Background Noise (BN) (p < 0.001), and Reverberation (RV) (p = 0.005). Conclusions The Divino’s DM provides a statistically significant improvement in speech recognition in noise compared to the OM for subjects with USNHL. Therefore, it is recommended that audiologists consider selecting a Baha with a DM to provide improved speech recognition performance in noisy listening environments. PMID:21034701
Mission-Oriented Sensor Arrays and UAVs - a Case Study on Environmental Monitoring
NASA Astrophysics Data System (ADS)
Figueira, N. M.; Freire, I. L.; Trindade, O.; Simões, E.
2015-08-01
This paper presents a new concept of UAV mission design in geomatics, applied to the generation of thematic maps for a multitude of civilian and military applications. We discuss the architecture of Mission-Oriented Sensor Arrays (MOSA), proposed in Figueira et al. (2013), aimed at splitting and decoupling the mission-oriented part of the system (non-safety-critical hardware and software) from the aircraft control systems (safety-critical). As a case study, we present an environmental monitoring application for the automatic generation of thematic maps to track gunshot activity in conservation areas. The MOSA modeled for this application integrates information from a thermal camera and an on-the-ground microphone array. The use of microphone array technology is of particular interest in this paper. These arrays allow estimation of the direction-of-arrival (DOA) of incoming sound waves. Information about events of interest is obtained by fusing the data provided by the microphone array and captured by the UAV with information from the thermal image processing. Preliminary results show the feasibility of the on-the-ground sound processing array and the simulation of the main processing module, to be embedded into a UAV in future work. The main contributions of this paper are the proposed MOSA system, including concepts, models and architecture.
Morgenstern, Hai; Rafaely, Boaz; Zotter, Franz
2015-11-01
Spatial attributes of room acoustics have been widely studied using microphone and loudspeaker arrays. However, systems that combine both arrays, referred to as multiple-input multiple-output (MIMO) systems, have only been studied to a limited degree in this context. These systems can potentially provide a powerful tool for room acoustics analysis due to the ability to simultaneously control both arrays. This paper offers a theoretical framework for the spatial analysis of enclosed sound fields using a MIMO system comprising spherical loudspeaker and microphone arrays. A system transfer function is formulated in matrix form for free-field conditions, and its properties are studied using tools from linear algebra. The system is shown to have unit-rank, regardless of the array types, and its singular vectors are related to the directions of arrival and radiation at the microphone and loudspeaker arrays, respectively. The formulation is then generalized to apply to rooms, using an image source method. In this case, the rank of the system is related to the number of significant reflections. The paper ends with simulation studies, which support the developed theory, and with an extensive reflection analysis of a room impulse response, using the platform of a MIMO system.
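The unit-rank property of the free-field MIMO transfer matrix stated above can be checked numerically: under free-field, far-field assumptions the matrix is an outer product of a microphone-array steering vector and a loudspeaker-array radiation vector, so it has exactly one non-zero singular value. The array geometries, frequency, and directions in this sketch are arbitrary choices for illustration, not the paper's configuration.

```python
import numpy as np

c, f = 343.0, 1000.0                        # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c

rng = np.random.default_rng(1)
mic_pos = rng.normal(size=(32, 3)) * 0.05   # hypothetical microphone positions (m)
ls_pos = rng.normal(size=(12, 3)) * 0.10    # hypothetical loudspeaker positions (m)

def steering(positions, direction):
    """Plane-wave steering vector for a unit propagation direction."""
    return np.exp(-1j * k * positions @ direction)

doa = np.array([1.0, 0.0, 0.0])             # direction of arrival at the mic array
dor = np.array([0.0, 1.0, 0.0])             # direction of radiation at the speakers

# free-field transfer matrix as an outer product of the two steering vectors
H = np.outer(steering(mic_pos, doa), steering(ls_pos, dor).conj())

s = np.linalg.svd(H, compute_uv=False)
print(s[:3].round(6))   # one non-zero singular value -> unit rank
```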
Implementation issues of the nearfield equivalent source imaging microphone array
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen
2011-01-01
This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI), proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. The NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom for far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using the ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The technique proved effective in identifying the broadband and non-stationary noise produced by these sources.
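As a loose sketch of the multichannel inverse filtering at the heart of equivalent-source methods such as NESI, the snippet below reconstructs equivalent source strengths at a single frequency with Tikhonov-regularized least squares. The geometry, regularization constant, and monopole Green's function model are assumptions for the example and not the authors' formulation.

```python
import numpy as np

def equivalent_source_strengths(G, p, beta=1e-2):
    """Frequency-domain inverse filtering (Tikhonov-regularized least squares).

    G : (n_mics, n_sources) matrix of free-field Green's functions
    p : (n_mics,) measured complex pressures at one frequency
    """
    GH = G.conj().T
    return np.linalg.solve(GH @ G + beta * np.eye(G.shape[1]), GH @ p)

# toy geometry at 1 kHz: a line of mics 5 cm above a line of equivalent sources
c, f = 343.0, 1000.0
k = 2 * np.pi * f / c
mics = np.stack([np.linspace(0, 0.3, 8), np.zeros(8), 0.05 * np.ones(8)], axis=1)
srcs = np.stack([np.linspace(0, 0.3, 8), np.zeros(8), np.zeros(8)], axis=1)
r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
G = np.exp(-1j * k * r) / (4 * np.pi * r)

q_true = np.zeros(8, complex)
q_true[3] = 1.0
p = G @ q_true
print(np.abs(equivalent_source_strengths(G, p)).round(3))  # largest entry near index 3
```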
In vivo evaluation of mastication noise reduction for dual channel implantable microphone.
Woo, SeongTak; Jung, EuiSung; Lim, HyungGyu; Lee, Jang Woo; Seong, Ki Woong; Won, Chul Ho; Kim, Myoung Nam; Cho, Jin Ho; Lee, Jyung Hyun
2014-01-01
Input for fully implantable hearing devices (FIHDs) is provided by an implantable microphone under the skin of the temporal bone. However, the implanted microphone can be affected when the FIHD user chews. In this paper, a dual implantable microphone was designed that can filter out the noise from mastication. For the in vivo experiment, a fabricated microphone was implanted in a rabbit. Pure-tone sounds of 1 kHz were presented through a standard speaker while the rabbit was simultaneously given food. To evaluate noise reduction, the measured signals were processed using a MATLAB-based adaptive filter. To verify the proposed method, the correlation coefficients and signal-to-noise ratio before and after signal processing were calculated. Comparison of the results shows that the signal-to-noise ratio and correlation coefficient were enhanced by 6.07 dB and 0.529, respectively.
XV-15 Tiltrotor Low Noise Terminal Area Operations
NASA Technical Reports Server (NTRS)
Conner, David A.; Marcolini, Michael A.; Edwards, Bryan D.; Brieger, John T.
1998-01-01
Acoustic data have been acquired for the XV-15 tiltrotor aircraft performing a variety of terminal area operating procedures. This joint NASA/Bell/Army test program was conducted in two phases. During Phase 1 the XV-15 was flown over a linear array of microphones, deployed perpendicular to the flight path, at a number of fixed operating conditions. This documented the relative noise differences between the various conditions. During Phase 2 the microphone array was deployed over a large area to directly measure the noise footprint produced during realistic approach and departure procedures. The XV-15 flew approach profiles that culminated in IGE hover over a landing pad, then takeoffs from the hover condition back out over the microphone array. Results from Phase 1 identify noise differences between selected operating conditions, while those from Phase 2 identify differences in noise footprints between takeoff and approach conditions and changes in noise footprint due to variation in approach procedures.
Characteristics and measurement of supersonic projectile shock waves by a 32-microphone ring array
NASA Astrophysics Data System (ADS)
Chang, Ho; Wu, Yan-Chyuan; Tsung, Tsing-Tshih
2011-08-01
This paper discusses the characteristics of supersonic projectile shock waves in the muzzle region during firing of high explosive anti-tank (HEAT) and high explosive (HE) projectiles. HEAT projectiles are fired horizontally at a muzzle velocity of Mach 3.5 from a medium caliber tank gun equipped with a newly designed multi-perforated muzzle brake, whereas HE projectiles are fired at elevation angles at a muzzle velocity of Mach 2 from a large caliber howitzer equipped with a newly designed double-baffle muzzle brake. In the near field, pressure signatures of the N-wave generated by the projectiles are measured by a 32-microphone ring array wrapped in a cotton sheath. Records measured by the microphone array are used to demonstrate several key characteristics of the shock wave of a supersonic projectile. All measurements made in this study can serve as a significant reference for developing guns, tanks, or the chassis of fighting vehicles.
NASA Technical Reports Server (NTRS)
Horne, Clifton; Burnside, Nathan J.
2013-01-01
Aeroacoustic measurements of the 11% scale full-span AMELIA CESTOL model with leading- and trailing-edge slot blowing circulation control (CCW) wing were obtained during a recent test in the Arnold Engineering Development Center 40- by 80-Ft. Wind Tunnel at NASA Ames Research Center. Sound levels and spectra were acquired with seven in-flow microphones and a 48-element phased microphone array for a variety of vehicle configurations, CCW slot flow rates, and forward speeds. Corrections to the measurements and processing are in progress; however, the data from selected configurations presented in this report confirm good measurement quality and dynamic range over the test conditions. Array beamform maps at 40 kts tunnel speed show that the trailing edge flap source is dominant for most frequencies at flap angles of 0° and 60°. The overall sound level for the 60° flap was similar to the 0° flap for most slot blowing rates forward of 90° incidence, but was louder by up to 6 dB for downstream angles. At 100 kts, the in-flow microphone levels were louder than the sensor self-noise for the higher blowing rates, while passive and active background noise suppression methods for the microphone array revealed source levels as much as 20 dB lower than observed with the in-flow microphones.
NASA Astrophysics Data System (ADS)
Dehé, Alfons
2017-06-01
After decades of research and more than ten years of successful production in very high volumes, silicon MEMS microphones are mature and unbeatable in form factor and robustness. Audio applications such as video, noise cancellation and speech recognition are key differentiators in smart phones. Microphones with low self-noise enable those functions. Backplate-free microphones reach signal-to-noise ratios above 70 dB(A). This talk will describe the state-of-the-art MEMS technology of Infineon Technologies. An outlook on future technologies, such as the comb sensor microphone, will be given.
Use of a Microphone Phased Array to Determine Noise Sources in a Rocket Plume
NASA Technical Reports Server (NTRS)
Panda, J.; Mosher, R.
2010-01-01
A 70-element microphone phased array was used to identify noise sources in the plume of a solid rocket motor. An environment chamber was built and other precautions were taken to protect the sensitive condenser microphones from rain, thunderstorms and other environmental elements during prolonged stays at the outdoor test stand. A camera mounted at the center of the array was used to photograph the plume. In the first phase of the study the array was placed in an anechoic chamber for calibration and validation of the in-house Matlab(R)-based beamform software. It was found that the "advanced" beamform methods, such as CLEAN-SC, were only partially successful in identifying speaker sources placed closer than the Rayleigh criterion. To participate in the field test, all equipment was shipped to NASA Marshall Space Flight Center, where the elements of the array hardware were rebuilt around the test stand. The sensitive amplifiers and the data acquisition hardware were placed in a safe basement, and 100 m long cables were used to connect the microphones, Kulites and the camera. The array chamber and the microphones were found to withstand the environmental elements as well as the shaking from the noise generated by the rocket plume. The beamform map was superimposed on a photo of the rocket plume to readily identify the source distribution. It was found that the plume produced an exceptionally long noise source, more than 30 diameters in extent, over a large frequency range. The shock pattern created spatial modulation of the noise source. Interestingly, the concrete pad of the horizontal test stand was found to be a good acoustic reflector: the beamform map showed two distinct source distributions, the plume and its reflection from the pad. The array was found to be most effective in the frequency range of 2 kHz to 10 kHz. As expected, the classical beamform method excessively smeared the noise sources at lower frequencies and produced excessive side lobes at higher frequencies. The "advanced" beamform routine CLEAN-SC created a series of lumped sources which may be unphysical. We believe that the present effort is the first-ever attempt to directly measure the noise source distribution in a rocket plume.
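The classical beamforming referred to above can be sketched as frequency-domain cross-spectral beamforming with monopole steering vectors. The array layout, source position, grid, and frequency in this example are invented, and diagonal removal is included only as a common option, not as a statement of the authors' processing.

```python
import numpy as np

def conventional_beamform(csm, mic_pos, grid, freq, c=343.0, remove_diag=True):
    """Frequency-domain conventional beamforming of a cross-spectral matrix.

    csm     : (M, M) cross-spectral matrix at one frequency
    mic_pos : (M, 3) microphone positions
    grid    : (N, 3) candidate source positions
    """
    if remove_diag:
        csm = csm - np.diag(np.diag(csm))    # suppress uncorrelated self-noise
    k = 2 * np.pi * freq / c
    out = np.empty(len(grid))
    for n, g in enumerate(grid):
        r = np.linalg.norm(mic_pos - g, axis=1)
        v = np.exp(-1j * k * r) / r           # monopole steering vector
        v /= np.linalg.norm(v)
        out[n] = np.real(v.conj() @ csm @ v)
    return out

# usage sketch: one simulated monopole should produce a peak at its location
rng = np.random.default_rng(2)
mics = rng.uniform(-0.5, 0.5, size=(70, 3))
mics[:, 2] = 0.0
grid = np.stack([np.linspace(-2, 2, 41), np.zeros(41), 3 * np.ones(41)], axis=1)
src = np.array([0.5, 0.0, 3.0])
r = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * 2 * np.pi * 4000 / 343.0 * r) / r
bmap = conventional_beamform(np.outer(p, p.conj()), mics, grid, 4000.0)
print(grid[np.argmax(bmap), 0])   # expect a peak near x = 0.5 m
```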
Acoustic Imaging of Snowpack Physical Properties
NASA Astrophysics Data System (ADS)
Kinar, N. J.; Pomeroy, J. W.
2011-12-01
Measurements of snowpack depth, density, structure and temperature have often been conducted by the use of snowpits and invasive measurement devices. Previous research has shown that acoustic waves passing through snow are capable of measuring these properties. An experimental observation device (SAS2, System for the Acoustic Sounding of Snow) was used to autonomously send audible sound waves into the top of the snowpack and to receive and process the waves reflected from the interior and bottom of the snowpack. A loudspeaker and microphone array separated by an offset distance were suspended in the air above the surface of the snowpack. Sound waves produced from a loudspeaker as frequency-swept sequences and maximum length sequences were used as source signals. Up to 24 microphones measured the audible signal from the snowpack. The signal-to-noise ratio was compared between sequences in the presence of environmental noise contributed by wind and reflections from vegetation. Beamforming algorithms were used to reject spurious reflections and to compensate for movement of the sensor assembly during the time of data collection. A custom-designed circuit with digital signal processing hardware implemented an inversion algorithm to relate the reflected sound wave data to snowpack physical properties and to create a two-dimensional image of snowpack stratigraphy. The low-power-consumption circuit was powered by batteries and, through WiFi and Bluetooth interfaces, enabled the display of processed data on a mobile device. Acoustic observations were logged to an SD card after each measurement. The SAS2 system was deployed at remote field locations in the Rocky Mountains of Alberta, Canada. Acoustic snow property data were compared with data collected from gravimetric sampling, thermocouple arrays, radiometers and snowpit observations of density, stratigraphy and crystal structure. Aspects for further research and limitations of the acoustic sensing system are also discussed.
Uloza, Virgilijus; Padervinskis, Evaldas; Uloziene, Ingrida; Saferis, Viktoras; Verikas, Antanas
2015-09-01
The aim of the present study was to evaluate the reliability of measurements of acoustic voice parameters obtained simultaneously using oral and contact (throat) microphones and to investigate the utility of the combined use of these microphones for voice categorization. Voice samples of the sustained vowel /a/ obtained from 157 subjects (105 healthy and 52 pathological voices) were recorded in a soundproof booth simultaneously through two microphones: an oral AKG Perception 220 microphone (AKG Acoustics, Vienna, Austria) and a contact (throat) Triumph PC microphone (Clearer Communications, Inc, Burnaby, Canada) placed on the lamina of the thyroid cartilage. Acoustic voice signal data were measured for fundamental frequency, percent of jitter and shimmer, normalized noise energy, signal-to-noise ratio, and harmonic-to-noise ratio using Dr. Speech software (Tiger Electronics, Seattle, WA). The correlations of acoustic voice parameters in vocal performance were statistically significant and strong (r = 0.71-1.0) across the entire set of measurements obtained with the two microphones. When classifying into healthy and pathological voice classes, oral shimmer yielded a correct classification rate (CCR) of 75.2% and throat jitter a CCR of 70.7%. However, the combination of the throat and oral microphones allowed identification of a set of three voice parameters, throat signal-to-noise ratio, oral shimmer, and oral normalized noise energy, which provided a CCR of 80.3%. The measurements of acoustic voice parameters using a combination of oral and throat microphones proved to be reliable in clinical settings and demonstrated high CCRs when distinguishing between the healthy and pathological voice patient groups. Our study validates the suitability of the throat microphone signal for the task of automatic voice analysis for the purpose of voice screening. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
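The perturbation measures named above (jitter and shimmer) have standard local definitions that can be sketched from per-cycle period and peak-amplitude estimates. This is a generic illustration with invented numbers, not the Dr. Speech implementation used in the study.

```python
import numpy as np

def jitter_shimmer(periods, amplitudes):
    """Local jitter and shimmer (%) from per-cycle periods and peak amplitudes."""
    periods = np.asarray(periods, float)
    amplitudes = np.asarray(amplitudes, float)
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100.0
    shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes) * 100.0
    return jitter, shimmer

# toy example: a 150 Hz voice with ~0.5 % cycle-to-cycle period perturbation
rng = np.random.default_rng(7)
T0 = 1 / 150.0
periods = T0 * (1 + 0.005 * rng.standard_normal(200))
amps = 1.0 + 0.03 * rng.standard_normal(200)
print([round(v, 2) for v in jitter_shimmer(periods, amps)])
```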
Quantifying Errors in Jet Noise Research Due to Microphone Support Reflection
NASA Technical Reports Server (NTRS)
Nallasamy, Nambi; Bridges, James
2002-01-01
The reflection coefficient of a microphone support structure used in jet noise testing is documented through tests performed in the anechoic AeroAcoustic Propulsion Laboratory. The tests involve the acquisition of acoustic data from a microphone mounted in the support structure while noise is generated from a known broadband source. The ratio of reflected signal amplitude to the original signal amplitude is determined by performing an auto-correlation on the data. The documentation of the reflection coefficients is one component of the validation of jet noise data acquired using the given microphone support structure. Finally, two forms of acoustic material were applied to the microphone support structure to determine their effectiveness in reducing reflections, which give rise to bias errors in the microphone measurements.
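The autocorrelation check described above can be sketched as follows: a broadband signal plus a delayed, scaled copy of itself produces a secondary autocorrelation peak whose lag gives the extra path length and whose normalized height is related to the reflection amplitude. The sample rate, delay, and reflection amplitude here are invented for the example.

```python
import numpy as np

fs = 50_000                         # sample rate (Hz), assumed for the example
rng = np.random.default_rng(3)
direct = rng.normal(size=fs)        # 1 s of broadband noise as the direct field

delay, refl = 120, 0.2              # hypothetical reflection: 120-sample lag, 20 % amplitude
sig = direct.copy()
sig[delay:] += refl * direct[:-delay]

# autocorrelation via FFT (zero-padded to avoid circular wrap), positive lags only
n = 2 * len(sig)
ac = np.fft.irfft(np.abs(np.fft.rfft(sig, n)) ** 2)[:len(sig)]
ac /= ac[0]

lag = np.argmax(ac[10:]) + 10       # skip the zero-lag main lobe
print(lag, round(float(ac[lag]), 3))  # expect lag ~ 120, height ~ refl/(1+refl**2) ~ 0.19
```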
Microphone Array Phased Processing System (MAPPS): Version 4.0 Manual
NASA Technical Reports Server (NTRS)
Watts, Michael E.; Mosher, Marianne; Barnes, Michael; Bardina, Jorge
1999-01-01
A processing system has been developed to meet increasing demands for detailed noise measurement of individual model components. The Microphone Array Phased Processing System (MAPPS) uses graphical user interfaces to control all aspects of data processing and visualization. The system uses networked parallel computers to provide noise maps at selected frequencies in a near real-time testing environment. The system has been successfully used in the NASA Ames 7- by 10-Foot Wind Tunnel.
Methods for determining infrasound phase velocity direction with an array of line sensors.
Walker, Kristoffer T; Zumberge, Mark A; Hedlin, Michael A H; Shearer, Peter M
2008-10-01
Infrasound arrays typically consist of several microbarometers separated by distances that provide predictable signal time separations, forming the basis for processing techniques that estimate the phase velocity direction. The directional resolution depends on the noise level and is proportional to the number of these point sensors; additional sensors help attenuate noise and improve direction resolution. An alternative approach is to form an array of directional line sensors, each of which emulates a line of many microphones that instantaneously integrate pressure change. The instrument response is a function of the orientation of the line with respect to the signal wavefront. Real data recorded at the Piñon Flat Observatory in southern California and synthetic data show that this spectral property can be exploited with multiple line sensors to determine the phase velocity direction with a precision comparable to a larger aperture array of microbarometers. Three types of instrument-response-dependent beamforming and an array deconvolution technique are evaluated. The results imply that an array of five radial line sensors, with equal azimuthal separation and an aperture that depends on the frequency band of interest, provides directional resolution while requiring less space compared to an equally effective array of five microbarometers with rosette wind filters.
NASA Technical Reports Server (NTRS)
Martin, Ruth M.; Splettstoesser, W. R.; Elliott, J. W.; Schultz, K.-J.
1988-01-01
Acoustic data are presented from a 40 percent scale model of the 4-bladed BO-105 helicopter main rotor, measured in the large European aeroacoustic wind tunnel, the DNW. Rotor blade-vortex interaction (BVI) noise data in the low speed flight range were acquired using a traversing in-flow microphone array. The experimental apparatus, testing procedures, calibration results, and experimental objectives are fully described. A large representative set of averaged acoustic signals is presented.
Acoustic centering of sources measured by surrounding spherical microphone arrays.
Hagai, Ilan Ben; Pollow, Martin; Vorländer, Michael; Rafaely, Boaz
2011-10-01
The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods. © 2011 Acoustical Society of America
Partial differential equation-based localization of a monopole source from a circular array.
Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa
2013-10-01
Wave source localization from a sensor array has long been one of the most active research topics in both theory and application. In this paper, an explicit and time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From them, nearly closed-form, single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluation and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.
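The circular Fourier coefficients (phase-mode measurements) that the method above builds on can be computed from a uniform circular array with a simple DFT across the microphones. The sketch below is a generic illustration with an assumed radius, frequency, and far-field plane wave; it recovers a source direction from the first-order modes and is not the authors' weighted-integral inversion itself.

```python
import numpy as np

M = 16                                   # microphones on a uniform circle
phi = 2 * np.pi * np.arange(M) / M       # microphone angles
a, c, f = 0.1, 343.0, 1000.0             # radius (m), sound speed, frequency (assumed)
k = 2 * np.pi * f / c

# simulated pressure on the circle from a far-field source at 40 degrees
theta = np.deg2rad(40.0)
p = np.exp(1j * k * a * np.cos(phi - theta))

# circular Fourier coefficients (phase modes) via a DFT over the microphones
C = {m: np.mean(p * np.exp(-1j * m * phi)) for m in range(-M // 2, M // 2)}

# for a plane wave, C_m = j^m J_m(ka) e^{-j m theta}; the first-order modes give theta
est = 0.5 * np.angle(C[-1] / C[1])
print(round(float(np.rad2deg(est)), 2))  # ~40 degrees
```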
Adaptive Wiener filtering for improved acquisition of distortion product otoacoustic emissions.
Ozdamar, O; Delgado, R E; Rahman, S; Lopez, C
1998-01-01
An innovative acoustic noise canceling method using adaptive Wiener filtering (AWF) was developed for improved acquisition of distortion product otoacoustic emissions (DPOAEs). The system used one microphone placed in the test ear for the primary signal. Noise reference signals were obtained from three different sources: (a) pre-stimulus response from the test ear microphone, (b) post-stimulus response from a microphone placed near the head of the subject and (c) post-stimulus response obtained from a microphone placed in the subject's nontest ear. In order to improve spectral estimation, block averaging of a different number of single sweep responses was used. DPOAE data were obtained from 11 ears of healthy newborns in a well-baby nursery of a hospital under typical noise conditions. Simultaneously obtained recordings from all three microphones were digitized, stored and processed off-line to evaluate the effects of AWF with respect to DPOAE detection and signal-to-noise ratio (SNR) improvement. Results show that compared to standard DPOAE processing, AWF improved signal detection and improved SNR.
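A generic time-domain adaptive noise canceller with a reference microphone, in the spirit of the scheme above, is sketched below using an NLMS update rather than the block-averaged adaptive Wiener filter the study used. The filter length, step size, and simulated noise-leakage path are assumptions for illustration only.

```python
import numpy as np

def nlms_cancel(primary, reference, taps=32, mu=0.5, eps=1e-8):
    """Adaptive noise cancellation: predict the noise in `primary` from the
    reference microphone and subtract it; the error is the enhanced output."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]   # most recent sample first
        e = primary[n] - w @ x                    # error = desired-signal estimate
        w += mu * e * x / (x @ x + eps)           # NLMS weight update
        out[n] = e
    return out

# toy example: a weak 1 kHz "emission" buried in noise that leaks into the primary mic
rng = np.random.default_rng(4)
fs = 8000
t = np.arange(2 * fs) / fs
noise = rng.normal(size=t.size)
leak = np.convolve(noise, [0.6, 0.3, 0.1])[:t.size]   # unknown causal noise path
tone = 0.05 * np.sin(2 * np.pi * 1000 * t)
enhanced = nlms_cancel(tone + leak, noise)
# residual noise variance drops by roughly two orders of magnitude after adaptation
print(round(float(np.var(leak)), 3), round(float(np.var((enhanced - tone)[fs:])), 5))
```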
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2002-11-01
It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as ''speech'' or ''non-speech'' by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
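The CSP (cross-power spectrum phase) coefficients referred to above are essentially the GCC-PHAT correlation between a microphone pair, whose peak lag gives the time delay and hence a direction of arrival. The sketch below is a minimal two-microphone version with an assumed geometry, not the paper's coefficient-addition method over a full array.

```python
import numpy as np

def csp_delay(x1, x2, fs):
    """CSP (GCC-PHAT) estimate of the delay of x2 relative to x1, in seconds."""
    N = len(x1)
    n = 2 * N
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X2 * X1.conj()
    csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)   # phase-only correlation
    csp = np.concatenate((csp[-N:], csp[:N]))                # reorder to lags -N..N-1
    return (np.argmax(csp) - N) / fs

# toy example: two microphones 0.2 m apart, source 30 degrees off broadside
fs, c, d = 16000, 343.0, 0.2
true_delay = d * np.sin(np.deg2rad(30.0)) / c
rng = np.random.default_rng(5)
s = rng.normal(size=fs)
x1, x2 = s, np.roll(s, int(round(true_delay * fs)))
tau = csp_delay(x1, x2, fs)
doa = np.rad2deg(np.arcsin(np.clip(tau * c / d, -1, 1)))
print(round(float(doa), 1))   # ~30 degrees, quantized to the sample grid
```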
Measurement of Phased Array Point Spread Functions for Use with Beamforming
NASA Technical Reports Server (NTRS)
Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis
2011-01-01
Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented here investigates direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that the experimental measurements trend with theory with regard to source offset. The source shows the expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
Antarctic atmospheric infrasound. Final technical report, 1 July 1981-30 September 1984
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, C.R.; McKibben, B.N.
1986-11-01
In order to monitor atmospheric infrasonic waves in the passband from 0.1 to 0.01 Hz, a digital infrasonic detection system was installed in Antarctica on the Ross Ice Shelf near McMurdo Station on McMurdo Sound. An array of seven infrasonic microphones subtending an area of about 35 sq km was operated in Windless Bight. The analog microphone data were telemetered to McMurdo Station, where the infrasonic data were digitized and subjected to on-line real-time analysis to detect traveling infrasonic waves with periods from 10 to 100 seconds. During the period of operation of the Antarctic infrasonic observatory, hundreds of infrasonic signals were detected in association with many natural sources such as the aurora australis, marine storm sea-air interactions, volcanic eruptions, mountain generated lee-wave effects, large meteors and auroral electrojet supersonic motions.
One of many microphones arrayed under the path of the F-5E SSBE aircraft to record sonic booms
2004-01-13
One of many microphones arrayed under the path of the F-5E SSBE (Shaped Sonic Boom Experiment) aircraft to record sonic booms. The SSBE was formerly known as the Shaped Sonic Boom Demonstration, or SSBD, and is part of DARPA's Quiet Supersonic Platform (QSP) program. On August 27, 2003, the F-5E SSBD aircraft demonstrated a method to reduce the intensity of sonic booms.
Valente, Michael; Mispagel, Karen M; Tchorz, Juergen; Fabry, David
2006-06-01
Differences in performance between omnidirectional and directional microphones were evaluated under two loudspeaker conditions (single loudspeaker at 180 degrees; diffuse, using eight loudspeakers set 45 degrees apart) and with two types of noise (steady-state HINT noise; R-Space restaurant noise). Twenty-five participants were fit bilaterally with Phonak Perseo hearing aids using the manufacturer's recommended procedure. After the participants had worn the hearing aids for one week, the parameters were fine-tuned based on subjective comments. Four weeks later, differences in performance between omnidirectional and directional microphones were assessed using HINT sentences presented at 0 degrees with the two types of background noise held constant at 65 dBA and under the two loudspeaker conditions. Results revealed significant differences in Reception Thresholds for Sentences (RTS in dB), where directional performance was significantly better than omnidirectional. Performance in the 180 degrees condition was significantly better than in the diffuse condition, and performance was significantly better using the HINT noise in comparison to the R-Space restaurant noise. In addition, results revealed that within each loudspeaker array, performance was significantly better for the directional microphone. Looking across loudspeaker arrays, however, significant differences were not present in omnidirectional performance, but directional performance was significantly better in the 180 degrees condition than in the diffuse condition. These findings are discussed in terms of results reported in the past and counseling patients on the potential advantages of directional microphones as the listening situation and type of noise change.
Talker Localization Based on Interference between Transmitted and Reflected Audible Sound
NASA Astrophysics Data System (ADS)
Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji
In many engineering fields, the distance to a target is very important. General distance measurement methods use the time delay between transmitted and reflected waves, but it is difficult to estimate short distances this way. On the other hand, a method using phase interference to measure short distances has been known in the field of microwave radar. Therefore, we have proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between a microphone and a target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on distance estimation using phase interference. We extend the distance estimation method using phase interference to two microphones (a microphone array) in order to estimate the talker position. The proposed method can estimate the talker position by measuring the distance and direction between the target and the microphone array. In addition, the talker's speech is regarded as noise in the proposed method. Therefore, we also propose a combination of the proposed method and the CSP (Cross-power Spectrum Phase analysis) method, which is one of the DOA (Direction Of Arrival) estimation methods. We evaluated the performance of talker localization in real environments. The experimental results show the effectiveness of the proposed method.
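The interference between the direct and reflected waves described above ripples the received spectrum with a period of c/(2d), so the round-trip distance can be read from that ripple. The sketch below does this with a cepstrum-style transform of the log spectrum; the sample rate, reflection amplitude, and target distance are invented, and this is only one generic way to read the interference pattern, not the authors' estimator.

```python
import numpy as np

fs, c = 48000, 343.0
dist = 0.50                      # target distance (m), loudspeaker and mic co-located
delay = 2 * dist / c             # round-trip propagation time

rng = np.random.default_rng(6)
tx = rng.normal(size=fs)                         # broadband transmitted sound
lag = int(round(delay * fs))
rx = tx + 0.4 * np.roll(tx, lag)                 # received = direct + reflection

# the two paths ripple the power spectrum with period c/(2*dist); a Fourier
# transform of the log spectrum (a cepstrum) turns that ripple into a peak
spec = np.abs(np.fft.rfft(rx)) ** 2
ceps = np.abs(np.fft.irfft(np.log(spec + 1e-12)))
q = np.argmax(ceps[50:fs // 2]) + 50             # skip low quefrencies
print(round(q / fs * c / 2, 3))                  # estimated distance (m), ~0.5
```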
Chen, Yung-Yue
2018-05-08
Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded by the environmental noises surrounding mobile device users. Unfortunately, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. For these reasons, a methodology is systematically proposed to eliminate the effects of background noises on the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H₂ estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have shown that the proposed method is immune to random background noises, and noiseless speech can be obtained after executing the denoising process.
Sound source localization on an axial fan at different operating points
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes
2016-08-01
A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
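The compensation for the fan's motion described above interpolates the stationary ring-array data to a virtual array that co-rotates with the blades. The sketch below is a minimal linear-interpolation version of that idea for equally spaced microphones on a ring; the published method's interpolation details may differ, and the sample rate and speed in the usage comment are hypothetical.

```python
import numpy as np

def virtual_rotating_array(signals, fs, rpm):
    """Interpolate stationary ring-array signals to a co-rotating virtual array.

    signals : (M, T) array from M equally spaced microphones on a ring
    Returns an (M, T) array as seen by virtual microphones rotating at `rpm`.
    """
    M, T = signals.shape
    omega = 2 * np.pi * rpm / 60.0
    t = np.arange(T) / fs
    # rotation expressed as a fractional microphone index at each time sample
    shift = omega * t * M / (2 * np.pi)
    out = np.empty_like(signals)
    cols = np.arange(T)
    for m in range(M):
        pos = (m + shift) % M                  # virtual mic position along the ring
        lo = np.floor(pos).astype(int)
        hi = (lo + 1) % M
        w = pos - lo
        # linear interpolation between the two neighbouring stationary microphones
        out[m] = (1 - w) * signals[lo, cols] + w * signals[hi, cols]
    return out

# usage sketch (hypothetical numbers): 64 ring microphones sampled at 48 kHz,
# fan at 1500 rpm -> vra = virtual_rotating_array(signals, fs=48000, rpm=1500)
```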
Studies of infrasound propagation using the USArray seismic network (Invited)
NASA Astrophysics Data System (ADS)
Hedlin, M. A.; Degroot-Hedlin, C. D.; Walker, K. T.
2010-12-01
Although there are currently ~100 infrasound arrays worldwide, more than ever before, the station density is still insufficient to provide validation for detailed propagation modeling. Much structure in the atmosphere is short-lived and occurs at spatial scales much smaller than the average distance between infrasound stations. Relatively large infrasound signals can be observed on seismic channels due to coupling at the Earth's surface. Recent research, using data from the 70-km spaced 400-station USArray and other seismic network deployments, has shown the value of dense seismic network data for filling in the gaps between infrasound arrays. The dense sampling of the infrasound wavefield has allowed us to observe complete travel-time branches of infrasound signals and shed more light on the nature of infrasound propagation. We present early results from our studies of impulsive atmospheric sources, such as a series of UTTR rocket motor detonations in Utah. The Utah blasts have been well recorded by USArray seismic stations and infrasound arrays in Nevada and Washington State. Recordings of seismic signals from a series of six events in 2007 are used to pinpoint the shot times to < 1 second. Variations in the acoustic branches and signal arrival times at the arrays are used to probe variations in atmospheric structure. Although we currently use coupled signals, we anticipate studying dense acoustic network recordings as the USArray is currently being upgraded with infrasound microphones. These new sensors will allow us to make semi-continental scale network recordings of infrasound signals free of concerns about how the signals were modified when coupling from the atmosphere into the ground.
Airframe Noise from a Hybrid Wing Body Aircraft Configuration
NASA Technical Reports Server (NTRS)
Hutcheson, Florence V.; Spalt, Taylor B.; Brooks, Thomas F.; Plassman, Gerald E.
2016-01-01
A high fidelity aeroacoustic test was conducted in the NASA Langley 14- by 22-Foot Subsonic Tunnel to establish a detailed database of component noise for a 5.8% scale HWB aircraft configuration. The model has a modular design, which includes a drooped and a stowed wing leading edge, deflectable elevons, twin verticals, and a landing gear system with geometrically scaled wheel-wells. The model is mounted inverted in the test section and noise measurements are acquired at different streamwise stations from an overhead microphone phased array and from overhead and sideline microphones. Noise source distribution maps and component noise spectra are presented for airframe configurations representing two different approach flight conditions. Array measurements performed along the aircraft flyover line show the main landing gear to be the dominant contributor to the total airframe noise, followed by the nose gear, the inboard side-edges of the LE droop, the wing tip/LE droop outboard side-edges, and the side-edges of deployed elevons. Velocity dependence and flyover directivity are presented for the main noise components. Decorrelation effects from turbulence scattering on spectral levels measured with the microphone phased array are discussed. Finally, noise directivity maps obtained from the overhead and sideline microphone measurements for the landing gear system are provided for a broad range of observer locations.
NASA Astrophysics Data System (ADS)
Gover, Bradford Noel
The problem of hands-free speech pick-up is introduced, and it is identified how details of the spatial properties of the reverberant field may be useful for enhanced design of microphone arrays. From this motivation, a broadly-applicable measurement system has been developed for the analysis of the directional and spatial variations in reverberant sound fields. Two spherical, 32-element arrays of microphones are used to generate narrow beams over two different frequency ranges, together covering 300-3300 Hz. Using an omnidirectional loudspeaker as excitation in a room, the pressure impulse response in each of 60 steering directions is measured. Through analysis of these responses, the variation of arriving energy with direction is studied. The system was first validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively through numerical descriptors and qualitatively from plots of the arriving energy versus direction. The system was then used to measure the sound fields in several actual rooms. Through both qualitative and quantitative output, these sound fields were seen to be highly anisotropic, influenced greatly by the direct sound and early-arriving reflections. Furthermore, the rate of sound decay was not independent of direction, sound being absorbed more rapidly in some directions than in others. These results are discussed in the context of the original motivation, and methods for their application to enhanced speech pick-up using microphone arrays are proposed.
Seibert, Anna-Maria; Koblitz, Jens C.; Denzinger, Annette; Schnitzler, Hans-Ulrich
2015-01-01
The Barbastelle bat (Barbastella barbastellus) preys almost exclusively on tympanate moths. While foraging, this species alternates between two different signal types. We investigated whether these signals differ in emission direction or source level (SL) as assumed from earlier single microphone recordings. We used two different settings of a 16-microphone array to determine SL and sonar beam direction at various locations in the field. Both types of search signals had low SLs (81 and 82 dB SPL rms re 1 m) as compared to other aerial-hawking bats. These two signal types were emitted in different directions; type 1 signals were directed downward and type 2 signals upward. The angle between beam directions was approximately 70°. Barbastelle bats are able to emit signals through both the mouth and the nostrils. As mouth and nostrils are roughly perpendicular to each other, we conclude that type 1 signals are emitted through the mouth while type 2 signals and approach signals are emitted through the nose. We hypothesize that the “stealth” echolocation system of B. barbastellus is bifunctional. The more upward directed nose signals may be mainly used for search and localization of prey. Their low SL prevents an early detection by eared moths but comes at the expense of a strongly reduced detection range for the environment below the bat. The more downward directed mouth signals may have evolved to compensate for this disadvantage and may be mainly used for spatial orientation. We suggest that the possibly bifunctional echolocation system of B. barbastellus has been adapted to the selective foraging of eared moths and is an excellent example of a sophisticated sensory arms race between predator and prey. PMID:26352271
Measurements of Infrared and Acoustic Source Distributions in Jet Plumes
NASA Technical Reports Server (NTRS)
Agboola, Femi A.; Bridges, James; Saiyed, Naseem
2004-01-01
The aim of this investigation was to use linear phased array (LPA) microphones and infrared (IR) imaging to study the effects of advanced nozzle-mixing techniques on jet noise reduction. Several full-scale engine nozzles were tested at varying power cycles with the linear phased array set up parallel to the jet axis. The array consisted of 16 sparsely distributed microphones. The phased array microphone measurements were taken at a distance of 51.0 ft (15.5 m) from the jet axis, and the results were used to obtain relative overall sound pressure levels from one nozzle design to the other. The IR imaging system was used to acquire real-time dynamic thermal patterns of the exhaust jet from the nozzles tested. The IR camera measured the IR radiation from the nozzle exit to a distance of six fan diameters (X/D(sub FAN) = 6), along the jet plume axis. The images confirmed the expected jet plume mixing intensity, and the phased array results showed the differences in sound pressure level with respect to nozzle configurations. The results show the effects of changes to the exit nozzle configurations on both the flow mixing patterns and the radiant energy dissipation patterns. By comparing the results from these two measurements, a relationship between noise reduction and core/bypass flow mixing is demonstrated.
Response Identification in the Extremely Low Frequency Region of an Electret Condenser Microphone
Jeng, Yih-Nen; Yang, Tzung-Ming; Lee, Shang-Yin
2011-01-01
This study shows that a small electret condenser microphone connected to a notebook or a personal computer (PC) has a prominent response in the extremely low frequency region in a specific environment. It confines most acoustic waves within a tiny air cell as follows. The air cell is constructed by drilling a small hole in a digital versatile disk (DVD) plate. A small speaker and an electret condenser microphone are attached to the two sides of the hole. Thus, the acoustic energy emitted by the speaker and reaching the microphone is strong enough to actuate the diaphragm of the latter. The experiments showed that, once small air leakages are allowed on the margin of the speaker, the microphone captured the signal in the range of 0.5 to 20 Hz. Moreover, by removing the plastic cover of the microphone and attaching the microphone head to the vibration surface, the low frequency signal can be effectively captured too. Two examples are included to show the convenience of applying the microphone to pick up the low frequency vibration information of practical systems. PMID:22346594
Remote listening and passive acoustic detection in a 3-D environment
NASA Astrophysics Data System (ADS)
Barnhill, Colin
Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to not only look at the time and frequency characteristics of an audio signal but also the spatial characteristics of an audio signal. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
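For readers unfamiliar with the spherical harmonic decomposition (SHD) referred to above, the following is a minimal numerical sketch: pressure samples on a spherical array are encoded into order-limited spherical harmonic coefficients with a least-squares (pseudo-inverse) encoder and then combined into a single steered beam. The open-sphere assumption, the random 32-element geometry, the order limit of 3, and all variable names are illustrative assumptions, not details taken from the thesis.

```python
# Minimal SHD/beamforming sketch for a spherical microphone array (open
# sphere, least-squares encoder).  Geometry and order are illustrative.
import numpy as np
from scipy.special import sph_harm

def shd_encoder(azimuths, colatitudes, max_order):
    """Matrix Y of spherical harmonics at the microphone directions and its
    pseudo-inverse, which acts as the SHD encoder."""
    cols = []
    for n in range(max_order + 1):
        for m in range(-n, n + 1):
            # scipy convention: sph_harm(m, n, azimuth, colatitude)
            cols.append(sph_harm(m, n, azimuths, colatitudes))
    Y = np.stack(cols, axis=1)            # shape (num_mics, (N+1)^2)
    return Y, np.linalg.pinv(Y)

def beam_weights(az0, col0, max_order):
    """Order-limited steering weights for a chosen look direction."""
    w = []
    for n in range(max_order + 1):
        for m in range(-n, n + 1):
            w.append(np.conj(sph_harm(m, n, az0, col0)))
    return np.array(w)

# 32 pseudo-random directions stand in for the real array geometry.
rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 32)
col = np.arccos(rng.uniform(-1, 1, 32))
Y, encoder = shd_encoder(az, col, max_order=3)

p = rng.standard_normal((32, 1024))       # pressure snapshots (mics x time)
coeffs = encoder @ p                      # SHD coefficients per sample
beam = beam_weights(0.0, np.pi / 2, 3) @ coeffs   # beam steered to one direction
```

In practice the encoder would also include radial (mode-strength) equalization for a rigid-sphere array; that step is omitted here for brevity.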
Cigada, Alfredo; Lurati, Massimiliano; Ripamonti, Francesco; Vanali, Marcello
2008-12-01
This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in the beamforming technique. Beamforming's main disadvantages are poor spatial resolution at low frequency and the spatial aliasing problem at higher frequency, which leads to the identification of false sources. The idea is to move the microphone array during the measurement operation. In this paper, the proposed approach is theoretically and numerically investigated by means of simple sound propagation models, proving its efficiency in reducing the spatial aliasing. A number of different array configurations are numerically investigated together with the most important parameters governing this measurement technique. A set of numerical results concerning the case of a planar rotating array is shown, together with a first experimental validation of the method.
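As context for the aliasing problem targeted above, here is a small sketch of the standard half-wavelength spacing rule: grating lobes (false sources) appear above roughly f = c/(2d) for a regular microphone spacing d. The spacings below are illustrative only, and the paper's moving-array processing itself is not reproduced.

```python
# Spatial-aliasing onset frequency for a uniform microphone spacing d:
# f_alias = c / (2 d).  Halving the effective spacing (e.g., by interleaving
# two array positions) doubles the alias-free bandwidth.
c = 343.0  # speed of sound in air, m/s

def alias_frequency(spacing_m):
    return c / (2.0 * spacing_m)

print(alias_frequency(0.10))   # 10 cm spacing -> ~1.7 kHz
print(alias_frequency(0.05))   # effective 5 cm spacing -> ~3.4 kHz
```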
Dobbs, M A; Lueker, M; Aird, K A; Bender, A N; Benson, B A; Bleem, L E; Carlstrom, J E; Chang, C L; Cho, H-M; Clarke, J; Crawford, T M; Crites, A T; Flanigan, D I; de Haan, T; George, E M; Halverson, N W; Holzapfel, W L; Hrubes, J D; Johnson, B R; Joseph, J; Keisler, R; Kennedy, J; Kermish, Z; Lanting, T M; Lee, A T; Leitch, E M; Luong-Van, D; McMahon, J J; Mehl, J; Meyer, S S; Montroy, T E; Padin, S; Plagge, T; Pryke, C; Richards, P L; Ruhl, J E; Schaffer, K K; Schwan, D; Shirokoff, E; Spieler, H G; Staniszewski, Z; Stark, A A; Vanderlinde, K; Vieira, J D; Vu, C; Westbrook, B; Williamson, R
2012-07-01
A technological milestone for experiments employing transition edge sensor bolometers operating at sub-Kelvin temperature is the deployment of detector arrays with 100s-1000s of bolometers. One key technology for such arrays is readout multiplexing: the ability to read out many sensors simultaneously on the same set of wires. This paper describes a frequency-domain multiplexed readout system which has been developed for and deployed on the APEX-SZ and South Pole Telescope millimeter wavelength receivers. In this system, the detector array is divided into modules of seven detectors, and each bolometer within the module is biased with a unique ∼MHz sinusoidal carrier such that the individual bolometer signals are well separated in frequency space. The currents from all bolometers in a module are summed together and pre-amplified with superconducting quantum interference devices operating at 4 K. Room temperature electronics demodulate the carriers to recover the bolometer signals, which are digitized separately and stored to disk. This readout system contributes little noise relative to the detectors themselves, is remarkably insensitive to unwanted microphonic excitations, and provides a technology pathway to multiplexing larger numbers of sensors.
Novel Methods for Sensing Acoustical Emissions From the Knee for Wearable Joint Health Assessment.
Teague, Caitlin N; Hersek, Sinan; Toreyin, Hakan; Millard-Stafford, Mindy L; Jones, Michael L; Kogler, Geza F; Sawka, Michael N; Inan, Omer T
2016-08-01
We present the framework for wearable joint rehabilitation assessment following musculoskeletal injury. We propose a multimodal sensing (i.e., contact based and airborne measurement of joint acoustic emission) system for at-home monitoring. We used three types of microphones-electret, MEMS, and piezoelectric film microphones-to obtain joint sounds in healthy collegiate athletes during unloaded flexion/extension, and we evaluated the robustness of each microphone's measurements via: 1) signal quality and 2) within-day consistency. First, air microphones acquired higher quality signals than contact microphones (signal-to-noise-and-interference ratio of 11.7 and 12.4 dB for electret and MEMS, respectively, versus 8.4 dB for piezoelectric). Furthermore, air microphones measured similar acoustic signatures on the skin and 5 cm off the skin (∼4.5× smaller amplitude). Second, the main acoustic event during repetitive motions occurred at consistent joint angles (intra-class correlation coefficient ICC(1, 1) = 0.94 and ICC(1, k) = 0.99). Additionally, we found that this angular location was similar between right and left legs, with asymmetry observed in only a few individuals. We recommend using air microphones for wearable joint sound sensing; for practical implementation of contact microphones in a wearable device, interface noise must be reduced. Importantly, we show that airborne signals can be measured consistently and that healthy left and right knees often produce a similar pattern in acoustic emissions. These proposed methods have the potential for enabling knee joint acoustics measurement outside the clinic/lab and permitting long-term monitoring of knee health for patients rehabilitating an acute knee joint injury.
Ricketts, Todd A; Picou, Erin M
2013-09-01
This study aimed to evaluate the potential utility of asymmetrical and symmetrical directional hearing aid fittings for school-age children in simulated classroom environments. This study also aimed to evaluate speech recognition performance of children with normal hearing in the same listening environments. Two groups of school-age children 11 to 17 years of age participated in this study. Twenty participants had normal hearing, and 29 participants had sensorineural hearing loss. Participants with hearing loss were fitted with behind-the-ear hearing aids with clinically appropriate venting and were tested in 3 hearing aid configurations: bilateral omnidirectional, bilateral directional, and asymmetrical directional microphones. Speech recognition testing was completed in each microphone configuration in 3 environments: Talker-Front, Talker-Back, and Question-Answer situations. During testing, the location of the speech signal changed, but participants were always seated in a noisy, moderately reverberant classroom-like room. For all conditions, results revealed expected effects of directional microphones on speech recognition performance. When the signal of interest was in front of the listener, the bilateral directional microphone was best, and when the signal of interest was behind the listener, the bilateral omnidirectional microphone was best. Performance with asymmetric directional microphones was between the 2 symmetrical conditions. The magnitudes of directional benefits and decrements were not significantly correlated. Children with hearing loss performed similarly to their peers with normal hearing when fitted with directional microphones and the speech was from the front. In contrast, children with normal hearing still outperformed children with hearing loss if the speech originated from behind, even when the children were fitted with the optimal hearing aid microphone mode for the situation. Bilateral directional microphones can be effective in improving speech recognition performance for children in the classroom, as long as the child is facing the talker of interest. Bilateral directional microphones, however, can impair performance if the signal originates from behind a listener. These data suggest that the magnitude of decrement is not predictable from an individual's benefit. The results re-emphasize the importance of appropriate switching between microphone modes so children can take full advantage of directional benefits without being hurt by directional decrements. An asymmetric fitting limits decrements, but does not lead to maximum speech recognition scores when compared with the optimal symmetrical fitting. Therefore, the asymmetric mode may not be the best option as a default fitting for children in a classroom environment. While directional microphones improve performance for children with hearing loss, their performance in most conditions continues to be impaired relative to their normal-hearing peers, particularly when the signals of interest originate from behind or from an unpredictable location.
A dynamic multi-channel speech enhancement system for distributed microphones in a car environment
NASA Astrophysics Data System (ADS)
Matheja, Timo; Buck, Markus; Fingscheidt, Tim
2013-12-01
Supporting multiple active speakers in automotive hands-free or speech dialog applications is an interesting issue, not least for comfort reasons. Therefore, a multi-channel system for enhancement of speech signals captured by distributed distant microphones in a car environment is presented. Each of the potential speakers in the car has a dedicated directional microphone close to his position that captures the corresponding speech signal. The aim of the resulting overall system is twofold: On the one hand, a combination of an arbitrary pre-defined subset of speakers' signals can be performed, e.g., to create an output signal in a hands-free telephone conference call for a far-end communication partner. On the other hand, annoying cross-talk components from interfering sound sources occurring in multiple different mixed output signals are to be eliminated, motivated by the possibility of other hands-free applications being active in parallel. The system includes several signal processing stages. A dedicated signal processing block for interfering speaker cancellation attenuates the cross-talk components of undesired speech. Further signal enhancement comprises the reduction of residual cross-talk and background noise. Subsequently, a dynamic signal combination stage merges the processed single-microphone signals to obtain appropriate mixed signals at the system output that may be passed to applications such as telephony or a speech dialog system. Based on signal power ratios between the particular microphone signals, an appropriate speaker activity detection, and thereby a robust control mechanism for the whole system, is presented. The proposed system may be dynamically configured and has been evaluated for a car setup with four speakers sitting in the car cabin under various noise conditions.
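A minimal sketch of the kind of power-ratio based speaker activity detection described above: each seat microphone's smoothed frame power is compared against the loudest of the other microphones. The frame length, smoothing constant, 6 dB threshold, and function names are illustrative assumptions, not the paper's actual parameters.

```python
# Power-ratio speaker activity detection sketch for distributed car
# microphones (one microphone per seat).  All constants are illustrative.
import numpy as np

def speaker_activity(mics, frame=256, alpha=0.9, thresh_db=6.0):
    """mics: array of shape (num_mics, num_samples).
    Returns a boolean activity matrix of shape (num_mics, num_frames)."""
    num_mics, n = mics.shape
    num_frames = n // frame
    smoothed = np.zeros(num_mics)
    active = np.zeros((num_mics, num_frames), dtype=bool)
    for k in range(num_frames):
        seg = mics[:, k * frame:(k + 1) * frame]
        power = np.mean(seg ** 2, axis=1) + 1e-12
        smoothed = alpha * smoothed + (1 - alpha) * power   # recursive smoothing
        for i in range(num_mics):
            others = np.delete(smoothed, i)
            ratio_db = 10 * np.log10(smoothed[i] / others.max())
            active[i, k] = ratio_db > thresh_db             # seat i dominates
    return active
```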
Genescà, Meritxell; Svensson, U Peter; Taraldsen, Gunnar
2015-04-01
Ground reflections cause problems when estimating the direction of arrival of aircraft noise. In traditional methods, based on the time differences between the microphones of a compact array, they may cause a significant loss of accuracy in the vertical direction. This study evaluates the use of first-order directional microphones, instead of omnidirectional, with the aim of reducing the amplitude of the reflected sound. Such a modification allows the problem to be treated as in free field conditions. Although further tests are needed for a complete evaluation of the method, the experimental results presented here show that under the particular conditions tested the vertical angle error is reduced ∼10° for both jet and propeller aircraft by selecting an appropriate directivity pattern. It is also shown that the final level of error depends on the vertical angle of arrival of the sound, and that the estimates of the horizontal angle of arrival are not influenced by the directivity pattern of the microphones nor by the reflective properties of the ground.
Acoustic waveform of continuous bubbling in a non-Newtonian fluid.
Vidal, Valérie; Ichihara, Mie; Ripepe, Maurizio; Kurita, Kei
2009-12-01
We study experimentally the acoustic signal associated with a continuous bubble bursting at the free surface of a non-Newtonian fluid. Due to the fluid rheological properties, the bubble shape is elongated, and, when bursting at the free surface, acts as a resonator. For a given fluid concentration, at constant flow rate, repetitive bubble bursting occurs at the surface. We report a modulation pattern of the acoustic waveform through time. Moreover, we point out the existence of a precursor acoustic signal, recorded on the microphone array, previous to each bursting. The time delay between this precursor and the bursting signal is well correlated with the bursting signal frequency content. Their joint modulation through time is driven by the fluid rheology, which strongly depends on the presence of small satellite bubbles trapped in the fluid due to the yield stress.
Tran, Phuong K; Letowski, Tomasz R; McBride, Maranda E
2013-06-01
Speech signals can be converted into electrical audio signals using either a conventional air conduction (AC) microphone or a contact bone conduction (BC) microphone. The goal of this study was to investigate the effects of the location of a BC microphone on the intensity and frequency spectrum of the recorded speech. Twelve locations, 11 on the talker's head and 1 on the collar bone, were investigated. The speech sounds were three vowels (/u/, /a/, /i/) and two consonants (/m/, /∫/). The sounds were produced by 12 talkers. Each sound was recorded simultaneously with two BC microphones and an AC microphone. Analyzed spectral data showed that the BC recordings made at the forehead of the talker were the most similar to the AC recordings, whereas the collar bone recordings were the most different. Comparison of the spectral data with speech intelligibility data collected in another study revealed a strong negative relationship between BC speech intelligibility and the degree of deviation of the BC speech spectrum from the AC spectrum. In addition, the head locations that resulted in the highest speech intelligibility were associated with the lowest output signals among all tested locations. Implications of these findings for BC communication are discussed.
Impedance Eduction in Ducts with Higher-Order Modes and Flow
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.
2009-01-01
An impedance eduction technique, previously validated for ducts with plane waves at the source and duct termination planes, has been extended to support higher-order modes at these locations. Inputs for this method are the acoustic pressures along the source and duct termination planes, and along a microphone array located in a wall either adjacent or opposite to the test liner. A second impedance eduction technique is then presented that eliminates the need for the microphone array. The integrity of both methods is tested using three sound sources, six Mach numbers, and six selected frequencies. Results are presented for both a hardwall and a test liner (with known impedance) consisting of a perforated plate bonded to a honeycomb core. The primary conclusion of the study is that the second method performs well in the presence of higher-order modes and flow. However, the first method performs poorly when most of the microphones are located near acoustic pressure nulls. The negative effects of the acoustic pressure nulls can be mitigated by a judicious choice of the mode structure in the sound source. The paper closes by using the first impedance eduction method to design a rectangular array of 32 microphones for accurate impedance eduction in the NASA LaRC Curved Duct Test Rig in the presence of expected measurement uncertainties, higher order modes, and mean flow.
Traversing Microphone Track Installed in NASA Lewis' Aero-Acoustic Propulsion Laboratory Dome
NASA Technical Reports Server (NTRS)
Bauman, Steven W.; Perusek, Gail P.
1999-01-01
The Aero-Acoustic Propulsion Laboratory is an acoustically treated, 65-ft-tall dome located at the NASA Lewis Research Center. Inside this laboratory is the Nozzle Acoustic Test Rig (NATR), which is used in support of Advanced Subsonics Technology (AST) and High Speed Research (HSR) to test engine exhaust nozzles for thrust and acoustic performance under simulated takeoff conditions. Acoustic measurements had been gathered by a far-field array of microphones located along the dome wall and 10 ft above the floor. Recently, it became desirable to collect acoustic data for engine certifications (as specified by the Federal Aviation Administration (FAA)) that would simulate the noise of an aircraft taking off as heard from an offset ground location. Since nozzles for the High-Speed Civil Transport have straight sides that cause their noise signature to vary radially, an additional plane of acoustic measurement was required. Desired was an arched array of 24 microphones, equally spaced from the nozzle and each other, in a 25° off-vertical plane. The various research requirements made this a challenging task. The microphones needed to be aimed at the nozzle accurately and held firmly in place during testing, but it was also essential that they be easily and routinely lowered to the floor for calibration and servicing. Once serviced, the microphones would have to be returned to their previous location near the ceiling. In addition, there could be no structure between the microphones and the nozzle, and any structure near the microphones would have to be designed to minimize noise reflections. After many concepts were considered, a single arched truss structure was selected that would be permanently affixed to the dome ceiling and to one end of the dome floor.
Benefits of the fiber optic versus the electret microphone in voice amplification.
Kyriakou, Kyriaki; Fisher, Hélène R
2013-01-01
Voice disorders that result in reduced loudness may cause difficulty in communicating, socializing and participating in occupational activities. Amplification is often recommended in order to facilitate functional communication, reduce vocal load and avoid developing maladaptive compensatory behaviours. The most common microphone used with amplification systems is the electret microphone. One alternate form of microphone is the fiber optic microphone. To examine the benefits of the fiber optic (1190S) versus the electret (M04) microphone as measured by objective and subjective parameters in the amplification of a patient's voice with reduced loudness caused by neurological and/or respiratory-based problems. Eighteen patients with vocal fold paralysis, Parkinson's disease and/or chronic obstructive pulmonary disease (COPD) participated in the study. The study contained a measurement of intensity, amplitude perturbation and signal-to-noise ratio during a sustained vowel production and a measurement of intensity during conversation with the use of the two microphones simultaneously. It also included the completion of a questionnaire indicating the patient's satisfaction with each microphone. The fiber optic (1190S) microphone had better objective acoustic performance (i.e. lower amplitude perturbation, higher signal-to-noise ratio and higher intensity) than the electret (M04) microphone. It also had better patient subjective satisfaction (i.e. less conspicuousness, more voice clarity, less acoustic feedback, more loudness and more utilization) than the electret microphone. Patients with neurological and/or respiratory-based voice problems may more confidently and frequently use the fiber optic microphone to communicate, socialize and participate in occupational activities more easily. Speech-language pathologists may more confidently use or recommend the fiber optic microphone with amplification systems. © 2012 Royal College of Speech and Language Therapists.
Location of aerodynamic noise sources from a 200 kW vertical-axis wind turbine
NASA Astrophysics Data System (ADS)
Ottermo, Fredric; Möllerström, Erik; Nordborg, Anders; Hylander, Jonny; Bernhoff, Hans
2017-07-01
Noise levels emitted from a 200 kW H-rotor vertical-axis wind turbine have been measured using a microphone array at four different positions, each at a hub-height distance from the tower. The microphone array, comprising 48 microphones in a spiral pattern, allows for directional mapping of the noise sources in the range of 500 Hz to 4 kHz. The produced images indicate that most of the noise is generated in a narrow azimuth-angle range, compatible with the location where increased turbulence is known to be present in the flow, as a result of the previous passage of a blade and its support arms. It is also shown that a semi-empirical model for inflow-turbulence noise seems to produce noise levels of the correct order of magnitude, based on the amount of turbulence that could be expected from power extraction considerations.
1955-05-01
[Fragmented OCR text of a 1955 report; only partial content is recoverable: table-of-contents entries for Seismic Height of Burst Determination, Long Range Yield Determination, and Lead Sulphide Cells (Appendix A, Survey ...); text fragments noting that the original metro station, located near Array 3, was moved to the vicinity of the Camp Mercury Sewage Disposal Plant for Shots 9 and 10, and referring to wire communication channels between the Camp Desert microphone and Camp Mercury microphone array No. 1.]
NASA Astrophysics Data System (ADS)
Fischer, J.; Doolan, C.
2017-12-01
A method to improve the quality of acoustic beamforming in reverberant environments is proposed in this paper. The processing is based on a filtering of the cross-correlation matrix of the microphone signals obtained using a microphone array. The main advantage of the proposed method is that it does not require information about the geometry of the reverberant environment and thus it can be applied to any configuration. The method is applied to the particular example of aeroacoustic testing in a hard-walled low-speed wind tunnel; however, the technique can be used in any reverberant environment. Two test cases demonstrate the technique. The first uses a speaker placed in the hard-walled working section with no wind tunnel flow. In the second test case, an airfoil is placed in a flow and acoustic beamforming maps are obtained. The acoustic maps have been improved, as the reflections observed in the conventional maps have been removed after application of the proposed method.
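The authors' cross-correlation-matrix filter is not reproduced here; as a rough sketch of where a filter in that domain sits in the processing chain, the code below performs conventional frequency-domain beamforming with diagonal removal of the cross-spectral matrix, a standard way of suppressing contributions that are uncorrelated across microphones. Geometry, frequency, and all names are illustrative assumptions.

```python
# Conventional frequency-domain beamforming with cross-spectral matrix (CSM)
# diagonal removal, shown only as a stand-in for CSM/cross-correlation
# filtering prior to map formation.  Inputs are illustrative.
import numpy as np

def csm(frames):
    """frames: (num_mics, num_blocks) complex FFT bins at one frequency."""
    return frames @ frames.conj().T / frames.shape[1]

def beamform_map(C, mic_xyz, grid_xyz, freq, c=343.0, remove_diag=True):
    """Steered-response power over grid points for one frequency bin."""
    if remove_diag:
        C = C - np.diag(np.diag(C))          # drop autospectra (self-noise)
    k = 2 * np.pi * freq / c
    out = np.zeros(len(grid_xyz))
    for g, x in enumerate(grid_xyz):
        r = np.linalg.norm(mic_xyz - x, axis=1)
        steer = np.exp(-1j * k * r) / r      # spherical-wave steering vector
        steer /= np.linalg.norm(steer)
        out[g] = np.real(steer.conj() @ C @ steer)
    return out
```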
Infrasonic array observations at I53US of the 2006 Augustine Volcano eruptions
Wilson, C.R.; Olson, J.V.; Szuberla, Curt A.L.; McNutt, Steve; Tytgat, Guy; Drob, Douglas P.
2006-01-01
The recent January 2006 Augustine eruptions, from the 11th to the 28th, have produced a series of 12 infrasonic signals that were observed at the I53US array at UAF. The eruption times for the signals were provided by the Alaska Volcanic Observatory at UAF using seismic sensors and a Chaparral microphone that are installed on Augustine Island. The bearing and distance of Augustine from I53US are, respectively, 207.8 degrees and 675 km. The analysis of the signals is done with a least-squares detector/estimator that calculates, from the 28 different sensor-pairs in the array, the mean of the cross-correlation maxima (MCCM), the horizontal trace-velocity and the azimuth of arrival of the signal using a sliding-window of 2000 data points. The data were bandpass filtered from 0.03 to 0.10 Hz. The data are digitized at a rate of 20 Hz. The average values of the signal parameters for all 12 Augustine signals are as follows: MCCM=0.85 (std 0.14), Trace-velocity=0.346 (std 0.016) km/sec, Azimuth=209 (std 2) deg. The celerity for each signal was calculated using the range 675 km and the individual travel times to I53US. The average celerity for all ten eruption signals was 0.27 (std 0.02) km/sec. Ray tracing studies, using mean values of the wind speed and temperature profiles (along the path) from NRL, have shown that there was propagation to I53US by both stratospheric and thermospheric ray paths from the volcano.
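A sketch of the mean-of-cross-correlation-maxima (MCCM) statistic computed over a sliding window, in the spirit of the least-squares detector/estimator described above; the window hop, normalization, and function names are illustrative choices rather than the actual I53US processing parameters (apart from the 2000-sample window quoted in the abstract).

```python
# Sliding-window MCCM detection statistic for an infrasound array.
import numpy as np

def mccm(window):
    """window: (num_sensors, num_samples) of band-passed array data.
    Returns the mean of the normalized cross-correlation maxima over all
    sensor pairs."""
    num_sensors = window.shape[0]
    w = window - window.mean(axis=1, keepdims=True)
    maxima = []
    for i in range(num_sensors):
        for j in range(i + 1, num_sensors):
            cc = np.correlate(w[i], w[j], mode="full")
            norm = np.sqrt(np.dot(w[i], w[i]) * np.dot(w[j], w[j]))
            maxima.append(cc.max() / (norm + 1e-12))
    return np.mean(maxima)

def sliding_mccm(data, win=2000, step=200):
    """MCCM time series over overlapping windows of the array data."""
    return np.array([mccm(data[:, s:s + win])
                     for s in range(0, data.shape[1] - win, step)])
```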
Signal Processing and Interpretation Using Multilevel Signal Abstractions.
1986-06-01
[Fragmented OCR text; only partial content is recoverable:] ... mappings expressed in the Fourier domain. Previously proposed causal analysis techniques for diagnosis are based on the analysis of intermediate data ... can be processed either as individual one-dimensional waveforms or as multichannel data ... for source detection and direction ... microphone data. The signal processing for both spectral analysis of microphone signals and direction determination of acoustic sources involves ...
Electronic filters, hearing aids and methods
NASA Technical Reports Server (NTRS)
Engebretson, A. Maynard (Inventor)
1995-01-01
An electronic filter for an electroacoustic system. The system has a microphone for generating an electrical output from external sounds and an electrically driven transducer for emitting sound. Some of the sound emitted by the transducer returns to the microphone means to add a feedback contribution to its electrical output. The electronic filter includes a first circuit for electronic processing of the electrical output of the microphone to produce a first signal. An adaptive filter, interconnected with the first circuit, performs electronic processing of the first signal to produce an adaptive output to the first circuit to substantially offset the feedback contribution in the electrical output of the microphone, and the adaptive filter includes means for adapting only in response to polarities of signals supplied to and from the first circuit. Other electronic filters for hearing aids, public address systems and other electroacoustic systems, as well as such systems and methods of operating them are also disclosed.
Electronic filters, hearing aids and methods
NASA Technical Reports Server (NTRS)
Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor); Zheng, Baohua (Inventor)
1991-01-01
An electronic filter for an electroacoustic system. The system has a microphone for generating an electrical output from external sounds and an electrically driven transducer for emitting sound. Some of the sound emitted by the transducer returns to the microphone means to add a feedback contribution to its electrical output. The electronic filter includes a first circuit for electronic processing of the electrical output of the microphone to produce a filtered signal. An adaptive filter, interconnected with the first circuit, performs electronic processing of the filtered signal to produce an adaptive output to the first circuit to substantially offset the feedback contribution in the electrical output of the microphone, and the adaptive filter includes means for adapting only in response to polarities of signals supplied to and from the first circuit. Other electronic filters for hearing aids, public address systems and other electroacoustic systems, as well as such systems and methods of operating them are also disclosed.
Acoustic imaging of aircraft wake vortex dynamics
DOT National Transportation Integrated Search
2005-06-01
The experience in utilizing a phased microphone array to passively image aircraft wake vortices is highlighted. It is demonstrated that the array can provide visualization of wake dynamics similar to smoke release or natural condensation of vorti...
System for determining aerodynamic imbalance
NASA Technical Reports Server (NTRS)
Churchill, Gary B. (Inventor); Cheung, Benny K. (Inventor)
1994-01-01
A system is provided for determining tracking error in a propeller or rotor driven aircraft by determining differences in the aerodynamic loading on the propeller or rotor blades of the aircraft. The system includes a microphone disposed relative to the blades during the rotation thereof so as to receive separate pressure pulses produced by each of the blades during the passage thereof by the microphone. A low pass filter filters the output signal produced by the microphone, the low pass filter having an upper cut-off frequency set below the frequency at which the blades pass by the microphone. A sensor produces an output signal after each complete revolution of the blades, and a recording display device displays the outputs of the low pass filter and sensor so as to enable evaluation of the relative magnitudes of the pressure pulses produced by passage of the blades by the microphone during each complete revolution of the blades.
Acoustic/infrasonic rocket engine signatures
NASA Astrophysics Data System (ADS)
Tenney, Stephen M.; Noble, John M.; Whitaker, Rodney W.; ReVelle, Douglas O.
2003-09-01
Infrasonics offers the potential of long-range acoustic detection of explosions, missiles and even sounds created by manufacturing plants. The atmosphere attenuates acoustic energy above 20 Hz quite rapidly, but signals below 10 Hz can propagate to long ranges. Space shuttle launches have been detected infrasonically from over 1000 km away and the Concorde airliner from over 400 km. This technology is based on microphones designed to respond to frequencies from 0.1 to 300 Hz that can be operated outdoors for extended periods of time without degrading their performance. The US Army Research Laboratory and Los Alamos National Laboratory have collected acoustic and infrasonic signatures of static engine testing of two missiles. Signatures were collected of a SCUD missile engine at Huntsville, AL and a Minuteman engine at Edwards AFB. The engines were fixed vertically in a test stand during the burn. We will show the typical time waveform signals of these static tests and spectrograms for each type. High resolution, 24-bit data were collected at 512 Hz and 16-bit acoustic data at 10 kHz. Edwards data were recorded at 250 Hz and 50 Hz using a Geotech Instruments 24-bit digitizer. Ranges from the test stand varied from 1 km to 5 km. Low-level and upper-level meteorological data were collected to provide full details of atmospheric propagation during the engine test. Infrasonic measurements were made with the Chaparral Physics Model 2 microphone with porous garden hose attached for wind noise suppression. A B&K microphone was used for high frequency acoustic measurements. Results show primarily a broadband signal with distinct initiation and completion points. There appear to be features present in the signals that would allow identification of missile type. At 5 km the acoustic/infrasonic signal was clearly present. Detection ranges for the types of missile signatures measured will be predicted based on atmospheric modeling. As part of an experiment conducted by ARL, sounding rocket launches have been detected from over 150 km. A variety of rockets launched from NASA's Wallops Island facility were detected over a two-year span. Arrays of microphones were able to create a line of bearing to the source of the launches that took place during different times of the year. This same experiment has been able to detect the space shuttle from over 1000 km on a regular basis. These two sources represent opposite ends of the target-size range, but they do demonstrate the potential for the detection and location of rocket launches.
Analysis of jet-airfoil interaction noise sources by using a microphone array technique
NASA Astrophysics Data System (ADS)
Fleury, Vincent; Davy, Renaud
2016-03-01
The paper is concerned with the characterization of jet noise sources and jet-airfoil interaction sources by using microphone array data. The measurements were carried out in the anechoic open test section wind tunnel of Onera, Cepra19. The microphone array technique relies on the convected, Lighthill's and Ffowcs-Williams and Hawkings' acoustic analogy equation. The cross-spectrum of the source term of the analogy equation is sought. It is defined as the optimal solution to a minimal error equation using the measured microphone cross-spectra as reference. This inverse problem is, however, ill-posed. A penalty term based on a localization operator is therefore added to improve the recovery of jet noise sources. The analysis of isolated jet noise data in subsonic regime shows the contribution of the conventional mixing noise source in the low frequency range, as expected, and of uniformly distributed, uncorrelated noise sources in the jet flow at higher frequencies. In underexpanded supersonic regime, a shock-associated noise source is clearly identified, too. An additional source is detected in the vicinity of the nozzle exit both in supersonic and subsonic regimes. In the presence of the airfoil, the distribution of the noise sources is deeply modified. In particular, a strong noise source is localized on the flap. For high Strouhal numbers, higher than about 2 (based on the jet mixing velocity and diameter), a significant contribution from the shear-layer near the flap is observed, too. Indications of acoustic reflections on the airfoil are also discerned.
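This is not Onera's exact formulation, but a minimal sketch of a penalized inverse problem of the kind described above: grid source auto-powers q are estimated from a measured cross-spectral matrix C by regularized least squares, min_q ||C - A diag(q) A^H||^2 + lam ||q||^2, with a plain Tikhonov penalty standing in for the paper's localization-operator penalty. The steering matrix A, the grid, and the weight lam are assumptions.

```python
# Regularized least-squares estimate of grid source powers from a measured
# cross-spectral matrix; a generic stand-in for the paper's penalized
# minimal-error formulation.
import numpy as np

def inverse_source_map(C, A, lam=1e-2):
    """C: (M, M) measured CSM.  A: (M, G) steering vectors.
    Returns non-negative source auto-powers q of shape (G,)."""
    M, G = A.shape
    # Column g of B is vec(a_g a_g^H), so vec(C) ~ B @ q.
    B = np.stack([np.outer(A[:, g], A[:, g].conj()).ravel() for g in range(G)],
                 axis=1)
    rhs = B.conj().T @ C.ravel()
    lhs = B.conj().T @ B + lam * np.eye(G)      # Tikhonov-regularized normal eqs.
    q = np.linalg.solve(lhs, rhs).real
    return np.clip(q, 0.0, None)                # keep physically meaningful powers
```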
High Altitude Infrasound Measurements using Balloon-Borne Arrays
NASA Astrophysics Data System (ADS)
Bowman, D. C.; Johnson, C. S.; Gupta, R. A.; Anderson, J.; Lees, J. M.; Drob, D. P.; Phillips, D.
2015-12-01
For the last fifty years, almost all infrasound sensors have been located on the Earth's surface. A few experiments consisting of microphones on poles and tethered aerostats comprise the remainder. Such surface and near-surface arrays likely do not capture the full diversity of acoustic signals in the atmosphere. Here, we describe results from a balloon mounted infrasound array that reached altitudes of up to 38 km (the middle stratosphere). The balloon drifted at the ambient wind speed, resulting in a near total reduction in wind noise. Signals consistent with tropospheric turbulence were detected. A spectral peak in the ocean microbarom range (0.12 - 0.35 Hz) was present on balloon-mounted sensors but not on static infrasound stations near the flight path. A strong 18 Hz signal, possibly related to building ventilation systems, was observed in the stratosphere. A wide variety of other narrow band acoustic signals of uncertain provenance were present throughout the flight, but were absent in simultaneous recordings from nearby ground stations. Similar phenomena were present in spectrograms from the last balloon infrasound campaign in the 1960s. Our results suggest that the infrasonic wave field in the stratosphere is very different from that which is readily detectable on surface stations. This has implications for modeling acoustic energy transfer between the lower and upper atmosphere as well as the detection of novel acoustic signals that never reach the ground. Our work provides valuable constraints on a proposed mission to detect earthquakes on Venus using balloon-borne infrasound sensors.
Wheel/rail noise generated by a high-speed train investigated with a line array of microphones
NASA Astrophysics Data System (ADS)
Barsikow, B.; King, W. F.; Pfizenmaier, E.
1987-10-01
Radiated noise generated by a high-speed electric train travelling at speeds up to 250 km/h has been measured with a line array of microphones mounted along the wayside in two different orientations. The test train comprised a 103 electric locomotive, four Intercity coaches, and a dynamo coach. Some of the wheels were fitted with experimental wheel-noise absorbers. By using the directional capabilities of the array, the locations of the dominant sources of wheel/rail radiated noise were identified on the wheels. For conventional wheels, these sources lie near or on the rim at an average height of about 0.2 m above the railhead. The effects of wheel-noise absorbers and freshly turned treads on radiated noise were also investigated.
Locating and Quantifying Broadband Fan Sources Using In-Duct Microphones
NASA Technical Reports Server (NTRS)
Dougherty, Robert P.; Walker, Bruce E.; Sutliff, Daniel L.
2010-01-01
In-duct beamforming techniques have been developed for locating broadband noise sources on a low-speed fan and quantifying the acoustic power in the inlet and aft fan ducts. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. Several of the blades were modified to provide a broadband source to evaluate the efficacy of the in-duct beamforming technique. Phased arrays consisting of rings and line arrays of microphones were employed. For the imaging, the data were mathematically resampled in the frame of reference of the rotating fan. For both the imaging and power measurement steps, array steering vectors were computed using annular duct modal expansions, selected subsets of the cross spectral matrix elements were used, and the DAMAS and CLEAN-SC deconvolution algorithms were applied.
J-FLiC UAS Flights for Acoustic Testing Research
NASA Technical Reports Server (NTRS)
Motter, Mark A.; High, James W.
2016-01-01
The jet-powered flying testbed (J-FLiC) unmanned aircraft system (UAS) successfully completed twenty-six flights at Fort AP Hill, VA, from August 27 until September 3, 2015, supporting tests of a microphone array system for aircraft noise measurement. The test vehicles, J-FLiC NAVY2 (N508NU) and J-FLiC 4 (N509NU), were flown under manual and autopiloted control in a variety of test conditions: clean at speeds ranging from 80 to 150 knots; and full landing configuration at speeds ranging from 50 to 95 knots. During the test campaign, autopilot capability was incrementally improved to ultimately provide a high degree of accuracy and repeatability of the critical test requirements for airspeed, altitude, runway alignment and position over the microphone array. Manual flights were performed for test conditions at both ends of the speed envelope, where autopiloted flight would have required flight beyond visual range and more extensive developmental work. The research objectives of the campaign were fully achieved. The ARMD Integrated Systems Research Program (ISRP) Environmentally Responsible Aviation (ERA) Project aims to develop the enabling capabilities/technologies that will allow prediction/reduction of aircraft noise. A primary measurement tool for ascertaining and characterizing empirically the effectiveness of various noise reduction technologies is a microphone phased array system. Such array systems need to be vetted and certified for operational use via field deployments and overflights of the array with test aircraft, in this case with sUAS aircraft such as J-FLiC.
Atmospheric effects on microphone array analysis of aircraft vortex sound
DOT National Transportation Integrated Search
2006-05-08
This paper provides the basis of a comprehensive analysis of vortex sound propagation through the atmosphere in order to assess real atmospheric effects on acoustic array processing. Such effects may impact vortex localization accuracy and detect...
Uloza, Virgilijus; Padervinskis, Evaldas; Vegiene, Aurelija; Pribuisiene, Ruta; Saferis, Viktoras; Vaiciukynas, Evaldas; Gelzinis, Adas; Verikas, Antanas
2015-11-01
The objective of this study is to evaluate the reliability of acoustic voice parameters obtained using smart phone (SP) microphones and investigate the utility of SP voice recordings for voice screening. Voice samples of the sustained vowel /a/ obtained from 118 subjects (34 normal and 84 pathological voices) were recorded simultaneously through two microphones: an oral AKG Perception 220 microphone and an SP Samsung Galaxy Note3 microphone. Acoustic voice signal data were measured for fundamental frequency, jitter and shimmer, normalized noise energy (NNE), signal to noise ratio and harmonic to noise ratio using Dr. Speech software. Discriminant analysis-based Correct Classification Rate (CCR) and Random Forest Classifier (RFC) based Equal Error Rate (EER) were used to evaluate the feasibility of acoustic voice parameters classifying normal and pathological voice classes. The Lithuanian version of the Glottal Function Index (LT_GFI) questionnaire was utilized for self-assessment of the severity of voice disorder. The correlations of acoustic voice parameters obtained with the two types of microphones were statistically significant and strong (r = 0.73-1.0) for all measurements. When classifying into normal/pathological voice classes, the Oral-NNE revealed a CCR of 73.7% and the pair of SP-NNE and SP-shimmer parameters revealed a CCR of 79.5%. However, fusion of the results obtained from SP voice recordings and GFI data provided a CCR of 84.60%, and RFC revealed an EER of 7.9%. In conclusion, measurements of acoustic voice parameters using the SP microphone were shown to be reliable in clinical settings, demonstrating high CCR and low EER when distinguishing normal and pathological voice classes, and validated the suitability of the SP microphone signal for the task of automatic voice analysis and screening.
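For reference, a small sketch of the local jitter and shimmer measures named above, computed from already-extracted glottal periods and cycle peak amplitudes; the period and amplitude extraction itself (handled by Dr. Speech in the study) is outside this sketch, and the formulas are the common "local" definitions rather than necessarily those of that package.

```python
# Local (relative) jitter and shimmer from detected cycle data.
import numpy as np

def local_jitter(periods):
    """periods: successive fundamental periods in seconds."""
    p = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(p))) / np.mean(p)     # relative jitter

def local_shimmer(amplitudes):
    """amplitudes: successive cycle peak amplitudes."""
    a = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(a))) / np.mean(a)     # relative shimmer
```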
Veligdan, James T.
2000-01-11
An optical microphone includes a laser and beam splitter cooperating therewith for splitting a laser beam into a reference beam and a signal beam. A reflecting sensor receives the signal beam and reflects it in a plurality of reflections through sound pressure waves. A photodetector receives both the reference beam and reflected signal beam for heterodyning thereof to produce an acoustic signal for the sound waves. The sound waves vary the local refractive index in the path of the signal beam which experiences a Doppler frequency shift directly analogous with the sound waves.
Stanaćević, Milutin; Li, Shuo; Cauwenberghs, Gert
2016-07-01
A parallel micro-power mixed-signal VLSI implementation of independent component analysis (ICA) with reconfigurable outer-product learning rules is presented. With gradient sensing of the acoustic field over a miniature microphone array as a pre-processing method, the proposed ICA implementation can separate and localize up to 3 sources in a mildly reverberant environment. The ICA processor is implemented in 0.5 µm CMOS technology and occupies a 3 mm × 3 mm area. At a 16 kHz sampling rate, the ASIC consumes 195 µW power from a 3 V supply. The outer-product implementation of the natural gradient and Herault-Jutten ICA update rules demonstrates comparable performance to the benchmark FastICA algorithm in ideal conditions and more robust performance in noisy and reverberant environments. Experiments demonstrate perceptually clear separation and precise localization over a wide range of separation angles for two speech sources presented through speakers positioned at 1.5 m from the array on a conference room table. The presented ASIC leads to an extremely small form factor, low power consumption microsystem for the source separation and localization required in applications like intelligent hearing aids and wireless distributed acoustic sensor arrays.
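A software sketch of the natural-gradient outer-product ICA update mentioned above, W <- W + mu (I - f(y) y^T) W with y = W x; the tanh score function, learning rate, and sample-by-sample loop are generic textbook choices and do not describe the ASIC's circuitry.

```python
# Natural-gradient ICA with an outer-product update rule (software sketch).
import numpy as np

def natural_gradient_ica(X, mu=1e-3, epochs=20, seed=0):
    """X: (num_sources, num_samples) mixed signals (assumed zero mean).
    Returns the unmixing matrix W and the separated signals Y = W X."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    for _ in range(epochs):
        for t in range(X.shape[1]):
            x = X[:, t]
            y = W @ x
            # Outer-product natural-gradient update; tanh as the score function
            # for super-Gaussian (speech-like) sources.
            W += mu * (np.eye(n) - np.outer(np.tanh(y), y)) @ W
    return W, W @ X
```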
First Test of Fan Active Noise Control (ANC) Completed
NASA Technical Reports Server (NTRS)
2005-01-01
With the advent of ultrahigh-bypass engines, the space available for passive acoustic treatment is becoming more limited, whereas noise regulations are becoming more stringent. Active noise control (ANC) holds promise as a solution to this problem. It uses secondary (added) noise sources to reduce or eliminate the offending noise radiation. The first active noise control test on the low-speed fan test bed was a General Electric Company system designed to control either the exhaust or inlet fan tone. This system consists of a "ring source," an in-duct array of error microphones, and a control computer. Fan tone noise propagates in a duct in the form of spinning waves. These waves are detected by the microphone array, and the computer identifies their spinning structure. The computer then controls the "ring source" to generate waves that have the same spinning structure and amplitude, but are 180° out of phase with the fan noise. This computer-generated tone cancels the fan tone before it radiates from the duct and is heard in the far field. The "ring source" used in these tests is a cylindrical array of 16 flat-plate acoustic radiators that are driven by thin piezoceramic sheets bonded to their back surfaces. The resulting source can produce spinning waves up to mode 7 at levels high enough to cancel the fan tone. The control software is flexible enough to work on spinning mode orders from -6 to 6. In this test, the fan was configured to produce a tone of order 6. The complete modal (spinning and radial) structure of the tones was measured with two built-in sets of rotating microphone rakes. These rakes provide a measurement of the system performance independent from the control system error microphones. In addition, the far-field noise was measured with a semicircular array of 28 microphones. This test represents the first in a series of tests that demonstrate different active noise control concepts, each on a progressively more complicated modal structure. The tests are in preparation for a demonstration on a flight-type engine.
Helicopter noise experiments in an urban environment
DOT National Transportation Integrated Search
1974-08-01
In two series of helicopter noise experiments, sound-pressure-level recordings were made on the ground while a helicopter flew over (i) an array of microphones placed in an open field, and (ii) a similar array placed in the center of a city stree...
Optical microphone with fiber Bragg grating and signal processing techniques
NASA Astrophysics Data System (ADS)
Tosi, Daniele; Olivero, Massimo; Perrone, Guido
2008-06-01
In this paper, we discuss the realization of an optical microphone array using fiber Bragg gratings as sensing elements. The wavelength shift induced by acoustic waves perturbing the sensing Bragg grating is transduced into an intensity modulation. The interrogation unit is based on a fixed-wavelength laser source and, as receiver, a photodetector with proper amplification; the system has been implemented using devices for standard optical communications, achieving a low-cost interrogator. One of the advantages of the proposed approach is that no voltage-to-strain calibration is required for tracking dynamic shifts. The optical sensor is complemented by signal processing tools, including a data-dependent frequency estimator and adaptive filters, in order to improve the frequency-domain analysis and mitigate the effects of disturbances. The feasibility and performance of the optical system have been tested by measuring the output of a loudspeaker. With this configuration, the sensor is capable of correctly detecting sounds up to 3 kHz, with a frequency response that exhibits a top sensitivity within the range 200-500 Hz; single-frequency input sounds inducing an axial strain higher than ~10 nε are correctly detected. The repeatability range is ~0.1%. The sensor has also been applied to the detection of pulsed stimuli generated from a metronome.
NASA Astrophysics Data System (ADS)
Kim, Sungyoung; Martens, William L.
2005-04-01
By industry standard (ITU-R Recommendation BS.775-1), multichannel stereophonic signals within the frequency range of up to 80 or 120 Hz may be mixed and delivered via a single driver (e.g., a subwoofer) without significant impairment of stereophonic sound quality. The assumption that stereophonic information within such low-frequency content is not significant was tested by measuring discrimination thresholds for changes in interaural cross-correlation (IACC) within spectral bands containing the lowest frequency components of low-pitch musical tones. Performances were recorded of three different musical instruments playing single notes ranging in fundamental frequency from 41 Hz to 110 Hz. The recordings, made using a multichannel microphone array composed of five DPA 4006 pressure microphones, were processed to produce a set of stimuli that varied in interaural cross-correlation (IACC) within a low-frequency band, but were otherwise identical in a higher-frequency band. This correlation processing was designed to have minimal effect upon other psychoacoustic variables such as loudness and timbre. The results show that changes in interaural cross-correlation (IACC) within low-frequency bands of low-pitch musical tones are most easily discriminated when decorrelated signals are presented via subwoofers positioned at extreme lateral angles (far from the median plane). [Work supported by VRQ.]
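A brief sketch of the interaural cross-correlation coefficient (IACC) measure under test above: the maximum of the normalized cross-correlation between left- and right-ear signals over lags of about +/-1 ms, computed here on an already band-limited signal pair. The 1 ms lag window is the usual convention and is assumed rather than quoted from the abstract.

```python
# IACC over a +/-1 ms lag window for a band-limited binaural signal pair.
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """left, right: equal-length 1-D signals; fs: sample rate in Hz."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    l = left - left.mean()
    r = right - right.mean()
    full = np.correlate(l, r, mode="full")
    mid = len(full) // 2                          # zero-lag index
    window = full[mid - max_lag: mid + max_lag + 1]
    norm = np.sqrt(np.dot(l, l) * np.dot(r, r))
    return np.abs(window).max() / (norm + 1e-12)
```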
Trans Atlantic Infrasound Payload (TAIP) Operation Plan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Daniel; Lees, Jonathan M.
The Carolina Infrasound package, added as a piggyback to the 2016 ULDB flight, recorded unique acoustic signals such as the ocean microbarom and a large meteor. These data both yielded unique insights into the acoustic energy transfer from the lower to the upper atmosphere and highlighted the vast array of signals whose origins remain unknown. Now, the opportunity to fly a payload across the north Atlantic offers an opportunity to sample one of the most active ocean microbarom sources on Earth. Improvements in payload capabilities should result in characterization of the higher frequency range of the stratospheric infrasound spectrum as well. Finally, numerous large mining and munitions disposal explosions in the region may provide "ground truth" events for assessing the detection capability of infrasound microphones in the stratosphere. The flight will include three different types of infrasound sensors. One type is a pair of polarity-reversed InfraBSU microphones (standard for high altitude flights since 2016), another is a highly sensitive Chaparral 60 modified for a very low corner period, and the final sensor is a lightweight, low power Gem infrasound package. By evaluating these configurations against each other on the same flight, we will be able to optimize future campaigns with different sensitivity and mass constraints.
A Robust Sound Source Localization Approach for Microphone Array with Model Errors
NASA Astrophysics Data System (ADS)
Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong
In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can then be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
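The localization idea underlying the paper can be illustrated with a standard (unweighted, single-frequency) MUSIC scan over a planar grid of candidate near-field source positions; the weighting and model-error calibration of the actual W2D-MUSIC algorithm are not reproduced here, and the function and variable names are assumptions.

```python
import numpy as np

def music_2d_map(X, mic_pos, freq, grid_xy, c=343.0, n_src=1):
    # X: (n_mics, n_snapshots) complex narrowband snapshots at frequency `freq`.
    # mic_pos: (n_mics, 2) microphone coordinates in metres.
    # grid_xy: (n_points, 2) candidate source positions to scan.
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = V[:, :-n_src]                         # noise-subspace eigenvectors
    k = 2 * np.pi * freq / c
    spectrum = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        r = np.maximum(np.linalg.norm(mic_pos - p, axis=1), 1e-6)
        a = np.exp(-1j * k * r) / r            # near-field (spherical) steering
        a /= np.linalg.norm(a)
        spectrum[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return spectrum                            # peaks indicate source positions
```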
Microphone and electroglottographic data from dysphonic patients: type 1, 2 and 3 signals.
Behrman, A; Agresti, C J; Blumstein, E; Lee, N
1998-06-01
Recently, it has been suggested that statistics which are dependent upon the reliable extraction of a single fundamental period, such as jitter and shimmer, are valid only for nearly periodic signals. This study explored the incidence of nearly periodic and nonperiodic microphone and electroglottographic signals obtained from 202 dysphonic patients. It was found that approximately 42% were type 1 (nearly periodic); approximately 35% were type 2 (containing bifurcations, modulations or subharmonic structure); and approximately 22% were type 3 (chaotic). Discriminating between type 2 and 3 signals was very difficult for 40% of the signals which were ultimately rated type 3. This was due to the brevity of the apparently chaotic segment, and/or the persistence of some harmonic structure within the chaos. Irrespective of that difficulty, the results suggest that there may be a substantial incidence of non-type 1 signals in a given clinical population. It was concluded, therefore, that signal typing is a necessary step in the analysis of microphone and electroglottographic data.
Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems
NASA Technical Reports Server (NTRS)
Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan
2010-01-01
A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exist in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, small overall size, and light weight. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beamforming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when facing constraints on computational resources.
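The beamforming/multi-channel noise reduction step can be pictured with a minimal delay-and-sum sketch under a far-field assumption: each channel is delayed so that arrivals from the look direction line up, then the channels are averaged. This is a generic illustration, not the multichannel processor actually developed for the spacesuit system; the geometry and function names are assumptions.

```python
import numpy as np

def delay_and_sum(signals, mic_pos, look_dir, fs, c=343.0):
    # signals: (n_mics, n_samples); mic_pos: (n_mics, 3) in metres;
    # look_dir: unit vector pointing from the array toward the talker.
    delays = mic_pos @ look_dir / c    # earlier-arriving mics get larger delay
    delays -= delays.min()
    n = signals.shape[1]
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for x, tau in zip(signals, delays):
        X = np.fft.rfft(x) * np.exp(-2j * np.pi * f * tau)   # fractional delay
        out += np.fft.irfft(X, n)
    return out / len(signals)          # speech adds coherently, noise averages down
```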
NASA Astrophysics Data System (ADS)
Yousefian Jazi, Nima
Spatial filtering and directional discrimination have been shown to be an effective pre-processing approach for noise reduction in microphone array systems. In dual-microphone hearing aids, fixed and adaptive beamforming techniques are the most common solutions for enhancing the desired speech and rejecting unwanted signals captured by the microphones. In fact, beamformers are widely utilized in systems where the spatial properties of the target source (usually in front of the listener) are assumed to be known. In this dissertation, some dual-microphone coherence-based speech enhancement techniques applicable to hearing aids are proposed. All proposed algorithms operate in the frequency domain and (like traditional beamforming techniques) are based purely on the spatial properties of the desired speech source and do not require any knowledge of noise statistics for calculating the noise reduction filter. This benefit gives the algorithms the ability to address adverse noise conditions, such as situations where interfering talker(s) speak simultaneously with the target speaker. In such cases, (adaptive) beamformers lose their effectiveness in suppressing interference, since the noise channel (reference) cannot be built and updated accordingly. This difference is the main advantage of the proposed techniques over traditional adaptive beamformers. Furthermore, since the suggested algorithms are independent of noise estimation, they offer significant improvement in scenarios where the power level of the interfering sources is much higher than that of the target speech. The dissertation also shows that the premise behind the proposed algorithms can be extended and employed in binaural hearing aids. The main purpose of the investigated techniques is to enhance the intelligibility of speech, measured through subjective listening tests with normal-hearing and cochlear implant listeners. However, the improvement in quality of the output speech achieved by the algorithms is also presented to show that the proposed methods are potential candidates for future use in commercial hearing aids and cochlear implant devices.
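A rough sketch of the coherence idea follows: per time-frequency bin, the magnitude-squared coherence (MSC) between the two microphones stays near 1 for a single coherent (frontal) source and drops for diffuse noise or competing talkers, so it can be mapped directly to a gain without any noise estimate. This illustrates the general principle only, not the dissertation's specific estimators; the smoothing constant and gain mapping are assumptions.

```python
import numpy as np

def coherence_gain(X1, X2, alpha=0.9, floor=0.1):
    # X1, X2: (n_frames, n_bins) complex STFTs of the two hearing-aid microphones.
    P11 = P22 = P12 = None
    gains = np.empty(X1.shape)
    for t in range(X1.shape[0]):
        s11, s22 = np.abs(X1[t]) ** 2, np.abs(X2[t]) ** 2
        s12 = X1[t] * np.conj(X2[t])
        if P11 is None:
            P11, P22, P12 = s11, s22, s12
        else:  # recursive averaging of auto- and cross-spectra
            P11 = alpha * P11 + (1 - alpha) * s11
            P22 = alpha * P22 + (1 - alpha) * s22
            P12 = alpha * P12 + (1 - alpha) * s12
        msc = np.abs(P12) ** 2 / (P11 * P22 + 1e-12)   # magnitude-squared coherence
        gains[t] = np.maximum(msc, floor)              # coherent bins kept, diffuse bins attenuated
    return gains   # apply to the front-microphone STFT, e.g. X1 * gains
```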
NASA Technical Reports Server (NTRS)
Horne, William C.
2011-01-01
Measurements of background noise were recently obtained with a 24-element phased microphone array in the test section of the Arnold Engineering Development Center 80- by 120-Foot Wind Tunnel at speeds of 50 to 100 knots (25.7 to 51.4 m/s). The array was mounted in an aerodynamic fairing positioned with the array center 1.2 m from the floor and 16 m from the tunnel centerline. The array plate was mounted flush with the fairing surface as well as recessed 0.5 in. (1.27 cm) behind a porous Kevlar screen. Wind-off speaker measurements were also acquired every 15° on a 10 m semicircular arc to assess the directional resolution of the array with various processing algorithms, and to estimate minimum detectable source strengths for future wind tunnel aeroacoustic studies. The dominant background noise of the facility is from the six drive fans downstream of the test section and first set of turning vanes. Directional array response and processing methods such as background-noise cross-spectral-matrix subtraction suggest that sources 10-15 dB weaker than the background can be detected.
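Background-noise cross-spectral-matrix subtraction, mentioned above, amounts to forming the cross-spectral matrix (CSM) from narrowband snapshots, subtracting a background-only CSM acquired under matched conditions, and beamforming the difference. The lines below are a generic sketch of that bookkeeping; the variable names are assumptions, not the facility's processing chain.

```python
import numpy as np

def csm(X):
    # Cross-spectral matrix from narrowband snapshots X: (n_mics, n_snapshots).
    return X @ X.conj().T / X.shape[1]

def beamform_power(C, steering):
    # Conventional (delay-and-sum) beamformer power for one steering vector.
    a = steering / np.linalg.norm(steering)
    return np.real(a.conj() @ C @ a)

# Illustrative use (the arrays named here are placeholders):
#   C_total = csm(X_with_source)      # source plus facility background
#   C_bkg   = csm(X_background_only)  # background-only run at matched tunnel condition
#   p_map   = beamform_power(C_total - C_bkg, steering_vector)
```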
Measurement of Gravitational Acceleration Using a Computer Microphone Port
ERIC Educational Resources Information Center
Khairurrijal; Eko Widiatmoko; Srigutomo, Wahyu; Kurniasih, Neny
2012-01-01
A method has been developed to measure the swing period of a simple pendulum automatically. The pendulum position is converted into a signal frequency by employing a simple electronic circuit that detects the intensity of infrared light reflected by the pendulum. The signal produced by the electronic circuit is sent to the microphone port and…
Assessment of Microphone Phased Array for Measuring Launch Vehicle Lift-off Acoustics
NASA Technical Reports Server (NTRS)
Garcia, Roberto
2012-01-01
The specific purpose of the present work was to demonstrate the suitability of a microphone phased array for launch acoustics applications via participation in selected firings of the Ares I Scale Model Acoustics Test. The Ares I Scale Model Acoustics Test is a part of the discontinued Constellation Program Ares I Project, but the basic understanding gained from this test is expected to help development of the Space Launch System vehicles. Correct identification of sources not only improves the predictive ability, but provides guidance for a quieter design of the launch pad and optimization of the water suppression system. This document contains the results of the NASA Engineering and Safety Center assessment.
Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang
2018-03-27
Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for the sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the method proposed in this work employs multiple sub-bands, which are selected from the wideband signals with high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better in estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
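The band-selection step can be sketched by ranking frequency bins according to the magnitude-squared coherence between two array channels and keeping only the most coherent ones for the narrowband DOA estimator. This illustrates the selection idea, not the paper's implementation; the band count and segment length are assumptions.

```python
import numpy as np
from scipy.signal import coherence

def select_coherent_subbands(x_ref, x_other, fs, n_bands=8, nperseg=1024):
    # x_ref, x_other: time series from two MEMS microphones in the array.
    f, Cxy = coherence(x_ref, x_other, fs=fs, nperseg=nperseg)
    order = np.argsort(Cxy)[::-1]                  # most coherent bins first
    return f[order[:n_bands]], Cxy[order[:n_bands]]
```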
Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo
2013-04-01
Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be a very useful tool within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method to assess the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.
NASA Astrophysics Data System (ADS)
Bowman, Daniel C.; Albert, Sarah A.
2018-06-01
A variety of Earth surface and atmospheric sources generate low-frequency sound waves that can travel great distances. Despite a rich history of ground-based sensor studies, very few experiments have investigated the prospects of free floating microphone arrays at high altitudes. However, recent initiatives have shown that such networks have very low background noise and may sample an acoustic wave field that is fundamentally different than that at Earth's surface. The experiments have been limited to at most two stations at altitude, making acoustic event detection and localization difficult. We describe the deployment of four drifting microphone stations at altitudes between 21 and 24 km above sea level. The stations detected one of two regional ground-based chemical explosions as well as the ocean microbarom while travelling almost 500 km across the American Southwest. The explosion signal consisted of multiple arrivals; signal amplitudes did not correlate with sensor elevation or source range. The waveforms and propagation patterns suggest interactions with gravity waves at 35-45 km altitude. A sparse network method that employed curved wave front corrections was able to determine the backazimuth from the free flying network to the acoustic source. Episodic signals similar to those seen on previous flights in the same region were noted, but their source remains unclear. Background noise levels were commensurate with those on infrasound stations in the International Monitoring System below 2 s.
Gover, Bradford N; Ryan, James G; Stinson, Michael R
2002-11-01
A measurement system has been developed that is capable of analyzing the directional and spatial variations in a reverberant sound field. A spherical, 32-element array of microphones is used to generate a narrow beam that is steered in 60 directions. Using an omnidirectional loudspeaker as excitation, the sound pressure arriving from each steering direction is measured as a function of time, in the form of pressure impulse responses. By subsequent analysis of these responses, the variation of arriving energy with direction is studied. The directional diffusion and directivity index of the arriving sound can be computed, as can the energy decay rate in each direction. An analysis of the 32 microphone responses themselves allows computation of the point-to-point variation of reverberation time and of sound pressure level, as well as the spatial cross-correlation coefficient, over the extent of the array. The system has been validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively from the measures and qualitatively from plots of the arriving energy versus direction. It is anticipated that the system will be of value in evaluating the directional distribution of arriving energy and the degree of diffuseness of sound fields in rooms.
Point source moving above a finite impedance reflecting plane - Experiment and theory
NASA Technical Reports Server (NTRS)
Norum, T. D.; Liu, C. H.
1978-01-01
A widely used experimental version of the acoustic monopole consists of an acoustic driver of restricted opening forced by a discrete frequency oscillator. To investigate the effects of forward motion on this source, it was mounted above an automobile and driven over an asphalt surface at constant speed past a microphone array. The shapes of the received signal were compared to results computed from an analysis of a fluctuating-mass-type point source moving above a finite impedance reflecting plane. Good agreement was found between experiment and theory when a complex normal impedance representative of a fairly hard acoustic surface was used in the analysis.
2009-03-15
CAPE CANAVERAL, Fla. – In Firing Room 4 of the Launch Control Center at NASA's Kennedy Space Center in Florida, Center Director Bob Cabana (with microphone) congratulates the mission management team after the successful launch of space shuttle Discovery on the STS-119 mission. Launch was on time at 7:43 p.m. EDT. The STS-119 mission is the 28th to the space station and Discovery's 36th flight. Discovery will deliver the final pair of power-generating solar array wings and the S6 truss segment. Installation of S6 will signal the station's readiness to house a six-member crew for conducting increased science. Photo credit: NASA/Kim Shiflett
NASA Astrophysics Data System (ADS)
McKenna, Mihan H.; Stump, Brian W.; Hayward, Chris
2008-06-01
The Chulwon Seismo-Acoustic Array (CHNAR) is a regional seismo-acoustic array with co-located seismometers and infrasound microphones on the Korean peninsula. Data from forty-two days over the course of a year between October 1999 and August 2000 were analyzed; 2052 infrasound-only arrivals and 23 seismo-acoustic arrivals were observed over the six-week study period. A majority of the signals occur during local working hours, hour 0 to hour 9 UT, and appear to be the result of cultural activity located within a 250 km radius. Atmospheric modeling is presented for four sample days during the study period, one in each of November, February, April, and August. Local meteorological data sampled at six-hour intervals are needed to accurately model the observed arrivals, and these data produced highly temporally variable thermal ducts that propagated infrasound signals within 250 km, matching the temporal variation in the observed arrivals. These ducts change dramatically on the order of hours, and meteorological data from the appropriate sampled time frame were necessary to interpret the observed arrivals.
Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging
Izquierdo, Alberto; Villacorta, Juan José; del Val Puente, Lara; Suárez, Luis
2016-01-01
This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of MEMS sensors was performed, and the beam pattern of a module, based on an 8 × 8 planar array and of several clusters of modules, was obtained. A flexible framework, formed by an FPGA, an embedded processor, a computer desktop, and a graphic processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person are presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system. PMID:27727174
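Per frequency bin, the wideband FFT beamforming used for the acoustic images reduces to a steered sum over look directions; the sketch below forms such an image for a planar array at a single frequency under a far-field assumption. It is a generic illustration, not the paper's FPGA/GPU implementation, and all names are assumptions.

```python
import numpy as np

def acoustic_image(X, mic_xy, freq, az_grid, el_grid, c=343.0):
    # X: (n_mics,) complex microphone spectra at `freq` (one snapshot or an
    # averaged bin); mic_xy: (n_mics, 2) positions in the array plane (m);
    # az_grid, el_grid: look angles in radians.
    k = 2 * np.pi * freq / c
    img = np.zeros((len(el_grid), len(az_grid)))
    for i, el in enumerate(el_grid):
        for j, az in enumerate(az_grid):
            ux = np.cos(el) * np.cos(az)          # look direction projected
            uy = np.cos(el) * np.sin(az)          # onto the array plane
            a = np.exp(1j * k * (mic_xy[:, 0] * ux + mic_xy[:, 1] * uy))
            img[i, j] = np.abs(a.conj() @ X) ** 2 / len(X) ** 2
    return img                                    # beam power over the angle grid
```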
Infrasonic Stethoscope for Monitoring Physiological Processes
NASA Technical Reports Server (NTRS)
Zuckerwar, Allan J. (Inventor); Shams, Qamar A. (Inventor); Dimarcantonio, Albert L. (Inventor)
2018-01-01
An infrasonic stethoscope for monitoring physiological processes of a patient includes a microphone capable of detecting acoustic signals in the audible frequency bandwidth and in the infrasonic bandwidth (0.03 to 1000 Hertz), a body coupler attached to the body at a first opening in the microphone, a flexible tube attached to the body at a second opening in the microphone, and an earpiece attached to the flexible tube. The body coupler is capable of engagement with a patient to transmit sounds from the person to the microphone and then to the earpiece.
A low-cost 3-D printed stethoscope connected to a smartphone.
Aguilera-Astudillo, Carlos; Chavez-Campos, Marx; Gonzalez-Suarez, Alan; Garcia-Cordero, Jose L
2016-08-01
We demonstrate the fabrication of a digital stethoscope using a 3D printer and commercial off-the-shelf electronics. The chestpiece consists of an electret microphone embedded into the drum of a 3D-printed housing. An electronic dongle amplifies the signal from the microphone and reduces any external noise. It also adjusts the signal to the voltages accepted by the smartphone's headset jack. A graphical user interface programmed in Android displays the signals processed by the dongle. The application also saves the processed signal and sends it to a physician.
NASA Technical Reports Server (NTRS)
Martin, R. M.; Splettstoesser, W. R.; Elliott, J. W.; Schultz, K.-J.
1988-01-01
Acoustic data are presented from a 40 percent scale model of the four-bladed BO-105 helicopter main rotor, tested in a large aerodynamic wind tunnel. Rotor blade-vortex interaction (BVI) noise data in the low-speed flight range were acquired using a traversing in-flow microphone array. Acoustic results presented are used to assess the acoustic far field of BVI noise, to map the directivity and temporal characteristics of BVI impulsive noise, and to show the existence of retreating-side BVI signals. The characteristics of the acoustic radiation patterns, which can often be strongly focused, are found to be very dependent on rotor operating condition. The acoustic signals exhibit multiple blade-vortex interactions per blade with broad impulsive content at lower speeds, while at higher speeds, they exhibit fewer interactions per blade, with much sharper, higher amplitude acoustic signals. Moderate-amplitude BVI acoustic signals measured under the aft retreating quadrant of the rotor are shown to originate from the retreating side of the rotor.
Infrasound in the middle stratosphere measured with a free-flying acoustic array
NASA Astrophysics Data System (ADS)
Bowman, Daniel C.; Lees, Jonathan M.
2015-11-01
Infrasound recorded in the middle stratosphere suggests that the acoustic wavefield above the Earth's surface differs dramatically from the wavefield near the ground. In contrast to nearby surface stations, the balloon-borne infrasound array detected signals from turbulence, nonlinear ocean wave interactions, building ventilation systems, and other sources that have not been identified yet. Infrasound power spectra also bore little resemblance to spectra recorded on the ground at the same time. Thus, sensors on the Earth's surface likely capture a fraction of the true diversity of acoustic waves in the atmosphere. Future studies building upon this experiment may quantify the acoustic energy flux from the surface to the upper atmosphere, extend the capability of the International Monitoring System to detect nuclear explosions, and lay the observational groundwork for a recently proposed mission to detect earthquakes on Venus using free-flying microphones.
Infrasonic emissions from local meteorological events: A summary of data taken throughout 1984
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.
1986-01-01
Records of infrasonic signals, propagating through the Earth's atmosphere in the frequency band 2 to 16 Hz, were gathered on a three-microphone array at Langley Research Center throughout the year 1984. Digital processing of these records fulfilled three functions: time delay estimation, based on an adaptive filter; source location, determined from the time delay estimates; and source identification, based on spectral analysis. Meteorological support was provided by significant meteorological advisories, lightning locator plots, and daily reports from the Air Weather Service. The infrasonic data are organized into four characteristic signatures, one of which is believed to contain emissions from local meteorological sources. This class of signature prevailed only on those days when major global meteorological events appeared in or near the eastern United States. Eleven case histories are examined. Practical application of the infrasonic array in a low-level wind shear alert system is discussed.
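Adaptive-filter time delay estimation of the kind named above can be sketched with a normalized LMS filter whose converged impulse response peaks at the inter-channel delay. This is a generic LMS-TDE illustration, not the processor used in the study; the filter length and step size are assumptions.

```python
import numpy as np

def lms_delay_estimate(x, y, n_taps=65, mu=0.5):
    # Adapt an FIR filter mapping channel x to channel y; the index of the
    # largest converged coefficient approximates the delay in samples.
    w = np.zeros(n_taps)
    half = n_taps // 2
    xp = np.concatenate([np.zeros(n_taps - 1), x])   # zero pre-padding
    for n in range(half, len(x)):
        u = xp[n:n + n_taps][::-1]                   # u[k] = x[n - k]
        e = y[n - half] - w @ u                      # y delayed by half a window
        w += mu * e * u / (u @ u + 1e-12)            # normalized LMS update
    return int(np.argmax(np.abs(w))) - half          # delay of y relative to x (samples)
```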
Phased Array Noise Source Localization Measurements Made on a Williams International FJ44 Engine
NASA Technical Reports Server (NTRS)
Podboy, Gary G.; Horvath, Csaba
2010-01-01
A 48-microphone planar phased array system was used to acquire noise source localization data on a full-scale Williams International FJ44 turbofan engine. Data were acquired with the array at three different locations relative to the engine, two on the side and one in front of the engine. At the two side locations the planar microphone array was parallel to the engine centerline; at the front location the array was perpendicular to the engine centerline. At each of the three locations, data were acquired at eleven different engine operating conditions ranging from engine idle to maximum (takeoff) speed. Data obtained with the array off to the side of the engine were spatially filtered to separate the inlet and nozzle noise. Tones occurring in the inlet and nozzle spectra were traced to the low- and high-speed spools within the engine. The phased array data indicate that the Inflow Control Device (ICD) used during this test was not acoustically transparent; instead, some of the noise emanating from the inlet reflected off the inlet lip of the ICD. This reflection is a source of error for far-field noise measurements made during the test. The data also indicate that a total temperature rake in the inlet of the engine is a source of fan noise.
Morgenstern, Hai; Rafaely, Boaz; Noisternig, Markus
2017-03-01
Spherical microphone arrays (SMAs) and spherical loudspeaker arrays (SLAs) facilitate the study of room acoustics due to the three-dimensional analysis they provide. More recently, systems that combine both arrays, referred to as multiple-input multiple-output (MIMO) systems, have been proposed due to the added spatial diversity they facilitate. The literature provides frameworks for designing SMAs and SLAs separately, including error analysis from which the operating frequency range (OFR) of an array is defined. However, such a framework does not exist for the joint design of a SMA and a SLA that comprise a MIMO system. This paper develops a design framework for MIMO systems based on a model that addresses errors and highlights the importance of a matched design. Expanding on a free-field assumption, errors are incorporated separately for each array and error bounds are defined, facilitating error analysis for the system. The dependency of the error bounds on the SLA and SMA parameters is studied and it is recommended that parameters should be chosen to assure matched OFRs of the arrays in MIMO system design. A design example is provided, demonstrating the superiority of a matched system over an unmatched system in the synthesis of directional room impulse responses.
NASA Astrophysics Data System (ADS)
Fisher, Aileen
The term infrasound describes atmospheric sound waves with frequencies below 20 Hz, while acoustics are classified within the audible range of 20 Hz to 20 kHz. Infrasound and acoustic monitoring in the scientific community is hampered by low signal-to-noise ratios and a limited number of studies on regional and short-range noise and source characterization. The JASON Report (2005) suggests the infrasound community focus on more broad-frequency, observational studies within a tactical distance of 10 km. In keeping with that recommendation, this paper presents a study of regional and short-range atmospheric acoustic and infrasonic noise characterization at a desert site in West Texas, covering a broad frequency range of 0.2 to 100 Hz. To spatially sample the band, a large number of infrasound gauges was needed. A laboratory instrument analysis is presented of the set of low-cost infrasound sensors used in this study, manufactured by Inter-Mountain Laboratories (IML). The analysis includes spectra, transfer functions, and coherences to assess the stability and range of the gauges, and complements additional instrument testing by Sandia National Laboratories. The IMLs documented here have been found to be reliably coherent from 0.1 to 7 Hz without instrument correction. Corrections were built using corresponding time series from the commercially available and more expensive Chaparral infrasound gauge, so that the corrected IML outputs were able to closely mimic the Chaparral output. Arrays of gauges are needed for atmospheric sound signal processing. Our West Texas experiment consisted of a 1.5 km aperture, 23-gauge infrasound/acoustic array of IMLs, with a compact, 12 m diameter grid array of rented IMLs at the center. To optimize signal recording, the signal-to-noise ratio needs to be quantified with respect to both frequency band and coherence length. The higher-frequency grid array consisted of 25 microphones arranged in a five-by-five pattern with 3 m spacing, without spatial wind-noise filtering hoses or pipes. The grid was within the distance limits of a single gauge's normal hose array, and the data were used to perform a spatial noise correlation study. The highest correlation values were not found in the lower frequencies as anticipated, owing to a lack of sources in the lower range and the uncorrelated nature of wind noise. The highest values, with cross-correlation averages between 0.4 and 0.7 from 3 to 17 m between gauges, were found at night between 10 and 20 Hz, due to a continuous local noise source and low wind. Data from the larger array were used to identify continuous and impulsive signals in the area that comprise the ambient noise field. Ground-truth infrasound and acoustic data, with times and locations, were taken for a highway site, a wind farm, and a natural gas compressor. Close-range sound data were taken with a single IML "traveler" gauge. Spectrograms and spectrum peaks were used to identify their source signatures. Two regional location techniques were also tested with data from the large array, using a propane cannon as a controlled, impulsive source. A comparison is presented of the Multiple Signal Classification (MUSIC) algorithm and a simple, quadratic, circular-wavefront algorithm. MUSIC was unable to effectively separate noise and source eigenvalues and eigenvectors due to spatial aliasing of the propane cannon signal and a lack of incoherent noise. Only 33 out of 80 usable shots were located by MUSIC within 100 m.
Future work with the algorithm should focus on location of impulsive and continuous signals with development of methods for accurate separation of signal and noise eigenvectors in the presence of coherent noise and possible spatial aliasing. The circular wavefront algorithm performed better with our specific dataset and successfully located 70 out of 80 propane cannon shots within 100 m of the original location, 66 of which were within 20 m. This method has low computation requirements, making it well suited for real-time automated processing and smaller computers. Future research could focus on development of the method for an automated system and statistical impulsive noise filtering for higher accuracy.
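The circular-wavefront location idea can be illustrated as a least-squares fit of a curved wavefront to the measured arrival times, with the source coordinates and origin time as unknowns. This is a generic sketch, not the dissertation's exact quadratic solver; the sound speed and starting guess are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_circular_wavefront(sensor_xy, t_arrival, c=343.0):
    # sensor_xy: (n_sensors, 2) gauge positions (m); t_arrival: picked arrival
    # times (s). Unknowns: source position (x, y) and origin time t0.
    def residuals(p):
        x, y, t0 = p
        ranges = np.hypot(sensor_xy[:, 0] - x, sensor_xy[:, 1] - y)
        return c * (t_arrival - t0) - ranges
    x0 = [sensor_xy[:, 0].mean(), sensor_xy[:, 1].mean(), t_arrival.min()]
    return least_squares(residuals, x0).x      # (x, y, t0)
```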
Comment on "Acoustical observation of bubble oscillations induced by bubble popping"
NASA Astrophysics Data System (ADS)
Blanc, É.; Ollivier, F.; Antkowiak, A.; Wunenburger, R.
2015-03-01
We have reproduced the experiment of acoustic monitoring of spontaneous popping of single soap bubbles standing in air reported by Ding et al. [Phys. Rev. E 75, 041601 (2007), 10.1103/PhysRevE.75.041601]. By using a single microphone and two different signal acquisition systems recording the signal at the microphone output in parallel, among them the system used by Ding et al., we have shown experimentally that the acoustic precursors of bubble popping events detected by Ding et al. actually result from an acausal artifact of the signal processing performed by their acquisition system, which lies outside of its prescribed working frequency range. No acoustic precursor of popping could be evidenced with the microphone used in these experiments, whose sensitivity is 1 V/Pa and frequency range is 500 Hz-100 kHz.
NASA Astrophysics Data System (ADS)
Peng, Di; Wang, Shaofei; Liu, Yingzheng
2016-04-01
Fast pressure-sensitive paint (PSP) is very useful in flow diagnostics due to its fast response and high spatial resolution, but its applications in low-speed flows are usually challenging due to limitations of the paint's pressure sensitivity and the capability of high-speed imagers. The poor signal-to-noise ratio in low-speed cases makes it very difficult to extract useful information from the PSP data. In this study, unsteady PSP measurements were made on a flat plate behind a cylinder in a low-speed wind tunnel (flow speed from 10 to 17 m/s). Pressure fluctuations (ΔP) on the plate caused by vortex-plate interaction were recorded continuously by fast PSP (using a high-speed camera) and a microphone array. Power spectra of the pressure fluctuations and phase-averaged ΔP obtained from PSP and microphones were compared, showing good agreement in general. Proper orthogonal decomposition (POD) was used to reduce noise in the PSP data and extract the dominant pressure features. The PSP results reconstructed from selected POD modes were then compared to the pressure data obtained simultaneously with the microphone sensors. Based on the comparison of both instantaneous ΔP and the root-mean-square of ΔP, it was confirmed that POD analysis could effectively remove noise while preserving the instantaneous pressure information with good fidelity, especially for flows with strong periodicity. This technique extends the application range of fast PSP and can be a powerful tool for fundamental fluid mechanics research at low speed.
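The POD denoising step can be sketched as an SVD of the mean-removed snapshot matrix, keeping only the leading modes before reconstruction. This illustrates the standard snapshot-POD procedure rather than the paper's exact processing; the mode count is an assumption to be tuned against the microphone reference data.

```python
import numpy as np

def pod_denoise(p, n_modes=5):
    # p: (n_frames, n_pixels) fluctuating-pressure snapshots from the PSP images.
    p_mean = p.mean(axis=0)
    A = p - p_mean                                       # mean-removed snapshots
    U, s, Vt = np.linalg.svd(A, full_matrices=False)     # POD via SVD
    A_hat = U[:, :n_modes] * s[:n_modes] @ Vt[:n_modes]  # low-rank reconstruction
    return A_hat + p_mean
```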
Modeling high signal-to-noise ratio in a novel silicon MEMS microphone with comb readout
NASA Astrophysics Data System (ADS)
Manz, Johannes; Dehe, Alfons; Schrag, Gabriele
2017-05-01
Strong competition within the consumer market urges companies to constantly improve the quality of their devices. For silicon microphones, excellent sound quality is the key feature in this respect, which means that improving the signal-to-noise ratio (SNR), which is strongly correlated with sound quality, is a major task to fulfill the growing demands of the market. MEMS microphones with conventional capacitive readout suffer from noise caused by viscous damping losses arising from perforations in the backplate [1]. Therefore, we conceived a novel microphone design based on capacitive read-out via comb structures, which is expected to show a reduction in fluidic damping compared to conventional MEMS microphones. In order to evaluate the potential of the proposed design, we developed a fully energy-coupled, modular system-level model taking into account the mechanical motion, the slide-film damping between the comb fingers, the acoustic impact of the package, and the capacitive read-out. All submodels are physically based and scale with all relevant design parameters. We carried out noise analyses and, due to the modular and physics-based character of the model, were able to discriminate the noise contributions of different parts of the microphone. This enables us to identify design variants of this concept which exhibit an SNR of up to 73 dB(A). This is superior to conventional MEMS microphones and at least comparable to high-performance variants of the current state-of-the-art MEMS microphones [2].
Speech intelligibility in noise using throat and acoustic microphones.
Acker-Mills, Barbara E; Houtsma, Adrianus J M; Ahroon, William A
2006-01-01
Helicopter cockpits are very noisy and this noise must be reduced for effective communication. The standard U.S. Army aviation helmet is equipped with a noise-canceling acoustic microphone, but some ambient noise still is transmitted. Throat microphones are not sensitive to air molecule vibrations and thus, transmittal of ambient noise is reduced. It is possible that throat microphones could enhance speech communication in helicopters, but speech intelligibility with the devices must first be assessed. In the current study, speech intelligibility of signals generated by an acoustic microphone, a throat microphone, and by the combined output of the two microphones was assessed using the Modified Rhyme Test (MRT). Stimulus words were recorded in a reverberant chamber with ambient broadband noise intensity at 90 and 106 dBA. Listeners completed the MRT task in the same settings, thus simulating the typical environment of a rotary-wing aircraft. Results show that speech intelligibility is significantly worse for the throat microphone (average percent correct = 55.97) than for the acoustic microphone (average percent correct = 69.70), particularly for the higher noise level. In addition, no benefit is gained by simultaneously using both microphones. A follow-up experiment evaluated different consonants using the Diagnostic Rhyme Test and replicated the MRT results. The current results show that intelligibility using throat microphones is poorer than with the use of boom microphones in noisy and in quiet environments. Therefore, throat microphones are not recommended for use in any situation where fast and accurate speech intelligibility is essential.
Locating arbitrarily time-dependent sound sources in three dimensional space in real time.
Wu, Sean F; Zhu, Na
2010-08-01
This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moving in space, even when it moves behind the measurement microphones. Practical limitations of this method are discussed.
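The triangulation step can be sketched as a least-squares fit of the range differences implied by the measured time differences of arrival (TDOAs) relative to one reference microphone. The geometry below (one microphone at the origin and one on each orthogonal axis) and the spacing are illustrative assumptions, and the solver is generic rather than the paper's hybrid approach.

```python
import numpy as np
from scipy.optimize import least_squares

d = 0.3                                            # assumed microphone spacing (m)
MIC_POS = np.array([[0, 0, 0], [d, 0, 0], [0, d, 0], [0, 0, d]], float)

def locate_from_tdoa(tdoa, mic_pos=MIC_POS, c=343.0):
    # tdoa: arrival-time differences of microphones 1..3 relative to microphone 0 (s).
    def residuals(s):
        r = np.linalg.norm(mic_pos - s, axis=1)
        return (r[1:] - r[0]) - c * tdoa           # range-difference residuals
    return least_squares(residuals, x0=np.array([1.0, 1.0, 1.0])).x   # (x, y, z)
```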
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Mosher, Robert N.; Porter, Barry J.
2013-01-01
A 70-microphone, 10-foot by 10-foot phased array was built for use in the harsh environment of rocket launches. The array was set up at NASA Wallops launch pad 0A during a static test firing of Orbital Sciences' Antares engines, and again during the first launch of the Antares vehicle. It was placed 400 feet away from the pad and was hoisted on a scissor lift 40 feet above ground. The data sets provided unprecedented insight into rocket noise sources. The duct exit was found to be the primary source during the static test firing; the large amount of water injected beneath the nozzle exit and inside the plume duct quenched all other sources. The maps of the noise sources during launch were found to be time-dependent. As the engines came to full power and became louder, the primary source switched from the duct inlet to the duct exit. Further elevation of the vehicle caused spilling of the hot plume, resulting in a distributed noise map covering most of the pad. As the entire plume emerged from the duct, and the on-deck water system came to full power, the plume itself became the loudest noise source. These maps of the noise sources provide vital insight for optimization of sound suppression systems for future Antares launches.
Development of an automated speech recognition interface for personal emergency response systems
Hamill, Melinda; Young, Vicky; Boger, Jennifer; Mihailidis, Alex
2009-01-01
Background Demands on long-term-care facilities are predicted to increase at an unprecedented rate as the baby boomer generation reaches retirement age. Aging-in-place (i.e. aging at home) is the desire of most seniors and is also a good option to reduce the burden on an over-stretched long-term-care system. Personal Emergency Response Systems (PERSs) help enable older adults to age-in-place by providing them with immediate access to emergency assistance. Traditionally they operate with push-button activators that connect the occupant via speaker-phone to a live emergency call-centre operator. If occupants do not wear the push button or cannot access the button, then the system is useless in the event of a fall or emergency. Additionally, a false alarm or failure to check-in at a regular interval will trigger a connection to a live operator, which can be unwanted and intrusive to the occupant. This paper describes the development and testing of an automated, hands-free, dialogue-based PERS prototype. Methods The prototype system was built using a ceiling mounted microphone array, an open-source automatic speech recognition engine, and a 'yes' and 'no' response dialog modelled after an existing call-centre protocol. Testing compared a single microphone versus a microphone array with nine adults in both noisy and quiet conditions. Dialogue testing was completed with four adults. Results and discussion The microphone array demonstrated improvement over the single microphone. In all cases, dialog testing resulted in the system reaching the correct decision about the kind of assistance the user was requesting. Further testing is required with elderly voices and under different noise conditions to ensure the appropriateness of the technology. Future developments include integration of the system with an emergency detection method as well as communication enhancement using features such as barge-in capability. Conclusion The use of an automated dialog-based PERS has the potential to provide users with more autonomy in decisions regarding their own health and more privacy in their own home. PMID:19583876
An Electromechanical Model for the Cochlear Microphonic
NASA Astrophysics Data System (ADS)
Teal, Paul D.; Lineton, Ben; Elliott, Stephen J.
2011-11-01
The first of the many electrical signals generated in the ear, nerves, and brain in response to a sound incident on the ear is the cochlear microphonic (CM). The CM is generated by the hair cells of the cochlea, primarily the outer hair cells. The potentials of this signal are a nonlinearly filtered version of the acoustic pressure at the tympanic membrane. The CM signal has been used very little in recent years for clinical audiology and audiological research. This is because of uncertainty in interpreting the CM signal as a diagnostic measure, and also because of the difficulty of obtaining the signal, which has usually required the use of a transtympanic electrode. There are, however, several potential clinical and research applications for acquisition of the CM. To promote understanding of the CM, and its potential clinical application, a model is presented which can account for the generation of the cochlear microphonic signal. The model incorporates micro-mechanical and macro-mechanical aspects of previously published models of the basilar membrane and reticular lamina, as well as cochlear fluid mechanics, piezoelectric activity, and capacitance of the outer hair cells. It also models the electrical coupling of signals along the scalae.
NASA Astrophysics Data System (ADS)
Cerwin, Steve; Barnes, Julie; Kell, Scott; Walters, Mark
2003-09-01
This paper describes the development and application of a novel method to accomplish real-time solid-angle acoustic direction finding using two 8-element orthogonal microphone arrays. The developed prototype system was intended for localization and signature recognition of ground-based sounds from a small UAV. Recent advances in computer speeds have enabled the implementation of microphone arrays in many audio applications. Still, the real-time presentation of a two-dimensional sound field for the purpose of audio target localization is computationally challenging. In order to overcome this challenge, a crosspower spectrum phase (CSP) [1] technique was applied to each 8-element arm of a 16-element cross array to provide audio target localization. In this paper, we describe the technique and compare it with two other commonly used techniques, the Cross-Spectral Matrix [2] and MUSIC [3]. The results show that the CSP technique applied to two 8-element orthogonal arrays provides a computationally efficient solution with reasonable accuracy and tolerable artifacts, sufficient for real-time applications. Additional topics include development of a synchronized 16-channel transmitter and receiver to relay the airborne data to the ground-based processor and presentation of test data demonstrating both ground-mounted operation and airborne localization of ground-based gunshots and loud engine sounds.
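The crosspower spectrum phase technique is essentially what the wider literature calls GCC-PHAT: whiten the cross-spectrum of two channels so that only phase (delay) information remains, inverse-transform, and pick the peak. The sketch below is a generic single-pair implementation, not the 8-element real-time processor described in the paper.

```python
import numpy as np

def csp_delay(x, y, fs):
    # Returns the estimated arrival-time difference t_x - t_y in seconds.
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    S = X * np.conj(Y)
    S /= np.abs(S) + 1e-12                             # phase transform (whitening)
    cc = np.fft.irfft(S, n)
    half = n // 2
    cc = np.concatenate((cc[-half:], cc[:half + 1]))   # center zero lag
    lag = np.argmax(np.abs(cc)) - half
    return lag / fs
```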
Source Identification and Location Techniques
NASA Technical Reports Server (NTRS)
Weir, Donald; Bridges, James; Agboola, Femi; Dougherty, Robert
2001-01-01
Mr. Weir presented source location results obtained from an engine test as part of the Engine Validation of Noise Reduction Concepts program. Two types of microphone arrays were used in this program to determine the jet noise source distribution for the exhaust from a 4.3-bypass-ratio turbofan engine. One was a linear array of 16 microphones located on a 25 ft sideline and the other was a 103-microphone 3-D "cage" array in the near field of the jet. Data were obtained from a baseline nozzle and from numerous nozzle configurations using chevrons and/or tabs to reduce the jet noise. Mr. Weir presented data from two configurations: the baseline nozzle and a nozzle configuration with chevrons on both the core and bypass nozzles. This chevron configuration had achieved a jet noise reduction of 4 EPNdB in small-scale tests conducted at the Glenn Research Center. IR imaging showed that the chevrons produced significant improvements in mixing and greatly reduced the length of the jet potential core. Comparison of source location data from the 1-D phased array showed a shift of the noise sources toward the nozzle and clear reductions of the sources due to the noise reduction devices. Data from the 3-D array showed a single source at a frequency of 125 Hz, located several diameters downstream from the nozzle exit. At 250 and 400 Hz, multiple sources, periodically spaced, appeared to exist downstream of the nozzle. The trend of source location moving toward the nozzle exit with increasing frequency was also observed. The 3-D array data also showed a reduction in source strength with the addition of chevrons. The overall trend of source location with frequency was compared for the two arrays and with classical experience. Similar trends were observed. Although overall trends with frequency and the addition of suppression devices were consistent between the data from the 1-D and the 3-D arrays, a comparison of the details of the inferred source locations did show differences. A flight test is planned to determine if the hardware tested statically will achieve similar reductions in flight.
Characteristics of propeller noise on an aircraft fuselage related to interior noise transmission
NASA Technical Reports Server (NTRS)
Mixson, J. S.; Barton, C. K.; Piersol, A. G.; Wilby, J. F.
1979-01-01
Exterior noise was measured on the fuselage of a twin-engine, light aircraft at four values of engine rpm in ground static tests and at forward speeds up to 36 m/s in taxi tests. Propeller noise levels, spectra, and correlations were determined using a horizontal array of seven flush-mounted microphones and a vertical array of four flush-mounted microphones in the propeller plane. The measured levels and spectra are compared with predictions based on empirical and analytical methods for static and taxi conditions. Trace wavelengths of the propeller noise field, obtained from point-to-point correlations, are compared with the aircraft sidewall structural dimensions, and some analytical results are presented that suggest the sensitivity of interior noise transmission to variations of the propeller noise characteristics.
Leak locating microphone, method and system for locating fluid leaks in pipes
Kupperman, David S.; Spevak, Lev
1994-01-01
A leak detecting microphone inserted directly into fluid within a pipe includes a housing having a first end being inserted within the pipe and a second opposed end extending outside the pipe. A diaphragm is mounted within the first housing end and an acoustic transducer is coupled to the diaphragm for converting acoustical signals to electrical signals. A plurality of apertures are provided in the housing first end, the apertures located both above and below the diaphragm, whereby to equalize fluid pressure on either side of the diaphragm. A leak locating system and method are provided for locating fluid leaks within a pipe. A first microphone is installed within fluid in the pipe at a first selected location and sound is detected at the first location. A second microphone is installed within fluid in the pipe at a second selected location and sound is detected at the second location. A cross-correlation is identified between the detected sound at the first and second locations for identifying a leak location.
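The cross-correlation step in the described system reduces to a one-line geometric relation: the leak sound reaches the nearer microphone first, and the arrival-time difference maps to a position along the pipe between the two sensors. The sketch below is an illustrative textbook formula, not language from the patent; the wave speed must be known or calibrated for the particular fluid and pipe.

```python
def leak_position(delay_s, sensor_spacing_m, wave_speed_ms):
    # delay_s = t1 - t2: arrival time at microphone 1 minus arrival time at
    # microphone 2 (negative when the leak is closer to microphone 1).
    # Returns the leak distance from microphone 1 along the pipe.
    return 0.5 * (sensor_spacing_m + wave_speed_ms * delay_s)

# Example (assumed numbers): sensors 100 m apart, ~1400 m/s in a water-filled
# pipe, leak heard 20 ms earlier at microphone 1:
#   leak_position(-0.02, 100.0, 1400.0)  ->  36.0 m from microphone 1
```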
NASA Test Flights Examine Effect of Atmospheric Turbulence on Sonic Booms
2016-07-20
One of three microphone arrays positioned strategically along the ground at Edwards Air Force Base, California, sits ready to collect sound signatures from sonic booms created by a NASA F/A-18 during the SonicBAT flight series. The arrays collected the sound signatures of booms that had traveled through atmospheric turbulence before reaching the ground.
Characteristics of a new automatic hail recorder
NASA Astrophysics Data System (ADS)
Löffler-Mang, Martin; Schön, Dominik; Landry, Markus
2011-06-01
An automatic hail sensor was developed, based on signal generation with microphones, quick signal analysis, and an onboard recording capability. For this hail recorder (HARE), small piezo-electric microphones inside a Makrolon body are used to detect hailstones. The prototype has an octagonal shape, two microphones on the top and bottom plates situated in the middle of the device, and an electronic board. A hailstone striking the surface produces waves on the sensor body and a voltage in the piezo-electric microphones. Each hail event is stored in the internal memory, including the time and date. The memory can be read out via a USB port at any time after one or more hail events. HARE was tested and calibrated with the help of a newly constructed pneumatic hail gun. The voltage signal at the top plate microphone of HARE increases linearly with hailstone momentum, whereas at the bottom plate it increases linearly with hailstone kinetic energy. For large hailstones the accuracy of HARE is on the order of 10%. Calibration of HARE is still in progress and it has not yet been tested in real hailfalls. Both an online device and an autonomous one are available for a large number of possible applications. Lately there has been interest in using HARE at solar power plants in Southern Europe to prevent the expensive modules from becoming damaged. Perhaps HARE could also participate in new and existing hail observing networks.
Measurement and Characterization of Helicopter Noise in Steady-State and Maneuvering Flight
NASA Technical Reports Server (NTRS)
Schmitz, Fredric H.; Greenwood, Eric; Sickenberger, Richard D.; Gopalan, Gaurav; Sim, Ben Well-C; Conner, David; Moralez, Ernesto; Decker, William A.
2007-01-01
A special acoustic flight test program was performed on the Bell 206B helicopter outfitted with an in-flight microphone boom/array attached to the helicopter while simultaneous acoustic measurements were made using a linear ground array of microphones arranged to be perpendicular to the flight path. Air and ground noise measurements were made in steady-state longitudinal and steady turning flight, and during selected dynamic maneuvers. Special instrumentation, including direct measurement of the helicopter's longitudinal tip-path-plane (TPP) angle, Differential Global Positioning System (DGPS) and Inertial Navigation Unit (INU) measurements, and a pursuit guidance display, was used to measure important noise-controlling parameters and to make the task of flying precise operating conditions and flight tracks easier for the pilot. Special care was also taken to test only in very low winds. The resulting acoustic data are of relatively high quality and show the value of carefully monitoring and controlling the helicopter's performance state. This paper has shown experimentally that microphones close to the helicopter can be used to estimate the specific noise sources that radiate to the far field, if the microphones are positioned correctly relative to the noise source. Directivity patterns for steady turning flight were also developed, for the first time, and connected to the turning performance of the helicopter. Some of the acoustic benefits of combining normally separated flight segments (i.e., an accelerated segment and a descending segment) were also demonstrated.
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Olson, Larry (Technical Monitor)
1995-01-01
The NFAC 40- by 80-Foot Wind Tunnel at Ames is being refurbished with a new, deep acoustic lining in the test section which will make the facility nearly anechoic over a large frequency range. The modification history, key elements, and schedule will be discussed. Design features and expected performance gains will be described. Background noise reductions will be summarized. Improvements in aeroacoustic research techniques have been developed and used recently at the NFAC on several wind tunnel tests of High Speed Research models. Research on quiet inflow microphones and struts will be described. The Acoustic Survey Apparatus in the 40x80 will be illustrated. A special intensity probe was tested for source localization. Multi-channel, high-speed digital data acquisition is now used for acoustics. Most importantly, phased microphone arrays have been developed and tested which have proven to be very powerful for source identification and increased signal-to-noise ratio. Use of these tools for the HEAT model will be illustrated. In addition, an acoustically absorbent symmetry plane was built to satisfy the HEAT semispan aerodynamic and acoustic requirements. The acoustic performance of that symmetry plane will be shown.
NASA Astrophysics Data System (ADS)
Hedlin, Michael; de Groot-Hedlin, Catherine; Hoffmann, Lars; Alexander, M. Joan; Stephan, Claudia
2016-04-01
The upgrade of the USArray Transportable Array (TA) with microbarometers and infrasound microphones has created an opportunity for a broad range of new studies of atmospheric sources and of the large- and small-scale atmospheric structure through which signals from these events propagate. These studies are akin to early studies of seismic events and the Earth's interior structure that were made possible by the first seismic networks. In one early study with the new dataset, we use the method of de Groot-Hedlin and Hedlin (2015) to recast the TA as a massive collection of 3-element arrays to detect and locate large infrasonic events. More than 2,000 events were detected in 2013. The events cluster in highly active regions on land and offshore. Stratospherically ducted signals from some of these events have been recorded more than 2,000 km from the source and clearly show dispersion due to propagation through atmospheric gravity waves. Modeling of these signals has been used to test statistical models of atmospheric gravity waves. The network is also useful for making direct observations of gravity waves. We are currently studying TA and satellite observations of gravity waves from individual events to better understand how the waves near ground level relate to those observed aloft. We are also studying the long-term statistics of these waves from the beginning of 2010 through 2014. Early work using data band-pass filtered at periods of 1-6 hr shows that both the TA and satellite data reveal highly active source regions, such as near the Great Lakes. Reference: de Groot-Hedlin and Hedlin (2015), A method for detecting and locating geophysical events using clusters of arrays, Geophysical Journal International, 203, 960-971, doi:10.1093/gji/ggv345.
Volcanic Thunder From Explosive Eruptions at Bogoslof Volcano, Alaska
NASA Astrophysics Data System (ADS)
Haney, Matthew M.; Van Eaton, Alexa R.; Lyons, John J.; Kramer, Rebecca L.; Fee, David; Iezzi, Alexandra M.
2018-04-01
Lightning often occurs during ash-producing eruptive activity, and its detection is now being used in volcano monitoring for rapid alerts. We report on infrasonic and sonic recordings of the related, but previously undocumented, phenomenon of volcanic thunder. We observe volcanic thunder during the waning stages of two explosive eruptions at Bogoslof volcano, Alaska, on a microphone array located 60 km away. Thunder signals arrive from a different direction than coeruptive infrasound generated at the vent following an eruption on 10 June 2017, consistent with locations from lightning networks. For the 8 March 2017 eruption, arrival times and amplitudes of high-frequency thunder signals correlate well with the timing and strength of lightning detections. In both cases, the thunder is associated with lightning that continues after significant eruptive activity has ended. Infrasonic and sonic observations of volcanic thunder offer a new avenue for studying electrification processes in volcanic plumes.
Active Noise Control of Low Speed Fan Rotor-Stator Modes
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Hu, Ziqiang; Pla, Frederic G.; Heidelberg, Laurence J.
1996-01-01
This report describes the Active Noise Cancellation System designed by General Electric and tested in the NASA Lewis Research Center's 48 inch Active Noise Control Fan. The goal of this study was to assess the feasibility of using wall mounted secondary acoustic sources and sensors within the duct of a high bypass turbofan aircraft engine for active noise cancellation of fan tones. The control system is based on a modal control approach. A known acoustic mode propagating in the fan duct is cancelled using an array of flush-mounted compact sound sources. Controller inputs are signals from a shaft encoder and a microphone array which senses the residual acoustic mode in the duct. The canceling modal signal is generated by a modal controller. The key results are that the (6,0) mode was completely eliminated at 920 Hz and substantially reduced elsewhere. The total tone power was reduced 9.4 dB. Farfield 2BPF SPL reductions of 13 dB were obtained. The (4,0) and (4,1) modes were reduced simultaneously yielding a 15 dB modal PWL decrease. Global attenuation of PWL was obtained using an actuator and sensor system totally contained within the duct.
Evaluation of a scale-model experiment to investigate long-range acoustic propagation
NASA Technical Reports Server (NTRS)
Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.
1987-01-01
Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones, collinear with a continuous-wave sound source, were measured over a range of 10 to 160 wavelengths at a nominal test frequency of 10 kHz. Most tests were made for a hard model surface (plywood), but limited tests were also made for a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed that was caused by disturbances in the propagation medium. There was also evidence of extraneous propagation-path contributions to data irregularities at the more remote microphones. Sensitivity studies for the hard-surface model indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.
A Comparative Study of a 1/4-Scale Gulfstream G550 Aircraft Nose Gear Model
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Neuhart, Dan H.; Zawodny, Nikolas S.; Liu, Fei; Yardibi, Tarik; Cattafesta, Louis; Van de Ven, Thomas
2009-01-01
A series of fluid dynamic and aeroacoustic wind tunnel experiments is performed at the University of Florida Aeroacoustic Flow Facility and the NASA Langley Basic Aerodynamic Research Tunnel Facility on a high-fidelity 1/4-scale model of a Gulfstream G550 aircraft nose gear. The primary objectives of this study are to obtain a comprehensive aeroacoustic dataset for a nose landing gear and to provide a clearer understanding of landing gear contributions to the overall airframe noise of commercial aircraft in the landing configuration. Data measurement and analysis consist of mean and fluctuating model surface pressures, noise source localization maps using a large-aperture microphone directional array, and far-field noise level spectra determined with a linear array of free-field microphones. A total of 24 test runs are performed, consisting of four model assembly configurations, each of which is subjected to three test section speeds, in two different test section orientations. The model assembly configurations vary in complexity from a fully-dressed to a partially-dressed geometry. The two model orientations provide flyover and sideline views from the perspective of a phased acoustic array for noise source localization via beamforming. Results show that the torque arm section of the model exhibits the highest rms pressures for all model configurations, which is also evident in the sideline-view noise source maps for the partially-dressed model geometries. Analysis of acoustic spectra from the linear-array microphones shows a slight decrease in sound pressure levels at mid to high frequencies for the partially-dressed, cavity-open model configuration. In addition, far-field sound pressure level spectra scale approximately with the 6th power of velocity and do not exhibit traditional Strouhal number scaling behavior.
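As a rough illustration of the 6th-power velocity scaling noted in the abstract above, the following minimal Python sketch computes the level change such a law implies; the tunnel speeds used here are hypothetical, not values from the test.

```python
import math

def spl_increase_db(v1, v2, exponent=6):
    """Level change implied by a U^n velocity scaling law:
    delta_SPL = 10*log10((v2/v1)**n) = n*10*log10(v2/v1)."""
    return 10.0 * exponent * math.log10(v2 / v1)

# Hypothetical tunnel speeds (m/s): a 6th-power law predicts about +10.6 dB
# when the flow speed rises by 50%.
print(round(spl_increase_db(40.0, 60.0), 1))
```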
NASA Astrophysics Data System (ADS)
Bishop, Steven S.; Moore, Timothy R.; Gugino, Peter; Smith, Brett; Kirkwood, Kathryn P.; Korman, Murray S.
2018-04-01
High Bandwidth Acoustic Detection System (HBADS) is an emerging active acoustic sensor technology under study by the US Army's Night Vision and Electronic Sensors Directorate. Mounted on a commercial all-terrain-type vehicle, it uses a single-source pulse chirp while moving and a new array (two rows, each containing eight microphones) mounted horizontally and oriented in a side-scan mode. Experiments are performed with this synthetic aperture air acoustic (SAA) array to image canonical ground targets in clutter or foliage. A commercial audio speaker transmits a linear FM chirp having an effective frequency range of 2 kHz to 15 kHz. The system includes an inertial navigation system using two differential GPS antennas, an inertial measurement unit, and a wheel encoder. A web camera is mounted midway between the two horizontal microphone arrays, and a meteorological unit acquires ambient temperature, pressure, and humidity information. A data acquisition system is central to the system's operation, which is controlled by a laptop computer. Recent experiments include imaging canonical targets located on the ground in a grassy field and similar targets camouflaged by natural vegetation along the side of a road. A recent modification implements SAA stripmap-mode interferometry for computing the reflectance of targets placed along the ground. Typical stripmap SAA parameters are chirp pulse length 10 or 40 ms, slant-range resolution c/(2*BW) = 0.013 m, microphone diameter D = 0.022 m, azimuthal resolution D/2 = 0.011 m, air sound speed c ≈ 340 m/s, and maximum vehicle speed ≈ 2 m/s.
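The quoted stripmap resolutions follow directly from the chirp bandwidth and the microphone aperture. The short sketch below reproduces them from the figures given in the abstract (2-15 kHz chirp, D = 0.022 m, c of roughly 340 m/s); it is a back-of-the-envelope check, not part of the HBADS processing chain.

```python
# Slant-range and azimuth resolution for the stripmap parameters quoted above.
c = 340.0                 # sound speed, m/s
bandwidth = 15e3 - 2e3    # chirp bandwidth, Hz
D = 0.022                 # microphone diameter (aperture), m

range_resolution = c / (2.0 * bandwidth)   # ~0.013 m, as quoted
azimuth_resolution = D / 2.0               # ~0.011 m (classic stripmap limit)

print(f"range resolution   ~ {range_resolution:.3f} m")
print(f"azimuth resolution ~ {azimuth_resolution:.3f} m")
```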
A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing
NASA Astrophysics Data System (ADS)
Cobos, Maximo; Lopez, Jose J.; Spors, Sascha
2010-12-01
Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads and is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. Experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
Vehicle Counting and Moving Direction Identification Based on Small-Aperture Microphone Array.
Zu, Xingshui; Zhang, Shaojie; Guo, Feng; Zhao, Qin; Zhang, Xin; You, Xing; Liu, Huawei; Li, Baoqing; Yuan, Xiaobing
2017-05-10
The varying trend of a moving vehicle's angles provides much important intelligence for an unattended ground sensor (UGS) monitoring system. The present study investigates the capabilities of a small-aperture microphone array (SAMA) based system to identify the number and moving direction of vehicles travelling on a previously established route. In this paper, a SAMA-based acoustic monitoring system, including the system hardware architecture and algorithm mechanism, is designed as a single node sensor for the application of UGS. The algorithm is built on the varying trend of a vehicle's bearing angles around the closest point of approach (CPA). We demonstrate the effectiveness of our proposed method with our designed SAMA-based monitoring system in various experimental sites. The experimental results in harsh conditions validate the usefulness of our proposed UGS monitoring system.
Design of an Acoustic Target Intrusion Detection System Based on Small-Aperture Microphone Array.
Zu, Xingshui; Guo, Feng; Huang, Jingchang; Zhao, Qin; Liu, Huawei; Li, Baoqing; Yuan, Xiaobing
2017-03-04
Automated surveillance of remote locations in a wireless sensor network is dominated by the detection algorithm, because actual intrusions at such locations are rare events. A detection method with low power consumption is therefore crucial for persistent surveillance to ensure longevity of the sensor network. A simple and effective two-stage algorithm, composed of an energy detector (ED) and a delay detector (DD) and operating entirely in the time domain on small-aperture microphone array (SAMA) data, is proposed. The algorithm exploits the quite different propagation velocities of wind noise and sound waves to improve the detection capability of the ED in the surveillance area. Experiments in four different fields with three types of vehicles show that the algorithm is robust to wind noise, with probabilities of detection and false alarm of 96.67% and 2.857%, respectively.
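For orientation, the following is a minimal sketch of the energy-detector stage only (a sliding short-time energy compared against a robust threshold). The frame length and threshold are hypothetical, and the delay-detector stage that rejects wind noise is not reproduced here; this is an illustration of the general idea, not the authors' implementation.

```python
import numpy as np

def energy_detector(x, frame_len=1024, threshold_db=6.0):
    """Flag frames whose short-time energy exceeds the median frame
    energy by `threshold_db` dB (first stage only)."""
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy_db = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    return energy_db > (np.median(energy_db) + threshold_db)

# Synthetic test: background noise with a louder tonal burst in the middle.
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(48000)
x[20000:24000] += 0.2 * np.sin(2 * np.pi * 120 * np.arange(4000) / 8000)
print(np.nonzero(energy_detector(x))[0])   # frames covering the burst
```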
NASA Astrophysics Data System (ADS)
Cannata, Andrea; Del Bello, Elisabetta; Kueppers, Ulrich; Privitera, Eugenio; Ricci, Tullio; Scarlato, Piergiorgio; Sciotto, Mariangela; Spina, Laura; Taddeucci, Jacopo; Pena Fernandez, Juan Jose; Sesterhenn, Joern
2016-04-01
On 5 July 2014, an eruptive fissure (hereafter referred to as EF) opened at the base of the North-East Crater (NEC) of Mt. Etna. EF produced both Strombolian explosions and lava effusion. Thanks to the multiparametric experiment planned in the framework of the MEDSUV project, we had the chance to acquire geophysical and volcanological data in order to investigate the ongoing volcanic activity at EF. Temporary instruments (2 broadband seismometers, 2 microphones, 3-microphone arrays, a high-speed video camera and a thermal camera) were deployed near the active vents during 15-16 July 2014 and were integrated with the data recorded by the permanent networks. Several kinds of studies are currently in progress, such as: frequency analysis by Fourier Transform and Short Time Fourier Transform to evaluate the spectral content of both seismic and acoustic signals; partitioning of seismic and acoustic energies, whose time variations could reflect changes in the volcanic dynamics; analysis of the inter-event times between explosions to characterize their recurrence behaviour; and classification of the waveforms of infrasound events. Furthermore, joint analysis of video signals and seismic-acoustic wavefields outlined relationships between pyroclast ejection velocity, total erupted mass, peak explosion pressure, and air-ground motion coupling. This multiparametric approach made it possible to distinguish and individually characterize the behavior of the two vents active along the eruptive fissure via their thermal, visible and infrasonic signatures, and shed light on the eruptive dynamics.
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of the source height estimate. Further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
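As background for the MUSIC step mentioned above, here is a minimal narrowband MUSIC pseudospectrum sketch for a uniform linear array on simulated data. It shows only the generic MUSIC building block; it is not the authors' ground-impedance formulation, and the array geometry, wavelength, and noise level are made up.

```python
import numpy as np

def music_spectrum(X, n_sources, d, wavelength, angles_deg):
    """Narrowband MUSIC pseudospectrum for an M-element uniform linear
    array; X is an (M, snapshots) complex data matrix."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = vecs[:, :M - n_sources]             # noise subspace
    p = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta) / wavelength)
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)

# Simulated 8-element half-wavelength array with one source at +20 degrees.
rng = np.random.default_rng(1)
M, N, wavelength = 8, 200, 1.0
d = wavelength / 2
a_true = np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(20)) / wavelength)
X = np.outer(a_true, rng.standard_normal(N)) + 0.1 * (
    rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
angles = np.arange(-90, 91)
print(angles[np.argmax(music_spectrum(X, 1, d, wavelength, angles))])  # ~20
```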
Mennill, Daniel J.; Burt, John M.; Fristrup, Kurt M.; Vehrencamp, Sandra L.
2008-01-01
A field test was conducted on the accuracy of an eight-microphone acoustic location system designed to triangulate the position of duetting rufous-and-white wrens (Thryothorus rufalbus) in Costa Rica’s humid evergreen forest. Eight microphones were set up in the breeding territories of twenty pairs of wrens, with an average inter-microphone distance of 75.2±2.6 m. The array of microphones was used to record antiphonal duets broadcast through stereo loudspeakers. The positions of the loudspeakers were then estimated by evaluating the delay with which the eight microphones recorded the broadcast sounds. Position estimates were compared to coordinates surveyed with a global-positioning system (GPS). The acoustic location system estimated the position of loudspeakers with an error of 2.82±0.26 m and calculated the distance between the “male” and “female” loudspeakers with an error of 2.12±0.42 m. Given the large range of distances between duetting birds, this relatively low level of error demonstrates that the acoustic location system is a useful tool for studying avian duets. Location error was influenced partly by the difficulties inherent in collecting high accuracy GPS coordinates of microphone positions underneath a lush tropical canopy, and partly by the complicating influence of irregular topography and thick vegetation on sound transmission. PMID:16708941
Acoustic Source Elevation Angle Estimates Using Two Microphones
2014-06-01
Elevation angles are successfully estimated, under certain conditions, for a loudspeaker broadcasting band-limited white noise. The U.S. Army uses acoustic arrays to track and locate various sources, including ground and airborne vehicles, small arms, mortars, and rockets.
Jet Noise Source Localization Using Linear Phased Array
NASA Technical Reports Server (NTRS)
Agboola, Ferni A.; Bridges, James
2004-01-01
A study was conducted to further clarify the interpretation and application of linear phased-array microphone results for localizing aeroacoustic sources in aircraft exhaust jets. Two model engine nozzles were tested at varying power cycles with the array set up parallel to the jet axis. The array position was also varied to determine the best location for the array. The results showed that it is possible to resolve jet noise sources with separation of the bypass and other components. The results also showed that a focused near-field image provides more realistic noise source localization at low to mid frequencies.
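Source maps of this kind come from conventional (delay-and-sum) beamforming of the array cross-spectral matrix along a scan line parallel to the jet. The sketch below illustrates that mechanism on a simulated single-source cross-spectral matrix; the array geometry, standoff distance, and frequency are invented for the example and are not the test conditions of the study.

```python
import numpy as np

def dsb_power(csm, mics_x, scan_x, standoff, freq, c=343.0):
    """Conventional beamformer power at scan points located `standoff`
    metres to the side of a linear array (free-field monopole model)."""
    k = 2 * np.pi * freq / c
    power = []
    for xs in scan_x:
        r = np.hypot(mics_x - xs, standoff)     # mic-to-scan-point distances
        v = np.exp(-1j * k * r) / r             # steering vector
        v /= np.linalg.norm(v)
        power.append(np.real(v.conj() @ csm @ v))
    return np.array(power)

# Simulated cross-spectral matrix for a single 4 kHz source at x = 0.6 m.
mics_x = np.linspace(-0.5, 0.5, 16)
r_src = np.hypot(mics_x - 0.6, 1.5)
p = np.exp(-1j * 2 * np.pi * 4000 / 343.0 * r_src) / r_src
csm = np.outer(p, p.conj())
scan_x = np.linspace(-1.0, 2.0, 301)
print(scan_x[np.argmax(dsb_power(csm, mics_x, scan_x, 1.5, 4000.0))])  # ~0.6
```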
Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments
NASA Astrophysics Data System (ADS)
Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.
2008-04-01
We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of, non-collocated but otherwise identical, sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The direction-of-arrivals determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; and covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very-short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
Phonocardiography with a smartphone
NASA Astrophysics Data System (ADS)
Thoms, Lars-Jochen; Colicchia, Giuseppe; Girwidz, Raimund
2017-03-01
When a stethoscope is placed on the chest over the heart, sounds coming from the heart can be directly heard. These sound vibrations can be captured through a microphone and the electrical signals from the transducer can be processed and plotted in a phonocardiogram. Students can easily use a microphone and smartphone to capture and analyse characteristic heart sounds.
Phonocardiography with a Smartphone
ERIC Educational Resources Information Center
Thoms, Lars-Jochen; Colicchia, Giuseppe; Girwidz, Raimund
2017-01-01
When a stethoscope is placed on the chest over the heart, sounds coming from the heart can be directly heard. These sound vibrations can be captured through a microphone and the electrical signals from the transducer can be processed and plotted in a phonocardiogram. Students can easily use a microphone and smartphone to capture and analyse…
Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement.
Maryn, Youri; Zarowski, Andrzej
2015-11-01
Sound intensity is an important acoustic feature of voice/speech signals. Yet recordings are performed with different microphone, amplifier, and computer configurations, and it is therefore crucial to calibrate sound intensity measures of clinical audio recording and analysis systems on the basis of output of a sound-level meter. This study was designed to evaluate feasibility, validity, and accuracy of calibration methods, including audiometric speech noise signals and human voice signals under typical speech conditions. Calibration consisted of 3 comparisons between data from 29 measurement microphone-and-computer systems and data from the sound-level meter: signal-specific comparison with audiometric speech noise at 5 levels, signal-specific comparison with natural voice at 3 levels, and cross-signal comparison with natural voice at 3 levels. Intensity measures from recording systems were then linearly converted into calibrated data on the basis of these comparisons, and validity and accuracy of calibrated sound intensity were investigated. Very strong correlations and quasisimilarity were found between calibrated data and sound-level meter data across calibration methods and recording systems. Calibration of clinical sound intensity measures according to this method is feasible, valid, accurate, and representative for a heterogeneous set of microphones and data acquisition systems in real-life circumstances with distinct noise contexts.
Microphone directionality, pre-emphasis filter, and wind noise in cochlear implants.
Chung, King; McKibben, Nicholas
2011-10-01
Wind noise can be a nuisance or a debilitating masker for cochlear implant users in outdoor environments. Previous studies indicated that wind noise at the microphone/hearing aid output had high levels of low-frequency energy and the amount of noise generated is related to the microphone directionality. Currently, cochlear implants only offer either directional microphones or omnidirectional microphones for users at-large. As all cochlear implants utilize pre-emphasis filters to reduce low-frequency energy before the signal is encoded, effective wind noise reduction algorithms for hearing aids might not be applicable for cochlear implants. The purposes of this study were to investigate the effect of microphone directionality on speech recognition and perceived sound quality of cochlear implant users in wind noise and to derive effective wind noise reduction strategies for cochlear implants. A repeated-measure design was used to examine the effects of spectral and temporal masking created by wind noise recorded through directional and omnidirectional microphones and the effects of pre-emphasis filters on cochlear implant performance. A digital hearing aid was programmed to have linear amplification and relatively flat in-situ frequency responses for the directional and omnidirectional modes. The hearing aid output was then recorded from 0 to 360° at flow velocities of 4.5 and 13.5 m/sec in a quiet wind tunnel. Sixteen postlingually deafened adult cochlear implant listeners who reported to be able to communicate on the phone with friends and family without text messages participated in the study. Cochlear implant users listened to speech in wind noise recorded at locations that the directional and omnidirectional microphones yielded the lowest noise levels. Cochlear implant listeners repeated the sentences and rated the sound quality of the testing materials. Spectral and temporal characteristics of flow noise, as well as speech and/or noise characteristics before and after the pre-emphasis filter, were analyzed. Correlation coefficients between speech recognition scores and crest factors of wind noise before and after pre-emphasis filtering were also calculated. Listeners obtained higher scores using the omnidirectional than the directional microphone mode at 13.5 m/sec, but they obtained similar speech recognition scores for the two microphone modes at 4.5 m/sec. Higher correlation coefficients were obtained between speech recognition scores and crest factors of wind noise after pre-emphasis filtering rather than before filtering. Cochlear implant users would benefit from both directional and omnidirectional microphones to reduce far-field background noise and near-field wind noise. Automatic microphone switching algorithms can be more effective if the incoming signal were analyzed after pre-emphasis filters for microphone switching decisions. American Academy of Audiology.
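The crest factor used in the correlations above is simply the peak-to-RMS ratio of the noise, computed before and after pre-emphasis. The sketch below shows that computation on a synthetic low-frequency-heavy signal; the first-order pre-emphasis filter here is a generic stand-in, not the pre-emphasis filter of any particular implant.

```python
import numpy as np
from scipy.signal import lfilter

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

# Synthetic low-frequency-heavy stand-in for wind noise.
rng = np.random.default_rng(3)
noise = lfilter([1.0], [1.0, -0.95], rng.standard_normal(16000))

# Generic first-order pre-emphasis, y[n] = x[n] - 0.97 x[n-1] (assumed form).
emphasized = lfilter([1.0, -0.97], [1.0], noise)

print(round(crest_factor_db(noise), 1), round(crest_factor_db(emphasized), 1))
```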
Park, Steve; Guan, Xiying; Kim, Youngwan; Creighton, Francis (Pete) X.; Wei, Eric; Kymissis, Ioannis (John); Nakajima, Hideko Heidi; Olson, Elizabeth S.
2018-01-01
We report the fabrication and characterization of a prototype polyvinylidene fluoride polymer-based implantable microphone for detecting sound inside gerbil and human cochleae. With the current configuration and amplification, the signal-to-noise ratios were sufficiently high for normally occurring sound pressures and frequencies (ear canal pressures >50-60 dB SPL and 0.1-10 kHz), though 10 to 20 dB poorer than for some hearing aid microphones. These results demonstrate the feasibility of the prototype devices as implantable microphones for the development of totally implantable cochlear implants. For patients, this will improve sound reception by utilizing the outer ear and will improve the use of cochlear implants.
The Detection of Radiated Modes from Ducted Fan Engines
NASA Technical Reports Server (NTRS)
Farassat, F.; Nark, Douglas M.; Thomas, Russell H.
2001-01-01
The bypass duct of an aircraft engine is a low-pass filter allowing some spinning modes to radiate outside the duct. Knowledge of the radiated modes can help in noise reduction, as well as in the diagnosis of noise generation mechanisms inside the duct. We propose a nonintrusive technique using a circular microphone array outside the engine that measures the complex noise spectrum on an arc of a circle. The array is placed at various axial distances from the inlet or the exhaust of the engine. Using a model of noise radiation from the duct, an overdetermined system of linear equations is constructed for the complex amplitudes of the radial modes for a fixed circumferential mode. This system of linear equations is generally singular, indicating that the problem is ill-posed. Tikhonov regularization is employed to solve this system of equations for the unknown amplitudes of the radiated modes. An application of our mode detection technique using measured acoustic data from a circular microphone array is presented. We show that this technique can reliably detect radiated modes, with the possible exception of modes very close to cut-off.
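The Tikhonov step referred to above replaces the ill-posed least-squares problem with a regularized one, min ||Ax - b||^2 + lambda^2 ||x||^2. The sketch below solves that problem via the regularized normal equations on a random, nearly rank-deficient test matrix; it is a generic illustration of the regularization, not the paper's duct radiation model or choice of regularization parameter.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam^2 ||x||^2 via the regularized
    normal equations (A^H A + lam^2 I) x = A^H b."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam ** 2 * np.eye(n), A.conj().T @ b)

# Ill-conditioned test problem: two nearly dependent columns.
rng = np.random.default_rng(4)
A = rng.standard_normal((40, 6))
A[:, 5] = A[:, 4] + 1e-6 * rng.standard_normal(40)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(40)

print(np.round(tikhonov_solve(A, b, lam=1e-2), 2))
```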
Robust snow avalanche detection using machine learning on infrasonic array data
NASA Astrophysics Data System (ADS)
Thüring, Thomas; Schoch, Marcel; van Herwijnen, Alec; Schweizer, Jürg
2014-05-01
Snow avalanches may threaten people and infrastructure in mountain areas. Automated detection of avalanche activity would be highly desirable, in particular during times of poor visibility, to improve hazard assessment but also to monitor the effectiveness of avalanche control by explosives. In the past, a variety of remote sensing techniques and instruments for the automated detection of avalanche activity have been reported, based on radio waves (radar), seismic signals (geophones), optical signals (imaging sensors) or infrasonic signals (microphones). Optical imagery makes it possible to assess avalanche activity with very high spatial resolution, but it is strongly weather dependent. Radar- and geophone-based detection typically provides robust avalanche detection in all weather conditions, but is very limited in the size of the monitoring area. On the other hand, due to the long propagation distance of infrasound through air, the monitoring area of infrasonic sensors can cover a large territory using a single sensor (or an array). In addition, they are by far more cost effective than radars or optical imaging systems. Unfortunately, the reliability of infrasonic sensor systems has so far been rather low due to the strong variation of ambient noise (e.g. wind), causing a high false alarm rate. We analyzed the data collected by a low-cost infrasonic array system consisting of four sensors for the automated detection of avalanche activity at Lavin in the eastern Swiss Alps. A comparably large array aperture (~350 m) allows highly accurate time-delay estimation of signals that arrive at the sensors at different times, enabling precise source localization. An array of four sensors is sufficient for time-resolved source localization of signals in full 3D space, which is an excellent means of identifying true avalanche activity. Robust avalanche detection is then achieved using machine learning methods such as support vector machines. The system is initially trained using characteristic data features from known avalanche and non-avalanche events. Data features are obtained from the output of the source localization algorithm or from Fourier- or time-domain processing and support the learning phase of the system. A significantly improved detection rate as well as a reduction of the false alarm rate were achieved compared to previous approaches.
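The classification stage described above feeds array-processing features into a support vector machine. The following hedged sketch shows that pattern with scikit-learn on made-up feature vectors (back-azimuth spread, apparent velocity, band energy); the feature set, values, and labels are invented and do not reflect the operational system.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Made-up training features per event: [back-azimuth spread (deg),
# apparent velocity (m/s), band energy (dB)].  Labels: 1 = avalanche, 0 = noise.
X = np.array([[3., 330., 62.], [5., 345., 58.], [4., 338., 65.],
              [40., 600., 40.], [55., 800., 35.], [35., 700., 45.]])
y = np.array([1, 1, 1, 0, 0, 0])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)

# Classify a new, unseen event (hypothetical feature values).
print(clf.predict([[6.0, 335.0, 60.0]]))   # -> [1], avalanche-like
```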
Development of an Experimental Rig for Investigation of Higher Order Modes in Ducts
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Cabell, Randolph H.; Brown, Martha C.
2006-01-01
Continued progress to reduce fan noise emission from high bypass ratio engine ducts in aircraft increasingly relies on accurate description of the sound propagation in the duct. A project has been undertaken at NASA Langley Research Center to investigate the propagation of higher order modes in ducts with flow. This is a two-pronged approach, including development of analytic models (the subject of a separate paper) and installation of a laboratory-quality test rig. The purposes of the rig are to validate the analytical models and to evaluate novel duct acoustic liner concepts, both passive and active. The dimensions of the experimental rig test section scale to between 25% and 50% of the aft bypass ducts of most modern engines. The duct is of rectangular cross section so as to provide flexibility to design and fabricate test duct liner samples. The test section can accommodate flow paths that are straight through or offset from inlet to discharge, the latter design allowing investigation of the effect of curvature on sound propagation and duct liner performance. The maximum air flow rate through the duct is Mach 0.3. Sound in the duct is generated by an array of 16 high-intensity acoustic drivers. The signals to the loudspeaker array are generated by a multi-input/multi-output feedforward control system that has been developed for this project. The sound is sampled by arrays of flush-mounted microphones and a modal decomposition is performed at the frequency of sound generation. The data acquisition system consists of two arrays of flush-mounted microphones, one upstream of the test section and one downstream. The data are used to determine parameters such as the overall insertion loss of the test section treatment as well as the effect of the treatment on a modal basis such as mode scattering. The methodology used for modal decomposition is described, as is a description of the mode generation control system. Data are presented which demonstrate the performance of the controller to generate the desired mode while suppressing all other cut on modes in the duct.
NASA Astrophysics Data System (ADS)
Wakamatu, S.; Kawakata, H.; Hirano, S.
2017-12-01
Observation and analysis of infrasonic waves are important for volcanology because they could be associated with the mechanisms of volcanic tremors and earthquakes (Sakai et al., 2000). Around the Hakone volcano area, Japan, infrasonic waves were observed many times in 2015 (Yukutake et al., 2016, JpGU). In this area, more seismometers than microphones have been installed, so analysis of seismograms may also contribute to understanding some characteristics of the infrasonic waves. In this study, we focused on the infrasonic waves of July 1, 2015, in this area and discussed their propagation. We analyzed the vertical component of seven seismograms and two infrasound records; the instruments for these data have been installed within 5 km of the vent that emerged in the June 2015 eruption (HSRI, 2015). We summarized the distances of the observation points from the vent and the appearance of the signals in the seismograms and the microphone records in Table 1. We confirmed that, when the OWD microphone (Fig. 1) observed the infrasonic waves, the seismometers of the OWD and KIN surface seismic stations (Fig. 1) recorded pulse-like signals repeatedly, while the other five buried seismometers did not. At the same time, the NNT microphone (Fig. 1) recorded only unclear signals despite its shorter distance to the vent than that of the KIN station. We found that the appearance of pulse-like signals at the KIN seismic station is usually delayed by 10-11 seconds relative to their appearance at the OWD seismic station. The distance between these two stations is 3.5 km, so the signals in the seismograms could represent propagation of the infrasonic waves rather than seismic waves. If so, however, the infrasound propagation could be influenced by the topography of the area, because the signals are unclear in the NNT microphone record. To validate this interpretation, we simulated the diffraction of the infrasonic waves due to the topography. We executed a 3-D finite-difference calculation by discretizing the air above the area. With a 10 m topographic grid, we discussed the diffraction effect on the infrasonic wave propagation. Acknowledgments: We used the records acquired by the Japan Meteorological Agency, the Hot Spring Research Institute of Kanagawa Prefecture (HSRI), and the numerical map published by the Geospatial Information Authority of Japan.
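The interpretation above hinges on the apparent speed implied by the 10-11 s delay over the 3.5 km station separation; the one-line check below shows it falls near the sound speed in air rather than typical seismic velocities.

```python
# Apparent propagation speed implied by the observed OWD-to-KIN delay.
distance_m = 3500.0
for delay_s in (10.0, 11.0):
    print(f"{distance_m / delay_s:.0f} m/s")   # ~318-350 m/s, close to the sound speed in air
```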
Acoustic Measurement of Potato Cannon Velocity
ERIC Educational Resources Information Center
Courtney, Michael; Courtney, Amy
2007-01-01
Potato cannon velocity can be measured with a digitized microphone signal. A microphone is attached to the potato cannon muzzle, and a potato is fired at an aluminum target about 10 m away. Flight time can be determined from the acoustic waveform by subtracting the time in the barrel and time for sound to return from the target. The potato…
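The velocity follows from the corrected flight time described above. Here is a minimal worked example with hypothetical timings (the 10 m range is from the abstract; the total interval and barrel time are assumed values for illustration only).

```python
# Hypothetical numbers for the acoustic timing method described above.
range_m = 10.0            # muzzle-to-target distance
c = 343.0                 # speed of sound, m/s
t_total = 0.330           # s, interval between the two acoustic events on the recording (assumed)
t_barrel = 0.020          # s, time the potato spends in the barrel (assumed)
t_return = range_m / c    # s, time for the impact sound to travel back to the microphone

t_flight = t_total - t_barrel - t_return
print(f"flight time {t_flight:.3f} s  ->  velocity {range_m / t_flight:.0f} m/s")
```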
Acoustic characterization of wake vortices in ground effect
DOT National Transportation Integrated Search
2005-01-01
The experience and findings of an exploratory effort to characterize the sound emitted by aircraft wake vortices near the ground are presented. A line array of four directional microphones was deployed and recorded the wakes of several commercial...
Assessment of Railroad Locomotive Noise
DOT National Transportation Integrated Search
1976-08-01
Measurements of the noise generated by an SD40-2 diesel electric locomotive are described. The noise was measured in three types of moving tests: the first with the locomotive passing a 6-microphone array while under maximum power acceleration, the s...
Phased-Array Study of Dual-Flow Jet Noise: Effect of Nozzles and Mixers
NASA Technical Reports Server (NTRS)
Soo Lee, Sang; Bridges, James
2006-01-01
A 16-microphone linear phased-array installed parallel to the jet axis and a 32-microphone azimuthal phased-array installed in the nozzle exit plane have been applied to identify the noise source distributions of nozzle exhaust systems with various internal mixers (lobed and axisymmetric) and nozzles (three different lengths). Measurements of velocity were also obtained using cross-stream stereo particle image velocimetry (PIV). Among the three nozzle lengths tested, the medium length nozzle was the quietest for all mixers at high frequency on the highest speed flow condition. Large differences in source strength distributions between nozzles and mixers occurred at or near the nozzle exit for this flow condition. The beamforming analyses from the azimuthal array for the 12-lobed mixer on the highest flow condition showed that the core flow and the lobe area were strong noise sources for the long and short nozzles. The 12 noisy spots associated with the lobe locations of the 12-lobed mixer with the long nozzle were very well detected for the frequencies 5 KHz and higher. Meanwhile, maps of the source strength of the axisymmetric splitter show that the outer shear layer was the most important noise source at most flow conditions. In general, there was a good correlation between the high turbulence regions from the PIV tests and the high noise source regions from the phased-array measurements.
Active control of noise on the source side of a partition to increase its sound isolation
NASA Astrophysics Data System (ADS)
Tarabini, Marco; Roure, Alain; Pinhede, Cedric
2009-03-01
This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performances and stability were tested with different array configurations, loudspeaker directivities and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with the factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.
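For readers unfamiliar with FXLMS, the sketch below is a minimal single-channel filtered-x LMS loop cancelling a tone at one error microphone. The secondary-path model, signal frequencies, and step size are all assumed values; the experimental system above was multi-channel and decentralized, which this sketch does not reproduce.

```python
import numpy as np

# Single-channel FxLMS cancelling a 100 Hz tone at one error microphone.
fs, n = 2000, 20000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100 * t)                  # reference signal
d = 0.8 * np.sin(2 * np.pi * 100 * t + 0.6)      # disturbance at the error mic

s = np.array([0.0, 0.5, 0.3])                    # assumed secondary-path FIR model
L, mu = 8, 0.01
w = np.zeros(L)                                  # adaptive control filter
xbuf = np.zeros(L)                               # recent reference samples
ybuf = np.zeros(len(s))                          # recent loudspeaker outputs
fxbuf = np.zeros(L)                              # recent filtered-reference samples
err = np.zeros(n)

for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = w @ xbuf                                 # anti-noise drive signal
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    err[i] = d[i] + s @ ybuf                     # residual at the error microphone
    fx = s @ xbuf[:len(s)]                       # reference filtered by the secondary path
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w -= mu * err[i] * fxbuf                     # FxLMS weight update

print(round(float(np.mean(err[:2000] ** 2)), 4),
      round(float(np.mean(err[-2000:] ** 2)), 6))   # residual power drops sharply after convergence
```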
Three-Dimensional Application of DAMAS Methodology for Aeroacoustic Noise Source Definition
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Humphreys, William M., Jr.
2005-01-01
At the 2004 AIAA/CEAS Aeroacoustics Conference, a breakthrough in acoustic microphone array technology was reported by the authors. A Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) was developed which decouples the array design and processing influence from the noise being measured, using a simple and robust algorithm. For several prior airframe noise studies, it was shown to permit an unambiguous and accurate determination of acoustic source position and strength. As a follow-on effort, this paper examines the technique for three-dimensional (3D) applications. First, the ability of arrays of different size and design to focus longitudinally and laterally is examined for a range of source positions and frequencies. An advantage is found for larger array designs with higher-density microphone distributions towards the center. After defining a 3D grid generalized with respect to the array's beamforming characteristics, DAMAS is employed in simulated and experimental noise test cases. It is found that spatial resolution is much less sharp in the longitudinal direction in front of the array than in the side-to-side lateral direction. 3D DAMAS becomes useful for sufficiently large arrays at sufficiently high frequencies, but this can challenge computational capabilities with regard to the required extent and number of grid points. Also, larger arrays can strain the basic physical modeling assumptions that DAMAS and all traditional array methodologies use. An important experimental result is that turbulent shear layers can negatively impact attainable beamforming resolution. Still, the usefulness of 3D DAMAS is demonstrated by the measurement of landing gear noise source distributions in a difficult hard-wall wind tunnel environment.
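At its core, DAMAS treats the beamform map as a linear system relating unknown source strengths to the map through the array point-spread function and solves it iteratively with a non-negativity constraint. The sketch below is a heavily simplified 1-D illustration of that kind of constrained Gauss-Seidel-style iteration on a synthetic Gaussian point-spread function; it is not the NASA implementation and omits the grid, shading, and convergence details of the published algorithm.

```python
import numpy as np

def damas_like(A, b, n_iter=200):
    """Gauss-Seidel-style sweeps with a non-negativity constraint for
    A x = b, where b is the beamform map and A the point-spread matrix."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        for i in range(len(b)):
            r = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = max(r / A[i, i], 0.0)
    return x

# Synthetic 1-D problem: two point sources blurred by a Gaussian point-spread function.
n = 60
grid = np.arange(n)
A = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 3.0) ** 2)
x_true = np.zeros(n); x_true[20] = 1.0; x_true[35] = 0.5
b = A @ x_true                      # the "beamform map"
x_hat = damas_like(A, b)
print(np.argsort(x_hat)[-2:])       # the two strongest grid points, ~35 and 20
```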
NASA Astrophysics Data System (ADS)
Dieckman, Eric Allen
In order to perform useful tasks for us, robots must have the ability to notice, recognize, and respond to objects and events in their environment. This requires the acquisition and synthesis of information from a variety of sensors. Here we investigate the performance of a number of sensor modalities in an unstructured outdoor environment, including the Microsoft Kinect, thermal infrared camera, and coffee can radar. Special attention is given to acoustic echolocation measurements of approaching vehicles, where an acoustic parametric array propagates an audible signal to the oncoming target and the Kinect microphone array records the reflected backscattered signal. Although useful information about the target is hidden inside the noisy time domain measurements, the Dynamic Wavelet Fingerprint process (DWFP) is used to create a time-frequency representation of the data. A small-dimensional feature vector is created for each measurement using an intelligent feature selection process for use in statistical pattern classification routines. Using our experimentally measured data from real vehicles at 50 m, this process is able to correctly classify vehicles into one of five classes with 94% accuracy. Fully three-dimensional simulations allow us to study the nonlinear beam propagation and interaction with real-world targets to improve classification results.
López-Pacheco, María G; Sánchez-Fernández, Luis P; Molina-Lozano, Herón
2014-01-15
Noise levels of common sources such as vehicles, whistles, sirens, car horns and crowd sounds are mixed in urban soundscapes. Nowadays, environmental acoustic analysis is performed on mixture signals recorded by monitoring systems. These mixed signals make individual analysis difficult, although such analysis is useful for taking actions to reduce and control environmental noise. This paper aims at separating individual noise sources from recorded mixtures in order to evaluate the noise level of each estimated source. A method based on blind deconvolution and blind source separation in the wavelet domain is proposed. This approach provides a basis for improving results obtained in monitoring and analysis of common noise sources in urban areas. The method is validated through experiments based on knowledge of the predominant noise sources in urban soundscapes. Actual recordings of common noise sources are used to acquire mixture signals with a microphone array in semi-controlled environments. The developed method has demonstrated great performance improvements in the identification, analysis and evaluation of common urban sources. © 2013 Elsevier B.V. All rights reserved.
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and the signals of the virtual sources, the spatial sound at the selected point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as the virtual source representation is confirmed.
Yuldashev, Petr; Karzova, Maria; Khokhlova, Vera; Ollivier, Sébastien; Blanc-Benon, Philippe
2015-06-01
A Mach-Zehnder interferometer is used to measure spherically diverging N-waves in homogeneous air. An electrical spark source is used to generate high-amplitude (1800 Pa at 15 cm from the source) and short duration (50 μs) N-waves. Pressure waveforms are reconstructed from optical phase signals using an Abel-type inversion. It is shown that the interferometric method allows one to reach 0.4 μs of time resolution, which is 6 times better than the time resolution of a 1/8-in. condenser microphone (2.5 μs). Numerical modeling is used to validate the waveform reconstruction method. The waveform reconstruction method provides an error of less than 2% with respect to amplitude in the given experimental conditions. Optical measurement is used as a reference to calibrate a 1/8-in. condenser microphone. The frequency response function of the microphone is obtained by comparing the spectra of the waveforms resulting from optical and acoustical measurements. The optically measured pressure waveforms filtered with the microphone frequency response are in good agreement with the microphone output voltage. Therefore, an optical measurement method based on the Mach-Zehnder interferometer is a reliable tool to accurately characterize evolution of weak shock waves in air and to calibrate broadband acoustical microphones.
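The calibration step described above, estimating the microphone frequency response by comparing the spectra of the optically and acoustically measured waveforms, reduces to a ratio of spectra. The sketch below demonstrates that ratio on synthetic signals: an idealized N-wave is passed through a known stand-in filter, and the filter response is recovered from the two waveforms. The sampling rate, pulse shape, and filter are assumptions for the illustration, not the experimental values.

```python
import numpy as np
from scipy.signal import lfilter

def frf_from_waveforms(reference, measured, fs):
    """Frequency-response estimate H(f) = FFT(measured) / FFT(reference)
    (plain spectral ratio, no averaging or windowing)."""
    H = np.fft.rfft(measured) / np.fft.rfft(reference)
    f = np.fft.rfftfreq(len(reference), 1.0 / fs)
    return f, H

fs = 2_000_000                                      # 2 MHz sampling, sub-microsecond resolution
t = np.arange(4000) / fs
ref = np.where(t < 50e-6, 1.0 - t / 25e-6, 0.0)     # idealized 50-microsecond N-wave ("optical" reference)
meas = lfilter([0.2], [1.0, -0.8], ref)             # "microphone" output through a stand-in filter

f, H = frf_from_waveforms(ref, meas, fs)
print(round(abs(H[1]), 2), round(abs(H[len(f) // 2]), 2))  # ~1.0 at low frequency, ~0.16 at fs/4
```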
Application of MetaRail railway noise measurement methodology: comparison of three track systems
NASA Astrophysics Data System (ADS)
Kalivoda, M.; Kudrna, M.; Presle, G.
2003-10-01
Within the Fourth RTD Framework Programme, the European Union supported a research project on improving railway noise (emission) measurement methodologies. This project, called MetaRail, proposed a number of procedures and methods to decrease systematic measurement errors and to increase reproducibility. In 1999 the Austrian Federal Railways installed 1000 m of test track to explore the long-term behaviour of three different ballast track systems. This test covered track stability, rail forces and ballast forces, as well as vibration transmission and noise emission. The noise study was carried out using the experience and methods developed within MetaRail. This includes rail roughness measurements as well as measurements of vertical railhead, sleeper and ballast vibration in parallel with the noise emission measurement with a single microphone at a distance of 7.5 m from the track. Using a test train with block- and disc-braked vehicles helped to control operational conditions and indicated the influence of different wheel roughnesses. It has been shown that the parallel recording of several vibration signals together with the noise signal makes it possible to evaluate the contributions of car-body, sleeper, track and wheel sources to the overall noise emission. It must be stressed that this method does not provide the spatial focusing of a microphone array; however, it is far easier to apply and thus cheaper. Within this study, noise emission was allocated to the different elements to answer questions such as whether the sleeper eigenfrequency is transmitted into the rail.
NASA Astrophysics Data System (ADS)
Werner, E.
In 1876, Alexander Graham Bell described his first telephone, whose microphone used magnetic induction to convert the voice input into an electric output signal. This basic principle led to a variety of designs optimized for different needs, from hearing-impaired users to singers and broadcast announcers. Of the various sound-pressure versions, only the moving-coil design is still in mass production for speech and music applications.
Posatskiy, A O; Chau, T
2012-04-01
Mechanomyography (MMG) is an important kinesiological tool and potential communication pathway for individuals with disabilities. However, MMG is highly susceptible to contamination by motion artifact due to limb movement. A better understanding of the nature of this contamination and its effects on different sensing methods is required to inform robust MMG sensor design. Therefore, in this study, we recorded MMG from the extensor carpi ulnaris of six able-bodied participants using three different co-located condenser microphone and accelerometer pairings. Contractions at 30% MVC were recorded with and without a shaker-induced single-frequency forearm motion artifact delivered via a custom test rig. Using a signal-to-signal-plus-noise-ratio and the adaptive Neyman curve-based statistic, we found that microphone-derived MMG spectra were significantly less influenced by motion artifact than corresponding accelerometer-derived spectra (p⩽0.05). However, non-vanishing motion artifact harmonics were present in both spectra, suggesting that simple bandpass filtering may not remove artifact influences permeating into typical MMG bands of interest. Our results suggest that condenser microphones are preferred for MMG recordings when the mitigation of motion artifact effects is important. Copyright © 2011. Published by Elsevier Ltd.
Koch, Martin; Seidler, Hannes; Hellmuth, Alexander; Bornitz, Matthias; Lasurashvili, Nikoloz; Zahnert, Thomas
2013-07-01
There is a great demand for implantable microphones for future generations of implantable hearing aids, especially cochlear implants. An implantable middle ear microphone based on a piezoelectric membrane sensor for insertion into the incudostapedial gap is investigated. The sensor is designed to measure the sound-induced forces acting on the center of the membrane. The sensor mechanically couples to the adjacent ossicles via two contact areas, the sensor membrane and the sensor housing. The sensing element is a piezoelectric single crystal bonded to a titanium membrane. The sensor allows a minimally invasive and reversible implantation without removal of ossicles and without additional sensor fixation in the tympanic cavity. This study investigates the implantable microphone sensor and its implantation concept. It intends to quantify the influence of the sensor's insertion position on the achievable microphone sensitivity. The investigation considers anatomical and pathological variations of the middle ear geometry and its space limitations. Temporal bone experiments on a laboratory model show that anatomical and pathological variations of the middle ear geometry can prevent the sensor from being placed optimally within the incudostapedial joint. Beyond the scatter of transfer functions due to anatomic variations of individual middle ears, variations in the sensor position within the ossicular chain have a considerable effect on the transfer characteristics of the middle ear microphone. The centering of the sensor between incus and stapes, the direction of insertion (membrane toward stapes or toward incus), and the effect of additional contact points with surrounding anatomic structures affect the signal yield of the implanted sensor. The presence of additional contact points has a considerable impact on the sensitivity, yet the microphone sensitivity is quite robust against small changes in the positioning of the incus on the sensor. Signal losses can be avoided by adjusting the position of the sensor within the joint. The findings allow the development of an improved surgical insertion technique to ensure the maximum achievable signal yield of the membrane sensor in the ISJ and provide valuable knowledge for future design considerations, including sensor miniaturization and geometry. Measurements of the implanted sensor in temporal bone specimens showed a microphone sensitivity on the order of 1 mV/Pa. This article is part of a special issue entitled "MEMRO 2012". Copyright © 2012 Elsevier B.V. All rights reserved.
Infra-sound Signature of Lightning
NASA Astrophysics Data System (ADS)
Arechiga, R. O.; Badillo, E.; Johnson, J.; Edens, H. E.; Rison, W.; Thomas, R. J.
2012-12-01
We have analyzed thunder from over 200 lightning flashes to determine which part of thunder comes from the gas-dynamic expansion of portions of the rapidly heated lightning channel and which from electrostatic field changes. Thunder signals were recorded by a ~1500 m network of three to four 4-element microphone arrays deployed in the Magdalena Mountains of New Mexico in the summers of 2011 and 2012. The higher-frequency infrasound and audio-range portion of thunder is thought to come from the gas-dynamic expansion, while the electrostatic mechanism gives rise to a signature infrasound pulse peaked at a few Hz. More than 50 signature infrasound pulses were observed in different portions of the thunder signal, with no preference towards the beginning or the end of the signal. Detection of the signature pulse sometimes occurs on only one array and sometimes on several arrays, which agrees with the theory that the pulse is highly directional (i.e., the recordings have to be in a specific position with respect to the cloud generating the pulse to be able to detect it). The detection of these pulses under quiet wind conditions by different acoustic arrays corroborates the electrostatic mechanism originally proposed by Wilson [1920], further studied by Dessler [1973] and Few [1985], observed by Bohannon [1983] and Balachandran [1979, 1983], and recently analyzed by Pasko [2009]. Pasko employed a model to explain the electrostatic-to-acoustic energy conversion and the initial compression waves in observed infrasonic pulses, which agrees with the observations we have made. We present thunder samples that exhibit signature infrasound pulses at different times, together with acoustic source reconstructions, to demonstrate the beaming effect.
NASA Astrophysics Data System (ADS)
Thong-un, Natee; Hirata, Shinnosuke; Kurosawa, Minoru K.
2015-07-01
In this paper, we describe an extension of an airborne ultrasonic system for object localization in three-dimensional space for navigation. The system, which revises the microphone arrangement and algorithm, expands the object-position measurement range from +90° in a previous method up to +180° for both the elevation and azimuth angles. The proposed system consists of a sound source and four acoustical receivers. Moreover, the system is designed to utilize low-cost devices, and low-cost computation relying on 1-bit signal processing is used to support real-time operation on a field-programmable gate array (FPGA). An object location is identified using spherical coordinates. A spherical object, which has a curved surface, is used as the target for this system. The pulse transmitted to the target is a linear-period-modulated ultrasonic wave sweeping from 50 to 20 kHz. The statistical evaluation of this work is an experimental investigation of repeatability.
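The abstract does not give the processing details, but the core ranging step it describes (matched filtering of the received echo against the linear-period-modulated transmit pulse to obtain time of flight, then range from the speed of sound) can be illustrated with a brief sketch. The sample rate, pulse length, target range, and noise level below are assumptions for illustration only, not values from the paper.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 500_000                     # sample rate (assumed), Hz
c = 343.0                        # speed of sound in air, m/s
t = np.arange(0, 5e-3, 1 / fs)   # 5 ms transmit pulse (assumed length)
# Linear-period (hyperbolic-frequency) modulated pulse sweeping 50 kHz -> 20 kHz
tx = chirp(t, f0=50e3, t1=t[-1], f1=20e3, method='hyperbolic')

# Simulated received signal: attenuated echo delayed by the two-way travel time
true_range = 1.2                               # m (assumed)
delay = int(round(2 * true_range / c * fs))    # samples
rx = np.zeros(delay + len(tx) + 1000)
rx[delay:delay + len(tx)] += 0.2 * tx
rx += 0.02 * np.random.randn(len(rx))          # measurement noise

# Matched filter (cross-correlation with the transmit template) gives time of flight
corr = correlate(rx, tx, mode='valid')
tof = np.argmax(np.abs(corr)) / fs
print(f"estimated range: {c * tof / 2:.3f} m")
```

Azimuth and elevation angles would then follow from time-of-flight differences among the four receivers, a step not shown in this sketch.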
NASA Astrophysics Data System (ADS)
Ribeiro, Juliano G. C.; Serrenho, Felipe G.; Apolinário, José A.; Ramos, António L. L.
2018-04-01
Spotting a shooter from a drone has been the subject of great interest lately due to its many applications in the fields of defense, security, and law enforcement. Using a drone can be an effective way to detect potential threats in many real-life scenarios. Nevertheless, acoustic signals recorded from a drone usually exhibit a very low SNR, mainly due to the distance to the source and the proximity of the sensors to the propellers. This is a serious limiting factor and, therefore, the use of signal enhancement techniques is required. This work addresses the problem of determining the Direction-of-Arrival (DoA) of the muzzle blast, captured using a planar microphone array mounted on a commercial DJI PHANTOM 4 drone in flight. This new shooter localization method relies solely on detecting and estimating the DoA of the muzzle blast. However, the typically low SNR in this scenario requires the use of preprocessing techniques, such as signal clipping and median filtering, to enhance the signal of interest (the muzzle blast). In addition, we employ a recently introduced improved data-selection DoA estimation method suitable for gunshot signals recorded from a low- to medium-altitude mobile aerial platform. The positive results achieved indicate that this approach is effective and of practical interest.
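As a rough, hypothetical illustration of the preprocessing steps named in the abstract (signal clipping and median filtering) followed by a simple far-field DoA estimate from one microphone pair, the sketch below uses assumed values for the sample rate and pair spacing; it is not the authors' improved data-selection method.

```python
import numpy as np
from scipy.signal import medfilt

fs = 48_000      # sample rate (assumed), Hz
c = 343.0        # speed of sound, m/s
d = 0.10         # spacing of one microphone pair, m (assumed)

def enhance(x, clip_sigma=3.0, kernel=5):
    """Limit extreme amplitudes, then median-filter to suppress impulsive noise."""
    s = clip_sigma * np.std(x)
    return medfilt(np.clip(x, -s, s), kernel_size=kernel)

def pair_doa(x1, x2):
    """Far-field DoA (degrees) for one mic pair from the TDOA of the muzzle blast."""
    corr = np.correlate(x1, x2, mode='full')
    lag = np.argmax(np.abs(corr)) - (len(x2) - 1)   # delay in samples
    sin_theta = np.clip(c * (lag / fs) / d, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# x1, x2 would be two enhanced channels of the planar array:
# theta = pair_doa(enhance(x1), enhance(x2))
```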
Jet-Surface Interaction Test: Phased Array Noise Source Localization Results
NASA Technical Reports Server (NTRS)
Podboy, Gary G.
2013-01-01
An experiment was conducted to investigate the effect that a planar surface located near a jet flow has on the noise radiated to the far-field. Two different configurations were tested: 1) a shielding configuration in which the surface was located between the jet and the far-field microphones, and 2) a reflecting configuration in which the surface was mounted on the opposite side of the jet, and thus the jet noise was free to reflect off the surface toward the microphones. Both conventional far-field microphone and phased array noise source localization measurements were obtained. This paper discusses phased array results, while a companion paper (Brown, C.A., "Jet-Surface Interaction Test: Far-Field Noise Results," ASME paper GT2012-69639, June 2012.) discusses far-field results. The phased array data show that the axial distribution of noise sources in a jet can vary greatly depending on the jet operating condition and suggests that it would first be necessary to know or be able to predict this distribution in order to be able to predict the amount of noise reduction to expect from a given shielding configuration. The data obtained on both subsonic and supersonic jets show that the noise sources associated with a given frequency of noise tend to move downstream, and therefore, would become more difficult to shield, as jet Mach number increases. The noise source localization data obtained on cold, shock-containing jets suggests that the constructive interference of sound waves that produces noise at a given frequency within a broadband shock noise hump comes primarily from a small number of shocks, rather than from all the shocks at the same time. The reflecting configuration data illustrates that the law of reflection must be satisfied in order for jet noise to reflect off of a surface to an observer, and depending on the relative locations of the jet, the surface, and the observer, only some of the jet noise sources may satisfy this requirement.
Jet-Surface Interaction Test: Phased Array Noise Source Localization Results
NASA Technical Reports Server (NTRS)
Podboy, Gary G.
2012-01-01
An experiment was conducted to investigate the effect that a planar surface located near a jet flow has on the noise radiated to the far-field. Two different configurations were tested: 1) a shielding configuration in which the surface was located between the jet and the far-field microphones, and 2) a reflecting configuration in which the surface was mounted on the opposite side of the jet, and thus the jet noise was free to reflect off the surface toward the microphones. Both conventional far-field microphone and phased array noise source localization measurements were obtained. This paper discusses phased array results, while a companion paper discusses far-field results. The phased array data show that the axial distribution of noise sources in a jet can vary greatly depending on the jet operating condition and suggests that it would first be necessary to know or be able to predict this distribution in order to be able to predict the amount of noise reduction to expect from a given shielding configuration. The data obtained on both subsonic and supersonic jets show that the noise sources associated with a given frequency of noise tend to move downstream, and therefore, would become more difficult to shield, as jet Mach number increases. The noise source localization data obtained on cold, shock-containing jets suggests that the constructive interference of sound waves that produces noise at a given frequency within a broadband shock noise hump comes primarily from a small number of shocks, rather than from all the shocks at the same time. The reflecting configuration data illustrates that the law of reflection must be satisfied in order for jet noise to reflect off of a surface to an observer, and depending on the relative locations of the jet, the surface, and the observer, only some of the jet noise sources may satisfy this requirement.
Ultra-narrow bandwidth voice coding
Holzrichter, John F [Berkeley, CA]; Ng, Lawrence C [Danville, CA]
2007-01-09
A system of removing excess information from a human speech signal and coding the remaining signal information, transmitting the coded signal, and reconstructing the coded signal. The system uses one or more EM wave sensors and one or more acoustic microphones to determine at least one characteristic of the human speech signal.
Crukley, Jeffery; Scollie, Susan D
2014-03-01
The purpose of this study was to determine the effects of hearing instruments set to Desired Sensation Level version 5 (DSL v5) hearing instrument prescription algorithm targets and equipped with directional microphones and digital noise reduction (DNR) on children's sentence recognition in noise performance and loudness perception in a classroom environment. Ten children (ages 8-17 years) with stable, congenital sensorineural hearing losses participated in the study. Participants were fitted bilaterally with behind-the-ear hearing instruments set to DSL v5 prescriptive targets. Sentence recognition in noise was evaluated using the Bamford-Kowal-Bench Speech in Noise Test (Niquette et al., 2003). Loudness perception was evaluated using a modified version of the Contour Test of Loudness Perception (Cox, Alexander, Taylor, & Gray, 1997). Children's sentence recognition in noise performance was significantly better when using directional microphones alone or in combination with DNR than when using omnidirectional microphones alone or in combination with DNR. Children's loudness ratings for sounds above 72 dB SPL were lowest when fitted with the DSL v5 Noise prescription combined with directional microphones. DNR use showed no effect on loudness ratings. Use of the DSL v5 Noise prescription with a directional microphone improved sentence recognition in noise performance and reduced loudness perception ratings for loud sounds relative to a typical clinical reference fitting with the DSL v5 Quiet prescription with no digital signal processing features enabled. Potential clinical strategies are discussed.
Beck, Christoph; Garreau, Guillaume; Georgiou, Julius
2016-01-01
Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes on the order of the size of an atom and, based on their neuronal anatomy, can locate acoustic stimuli with an accuracy of within 13°. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.
NASA Astrophysics Data System (ADS)
Vostrukhin, A. A.; Golovin, D. V.; Kozyrev, A. S.; Litvak, M. L.; Malakhov, A. V.; Mitrofanov, I. G.; Mokrousov, M. I.; Tomilina, T. M.; Bobrovnitskiy, Yu. I.; Grebennikov, A. S.; Laktionova, M. M.; Bakhtin, B. N.; Sotov, A. V.
2018-05-01
The results of testing a number of space-based detectors that contain PMTs or high-voltage electrodes for microphonic noise, which arises in the signal path due to external mechanical action, are presented. A method for the vibration isolation of instruments aboard a spacecraft is proposed to reduce their sensitivity to vibrations.
Acoustic investigation of wall jet over a backward-facing step using a microphone phased array
NASA Astrophysics Data System (ADS)
Perschke, Raimund F.; Ramachandran, Rakesh C.; Raman, Ganesh
2015-02-01
The acoustic properties of a wall jet over a hard-walled backward-facing step of aspect ratios 6, 3, 2, and 1.5 are studied using a 24-channel microphone phased array at Mach numbers up to M=0.6. The Reynolds number based on inflow velocity and step height ranges from Reh = 3.0 × 10^4 to 7.2 × 10^5. Flow without and with side walls is considered. The experimental setup is open in the wall-normal direction and the expansion ratio is effectively 1. In the case of flow through a duct, symmetry of the flow in the spanwise direction is lost downstream of separation at all but the largest aspect ratio, as revealed by oil-paint flow visualization. Hydrodynamic scattering of turbulence from the trailing edge of the step contributes significantly to the radiated sound. Reflection of acoustic waves from the bottom plate results in a modulation of the power spectral densities. Acoustic source localization was conducted using the phased array. Convective mean-flow effects on the apparent source origin were assessed by placing a loudspeaker underneath a perforated flat plate and evaluating the displacement of the beamforming peak with inflow Mach number. Two source mechanisms are found near the step. One is due to the interaction of the turbulent wall jet with the convex edge of the step. Free-stream turbulence sound is found to peak downstream of the step. The presence of the side walls increases the free-stream sound. Results of the flow visualization are correlated with the acoustic source maps. Trailing-edge sound and free-stream turbulence sound can be discriminated using source localization.
Spherical beamforming for spherical array with impedance surface
NASA Astrophysics Data System (ADS)
Tontiwattanakul, Khemapat
2018-01-01
Spherical microphone array beamforming has been a popular research topic in recent years. Owing to their isotropic beam pattern in three-dimensional space over a useful frequency range, such arrays are widely used in many applications, including sound field recording, acoustic beamforming, and noise source localisation. The body of a spherical array is usually considered perfectly rigid. A sound field captured by the sensors on a spherical array can be decomposed into a series of spherical harmonics. In noise source localisation, the amplitude density of sound sources is estimated and illustrated by means of colour maps. In this work, a rigid spherical array covered by fibrous materials is studied via numerical simulation and the performance of the spherical beamforming is discussed.
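For readers unfamiliar with the decomposition step, the following minimal sketch expands pressures sampled on a spherical array into spherical harmonic coefficients by weighted quadrature. The sensor layout, quadrature weights, and truncation order are assumptions, and the rigid-sphere radial (modal strength) compensation used in practice is omitted for brevity.

```python
import numpy as np
from scipy.special import sph_harm

def sh_coefficients(p, theta, phi, weights, order):
    """Spherical-harmonic coefficients p_nm from pressures p sampled at
    sensor directions (theta: azimuth, phi: colatitude) with quadrature weights."""
    coeffs = {}
    for n in range(order + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(m, n, theta, phi)          # scipy order: (m, n, azimuth, colatitude)
            coeffs[(n, m)] = np.sum(weights * p * np.conj(Y))
    return coeffs

# Hypothetical 32-sensor random layout with uniform weights:
rng = np.random.default_rng(0)
Q = 32
theta = rng.uniform(0, 2 * np.pi, Q)       # azimuth angles
phi = np.arccos(rng.uniform(-1, 1, Q))     # colatitude angles
p = rng.standard_normal(Q) + 1j * rng.standard_normal(Q)   # complex pressures at one frequency
c_nm = sh_coefficients(p, theta, phi, np.full(Q, 4 * np.pi / Q), order=3)
```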
Energy and Power Spectra of Thunder in the Magdalena Mountains, Central New Mexico
NASA Astrophysics Data System (ADS)
Johnson, R. L.; Johnson, J. B.; Arechiga, R. O.; Michnovicz, J. C.; Edens, H. E.; Rison, W.
2011-12-01
Thunder is generated primarily by heating and expansion of the atmosphere around a lightning channel and by charge relaxation within a cloud. Broadband acoustic studies are important for inferring dynamic charge behavior during and after lightning events. During the Summer monsoon seasons of 2009-2011, we deployed networks of 3-5 stations consisting of broadband (0.01 to 500 Hz) acoustic arrays and audio microphones in the Magdalena Mountains in central New Mexico. We utilize Lightning Mapping Array (LMA) data for accurate timing of lightning events within a 10 km radius of our network. Unlike the LMA, which detects VHF signals from breakdown processes, thunder signals may be used to observe charge dynamics and thermal shocking of the atmosphere. Previous investigations show that thunder spectral content may distinguish between electrostatic and thermal heating processes. We collected extensive datasets in terms of number of independent broadband sensors (up to 20), number of observed flashes (hundreds from multiple storms), and available coincident LMA data. We use infrasound and audio data to quantify total acoustic energy produced at lightning sources in various frequency bands. We attribute the spectral content and intensity of thunder signals to source characteristics, sensor locations, propagation effects, and noise. We observe variations in acoustic energy for both entire storm systems and individual lightning flashes. We propose that some variations may be related to the type of lightning flash and that spectral content is important for distinguishing between thunder generation mechanisms.
Reconstruction of sound source signal by analytical passive TR in the environment with airflow
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu
2017-03-01
In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data to reveal the noise source generation mechanism, analyze acoustic fatigue, and guide measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then fed to the AP-TR, which reconstructs more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct the sound source signal in 3D space in an environment with airflow, as an alternative to numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers were conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming for reconstructing the sound source signal is also presented.
Implementation of the Phase Difference Trace Function for a Circular Array.
1980-06-01
[Garbled excerpt of the report's table of contents. Recoverable section headings: B. Microphones; C. Preamplifier/Filter; D. Phase Shifter. Listed figures include: Photographs; Microphone Specifications; Preamplifier/Filter Schematic; Preamplifier/Filter Printed Circuit Board; Phase Shifter Schematic; and a truncated Phase Shifter entry.]
Inconspicuous echolocation in hoary bats (Lasiurus cinereus)
Aaron J. Corcoran; Theodore J. Weller
2018-01-01
Echolocation allows bats to occupy diverse nocturnal niches. Bats almost always use echolocation, even when other sensory stimuli are available to guide navigation. Here, using arrays of calibrated infrared cameras and ultrasonic microphones, we demonstrate that hoary bats (Lasiurus cinereus) use previously unknown echolocation behaviours that...
Directional hearing aid using hybrid adaptive beamformer (HAB) and binaural ITE array
NASA Astrophysics Data System (ADS)
Shaw, Scott T.; Larow, Andy J.; Gibian, Gary L.; Sherlock, Laguinn P.; Schulein, Robert
2002-05-01
A directional hearing aid algorithm called the Hybrid Adaptive Beamformer (HAB), developed for NIH/NIA, can be applied to many different microphone array configurations. In this project the HAB algorithm was applied to a new array employing in-the-ear microphones at each ear (HAB-ITE), to see if previous HAB performance could be achieved with a more cosmetically acceptable package. With diotic output, the average benefit in threshold SNR was 10.9 dB for three hard-of-hearing (HoH) and 11.7 dB for five normal-hearing subjects. These results are slightly better than previous results of equivalent tests with a 3-in. array. With an innovative binaural fitting, a small benefit beyond that provided by diotic adaptive beamforming was observed: 12.5 dB for HoH and 13.3 dB for normal-hearing subjects, a 1.6 dB improvement over the diotic presentation. Subjectively, the binaural fitting preserved binaural hearing abilities, giving the user a sense of space and providing left-right localization. Thus the goal of creating an adaptive beamformer that simultaneously provides excellent noise reduction and binaural hearing was achieved. Further work remains before the HAB-ITE can be incorporated into a real product, including optimizing binaural adaptive beamforming and integrating the concept with other technologies to produce a viable product prototype. [Work supported by NIH/NIDCD.]
Langley Aerospace Research Summer Scholars (LARSS) Scholars Pres
2013-08-07
250 students participated in the Langley Aerospace Research Summer Scholars (LARSS) presentations, which covered topics including 3D modeling of STARBUKS calibration components in the National Transonic Facility, a hypersonic aerodynamic inflatable decelerator, and optimization of a microphone-based array for flight testing. Reid Center, LaRC, Hampton, VA.
Micro acoustic spectrum analyzer
Schubert, W. Kent; Butler, Michael A.; Adkins, Douglas R.; Anderson, Larry F.
2004-11-23
A micro acoustic spectrum analyzer for determining the frequency components of a fluctuating sound signal comprises a microphone to pick up the fluctuating sound signal and produce an alternating current electrical signal; at least one microfabricated resonator, each resonator having a different resonant frequency, that vibrate in response to the alternating current electrical signal; and at least one detector to detect the vibration of the microfabricated resonators. The micro acoustic spectrum analyzer can further comprise a mixer to mix a reference signal with the alternating current electrical signal from the microphone to shift the frequency spectrum to a frequency range that is better matched to the resonant frequencies of the microfabricated resonators. The micro acoustic spectrum analyzer can be designed specifically for portability, size, cost, accuracy, speed, power requirements, and use in a harsh environment. The micro acoustic spectrum analyzer is particularly suited for applications where size, accessibility, and power requirements are limited, such as the monitoring of industrial equipment and processes, detection of security intrusions, or evaluation of military threats.
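The mixing step described in the abstract is a standard heterodyne operation; a minimal sketch is shown below, assuming an illustrative sample rate, input tone, reference frequency, and resonator band (none of these values come from the patent).

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100_000                              # sample rate (assumed), Hz
t = np.arange(0, 0.05, 1 / fs)
mic = np.sin(2 * np.pi * 2_000 * t)       # 2 kHz component picked up by the microphone

f_ref = 10_000                            # reference oscillator (assumed), Hz
mixed = mic * np.cos(2 * np.pi * f_ref * t)   # mixing creates products at f_ref ± 2 kHz

# Keep the upper sideband near 12 kHz, a band assumed to match the resonators
b, a = butter(4, [11_000, 13_000], btype='bandpass', fs=fs)
shifted = filtfilt(b, a, mixed)           # spectrum-shifted signal driving the resonators
```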
Distant Speech Recognition Using a Microphone Array Network
NASA Astrophysics Data System (ADS)
Nakano, Alberto Yoshihiro; Nakagawa, Seiichi; Yamamoto, Kazumasa
In this work, spatial information consisting of the position and orientation angle of an acoustic source is estimated by an artificial neural network (ANN). The estimated position of a speaker in an enclosed space is used to refine the estimated time delays for a delay-and-sum beamformer, thus enhancing the output signal. On the other hand, the orientation angle is used to restrict the lexicon used in the recognition phase, assuming that the speaker faces a particular direction while speaking. To compensate for the effect of the transmission channel inside a short frame analysis window, a new cepstral mean normalization (CMN) method based on a Gaussian mixture model (GMM) is investigated and shows better performance than conventional CMN for short utterances. The performance of the proposed method is evaluated through Japanese digit/command recognition experiments.
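A minimal sketch of the delay-and-sum step, assuming known microphone coordinates and a source position such as the ANN would supply; fractional-sample delays are rounded to whole samples for simplicity, which a practical implementation would refine.

```python
import numpy as np

def delay_and_sum(signals, mic_pos, src_pos, fs, c=343.0):
    """Align each channel to the estimated source position and average.
    signals: (M, N) array of microphone samples; mic_pos: (M, 3); src_pos: (3,)."""
    dists = np.linalg.norm(mic_pos - src_pos, axis=1)
    delays = (dists - dists.min()) / c                  # relative propagation delays, s
    shifts = np.round(delays * fs).astype(int)          # integer-sample approximation
    M, N = signals.shape
    out = np.zeros(N)
    for m in range(M):
        out[:N - shifts[m]] += signals[m, shifts[m]:]   # advance later arrivals to align them
    return out / M
```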
Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans.
Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth
2006-10-01
This letter describes a data acquisition setup for recording and processing running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and on obtaining a good signal-to-noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber-optical microphones and a noise-canceling filter. Two noise cancellation methods are described, including a novel approach using a pulse-sequence-specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given.
Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans (L)
Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth
2007-01-01
This letter describes a data acquisition setup for recording and processing running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and on obtaining a good signal-to-noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber-optical microphones and a noise-canceling filter. Two noise cancellation methods are described, including a novel approach using a pulse-sequence-specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given. PMID:17069275
Microphone Handling Noise: Measurements of Perceptual Threshold and Effects on Audio Quality
Kendrick, Paul; Jackson, Iain R.; Fazenda, Bruno M.; Cox, Trevor J.; Li, Francis F.
2015-01-01
A psychoacoustic experiment was carried out to test the effects of microphone handling noise on perceived audio quality. Handling noise is a problem affecting both amateurs using their smartphones and cameras, as well as professionals using separate microphones and digital recorders. The noises used for the tests were measured from a variety of devices, including smartphones, laptops and handheld microphones. The signal features that characterise these noises are analysed and presented. The sounds include various types of transient, impact noises created by tapping or knocking devices, as well as more sustained sounds caused by rubbing. During the perceptual tests, listeners auditioned speech podcasts and were asked to rate the degradation of any unwanted sounds they heard. A representative design test methodology was developed that tried to encourage everyday rather than analytical listening. Signal-to-noise ratio (SNR) of the handling noise events was shown to be the best predictor of quality degradation. Other factors such as noise type or background noise in the listening environment did not significantly affect quality ratings. Podcast, microphone type and reproduction equipment were found to be significant but only to a small extent. A model allowing the prediction of degradation from the SNR is presented. The SNR threshold at which 50% of subjects noticed handling noise was found to be 4.2 ± 0.6 dBA. The results from this work are important for the understanding of our perception of impact sound and resonant noises in recordings, and will inform the future development of an automated predictor of quality for handling noise. PMID:26473498
Recording high quality speech during tagged cine-MRI studies using a fiber optic microphone.
NessAiver, Moriel S; Stone, Maureen; Parthasarathy, Vijay; Kahana, Yuvi; Paritsky, Alexander; Paritsky, Alex
2006-01-01
To investigate the feasibility of obtaining high quality speech recordings during cine imaging of tongue movement using a fiber optic microphone. A Complementary Spatial Modulation of Magnetization (C-SPAMM) tagged cine sequence triggered by an electrocardiogram (ECG) simulator was used to image a volunteer while speaking the syllable pairs /a/-/u/, /i/-/u/, and the words "golly" and "Tamil" in sync with the imaging sequence. A noise-canceling, optical microphone was fastened approximately 1-2 inches above the mouth of the volunteer. The microphone was attached via optical fiber to a laptop computer, where the speech was sampled at 44.1 kHz. A reference recording of gradient activity with no speech was subtracted from target recordings. Good quality speech was discernible above the background gradient sound using the fiber optic microphone without reference subtraction. The audio waveform of gradient activity was extremely stable and reproducible. Subtraction of the reference gradient recording further reduced gradient noise by roughly 21 dB, resulting in exceptionally high quality speech waveforms. It is possible to obtain high quality speech recordings using an optical microphone even during exceptionally loud cine imaging sequences. This opens up the possibility of more elaborate MRI studies of speech including spectral analysis of the speech signal in all types of MRI.
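A rough sketch of the reference-subtraction idea described above, with a cross-correlation alignment added for robustness; this is an illustration of the concept, not the authors' exact processing.

```python
import numpy as np

def subtract_reference(target, reference):
    """Align a gradient-only reference recording to the target recording by
    cross-correlation, then subtract it to suppress the repetitive scanner noise."""
    n = min(len(target), len(reference))
    corr = np.correlate(target[:n], reference[:n], mode='full')
    lag = np.argmax(corr) - (n - 1)                  # best alignment in samples
    ref = np.roll(reference[:n], lag)                # circular shift; adequate for a repetitive gradient waveform
    scale = np.dot(target[:n], ref) / np.dot(ref, ref)   # least-squares amplitude match
    return target[:n] - scale * ref
```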
Maneuver Acoustic Flight Test of the Bell 430 Helicopter Data Report
NASA Technical Reports Server (NTRS)
Watts, Michael E.; Greenwood, Eric; Smith, Charles D.; Snider, Royce; Conner, David A.
2014-01-01
A cooperative flight test by NASA, Bell Helicopter, and the U.S. Army to characterize the steady-state acoustics and measure the maneuver noise of a Bell Helicopter 430 aircraft was accomplished. The test occurred during June/July 2011 at Eglin Air Force Base, Florida. This test gathered a total of 410 test points over 10 test days and compiled an extensive database of dynamic maneuver measurements. Three microphone arrays with up to 31 microphones each were used to acquire acoustic data. Aircraft data included Differential Global Positioning System, aircraft state, and rotor state information. This paper provides an overview of the test and documents the data acquired.
Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation
NASA Technical Reports Server (NTRS)
Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.
2012-01-01
Background noise due to flow in wind tunnels contaminates desired data by decreasing the Signal-to-Noise Ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The technique proposed modifies the classical processing configuration based on the cross-correlation between the reference and primary microphone. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
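A simplified sketch of the processing idea: estimate the background-noise delay between the reference and primary channels from the cross-correlation peak, delay the reference accordingly, and run a standard normalized-LMS canceller. The filter length and step size are arbitrary, and the cross-correlation windowing described in the paper is reduced here to a single peak pick.

```python
import numpy as np

def estimate_delay(reference, primary):
    """Sample delay of the background noise from the cross-correlation peak."""
    corr = np.correlate(primary, reference, mode='full')
    return np.argmax(np.abs(corr)) - (len(reference) - 1)

def nlms_cancel(reference, primary, delay, L=64, mu=0.1, eps=1e-8):
    """Normalized LMS: predict the background noise in the primary channel from
    the delayed reference and subtract it, leaving an estimate of the desired signal."""
    ref = np.roll(reference, delay) if delay > 0 else reference
    w = np.zeros(L)
    out = np.zeros(len(primary))
    for n in range(L, len(primary)):
        x = ref[n - L:n][::-1]             # most recent reference samples
        e = primary[n] - w @ x             # error = desired-signal estimate
        w += mu * e * x / (x @ x + eps)    # normalized LMS weight update
        out[n] = e
    return out
```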
Bird population density estimated from acoustic signals
Dawson, D.K.; Efford, M.G.
2009-01-01
Many animal species are detected primarily by sound. Although songs, calls and other sounds are often used for population assessment, as in bird point counts and hydrophone surveys of cetaceans, there are few rigorous methods for estimating population density from acoustic data. 2. The problem has several parts - distinguishing individuals, adjusting for individuals that are missed, and adjusting for the area sampled. Spatially explicit capture-recapture (SECR) is a statistical methodology that addresses jointly the second and third parts of the problem. We have extended SECR to use uncalibrated information from acoustic signals on the distance to each source. 3. We applied this extension of SECR to data from an acoustic survey of ovenbird Seiurus aurocapilla density in an eastern US deciduous forest with multiple four-microphone arrays. We modelled average power from spectrograms of ovenbird songs measured within a window of 0.7 s duration and frequencies between 4200 and 5200 Hz. 4. The resulting estimates of the density of singing males (0.19 ha⁻¹, SE 0.03 ha⁻¹) were consistent with estimates of the adult male population density from mist-netting (0.36 ha⁻¹, SE 0.12 ha⁻¹). The fitted model predicts sound attenuation of 0.11 dB m⁻¹ (SE 0.01 dB m⁻¹) in excess of losses from spherical spreading. 5. Synthesis and applications. Our method for estimating animal population density from acoustic signals fills a gap in the census methods available for visually cryptic but vocal taxa, including many species of bird and cetacean. The necessary equipment is simple and readily available; as few as two microphones may provide adequate estimates, given spatial replication. The method requires that individuals detected at the same place are acoustically distinguishable and all individuals vocalize during the recording interval, or that the per capita rate of vocalization is known. We believe these requirements can be met, with suitable field methods, for a significant number of songbird species. © 2009 British Ecological Society.
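The fitted propagation model combines spherical spreading with an excess attenuation of about 0.11 dB m⁻¹; a small sketch of that received-level model follows, with the 1 m reference distance and the example numbers being assumptions rather than values from the study.

```python
import numpy as np

def received_level(source_level_db, distance_m, excess_db_per_m=0.11, ref_m=1.0):
    """Expected received sound level: spherical spreading (20*log10 of distance)
    plus a linear excess-attenuation term, relative to a 1 m reference distance."""
    spreading = 20.0 * np.log10(distance_m / ref_m)
    return source_level_db - spreading - excess_db_per_m * (distance_m - ref_m)

# e.g. a hypothetical song at 90 dB re 1 m heard 50 m away:
print(received_level(90.0, 50.0))   # about 90 - 34.0 - 5.4 = 50.6 dB
```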
Visualizing Sound Directivity via Smartphone Sensors
ERIC Educational Resources Information Center
Hawley, Scott H.; McClain, Robert E., Jr.
2018-01-01
When Yang-Hann Kim received the Rossing Prize in Acoustics Education at the 2015 meeting of the Acoustical Society of America, he stressed the importance of offering visual depictions of sound fields when teaching acoustics. Often visualization methods require specialized equipment such as microphone arrays or scanning apparatus. We present a…
The effect of hearing aid technologies on listening in an automobile.
Wu, Yu-Hsiang; Stangl, Elizabeth; Bentler, Ruth A; Stanziola, Rachel W
2013-06-01
Communication while traveling in an automobile often is very difficult for hearing aid users. This is because the automobile/road noise level is usually high, and listeners/drivers often do not have access to visual cues. Since the talker of interest usually is not located in front of the listener/driver, conventional directional processing that places the directivity beam toward the listener's front may not be helpful and, in fact, could have a negative impact on speech recognition (when compared to omnidirectional processing). Recently, technologies have become available in commercial hearing aids that are designed to improve speech recognition and/or listening effort in noisy conditions where talkers are located behind or beside the listener. These technologies include (1) a directional microphone system that uses a backward-facing directivity pattern (Back-DIR processing), (2) a technology that transmits audio signals from the ear with the better signal-to-noise ratio (SNR) to the ear with the poorer SNR (Side-Transmission processing), and (3) a signal processing scheme that suppresses the noise at the ear with the poorer SNR (Side-Suppression processing). The purpose of the current study was to determine the effect of (1) conventional directional microphones and (2) newer signal processing schemes (Back-DIR, Side-Transmission, and Side-Suppression) on listener's speech recognition performance and preference for communication in a traveling automobile. A single-blinded, repeated-measures design was used. Twenty-five adults with bilateral symmetrical sensorineural hearing loss aged 44 through 84 yr participated in the study. The automobile/road noise and sentences of the Connected Speech Test (CST) were recorded through hearing aids in a standard van moving at a speed of 70 mph on a paved highway. The hearing aids were programmed to omnidirectional microphone, conventional adaptive directional microphone, and the three newer schemes. CST sentences were presented from the side and back of the hearing aids, which were placed on the ears of a manikin. The recorded stimuli were presented to listeners via earphones in a sound-treated booth to assess speech recognition performance and preference with each programmed condition. Compared to omnidirectional microphones, conventional adaptive directional processing had a detrimental effect on speech recognition when speech was presented from the back or side of the listener. Back-DIR and Side-Transmission processing improved speech recognition performance (relative to both omnidirectional and adaptive directional processing) when speech was from the back and side, respectively. The performance with Side-Suppression processing was better than with adaptive directional processing when speech was from the side. The participants' preferences for a given processing scheme were generally consistent with speech recognition results. The finding that performance with adaptive directional processing was poorer than with omnidirectional microphones demonstrates the importance of selecting the correct microphone technology for different listening situations. The results also suggest the feasibility of using hearing aid technologies to provide a better listening experience for hearing aid users in automobiles. American Academy of Audiology.
Wang, Yan; Chen, Kean
2017-10-01
A spherical microphone array has proved effective in reconstructing an enclosed sound field by a superposition of spherical wave functions in the Fourier domain. It allows successful reconstructions in the region surrounding the array, but the accuracy degrades with distance. In order to extend the effective reconstruction to the entire cavity, a plane-wave basis in the space domain is used, owing to its non-decaying propagating characteristic, and compared with the conventional spherical wave function method in a low-frequency sound field within a cylindrical cavity. The sensitivity to measurement noise and the effects of the number of plane waves and of the measurement positions are discussed. Simulations show that under the same measurement conditions, the plane wave function method is superior in terms of reconstruction accuracy and data processing efficiency; that is, imaging of the entire sound field can be achieved in a single calculation, instead of translating local sets of coefficients with respect to every measurement position into a global set. An experiment was conducted inside an aircraft cabin mock-up for validation. Additionally, this method provides an alternative way to recover the coefficients of high-order spherical wave functions in a global coordinate system without coordinate translations with respect to local origins.
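A minimal sketch of the plane-wave expansion idea: fit complex amplitudes of a set of plane waves to pressures measured at the microphone positions by regularized least squares, then evaluate the field anywhere in the cavity. The wave directions, regularization, geometry, and frequency below are arbitrary choices, not those of the study.

```python
import numpy as np

def plane_wave_fit(p_meas, mic_pos, directions, k, reg=1e-3):
    """Solve A a ≈ p for plane-wave amplitudes a, where A[m, l] = exp(-j k d_l · r_m)."""
    A = np.exp(-1j * k * mic_pos @ directions.T)        # (M mics, L plane waves)
    AH = A.conj().T
    return np.linalg.solve(AH @ A + reg * np.eye(A.shape[1]), AH @ p_meas)

def reconstruct(points, directions, k, amps):
    """Evaluate the fitted plane-wave field at arbitrary points in the cavity."""
    return np.exp(-1j * k * points @ directions.T) @ amps

# Hypothetical setup: 32 mics on a small sphere, 64 plane-wave directions, 200 Hz
rng = np.random.default_rng(1)
k = 2 * np.pi * 200 / 343.0
mic_pos = rng.standard_normal((32, 3)) * 0.05
dirs = rng.standard_normal((64, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
p_meas = reconstruct(mic_pos, dirs, k, rng.standard_normal(64) * 0.1)  # synthetic measurements
amps = plane_wave_fit(p_meas, mic_pos, dirs, k)
```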
SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization
Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah
2014-01-01
Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
NASA Technical Reports Server (NTRS)
Costanza, Bryan T.; Horne, William C.; Schery, S. D.; Babb, Alex T.
2011-01-01
The Aero-Physics Branch at NASA Ames Research Center utilizes a 32- by 48-inch subsonic wind tunnel for aerodynamics research. The feasibility of acquiring acoustic measurements with a phased microphone array was recently explored. Acoustic characterization of the wind tunnel was carried out with a floor-mounted 24-element array and two ceiling-mounted speakers. The minimum speaker level for accurate level measurement was evaluated for various tunnel speeds up to a Mach number of 0.15 and streamwise speaker locations. A variety of post-processing procedures, including conventional beamforming and deconvolutional processing such as TIDY, were used. The speaker measurements, with and without flow, were used to compare actual versus simulated in-flow speaker calibrations. Data for wind-off speaker sound and wind-on tunnel background noise were found valuable for predicting sound levels for which the speakers were detectable when the wind was on. Speaker sources were detectable 2 - 10 dB below the peak background noise level with conventional data processing. The effectiveness of background noise cross-spectral matrix subtraction was assessed and found to improve the detectability of test sound sources by approximately 10 dB over a wide frequency range.
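A minimal sketch of the cross-spectral-matrix subtraction step assessed above, followed by conventional beamforming with monopole steering vectors; the geometry, frequency, and weighting are assumptions rather than the facility's actual processing.

```python
import numpy as np

def csm_subtract_beamform(C_meas, C_bkg, mic_pos, grid, k):
    """Conventional (delay-and-sum) beamforming on a background-subtracted CSM.
    C_meas, C_bkg: (M, M) cross-spectral matrices at one frequency; grid: (G, 3) focus points."""
    C = C_meas - C_bkg                                   # remove tunnel background noise
    power = np.empty(len(grid))
    for g, x in enumerate(grid):
        r = np.linalg.norm(mic_pos - x, axis=1)
        v = np.exp(-1j * k * r) / r                      # monopole steering vector
        w = v / np.linalg.norm(v) ** 2                   # unit-gain weighting at the focus point
        power[g] = np.real(w.conj() @ C @ w)
    return power
```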
Veligdan, James T.
2000-11-14
A microphone for detecting sound pressure waves includes a laser resonator having a laser gain material aligned coaxially between a pair of first and second mirrors for producing a laser beam. A reference cell is disposed between the laser material and one of the mirrors for transmitting a reference portion of the laser beam between the mirrors. A sensing cell is disposed between the laser material and one of the mirrors, and is laterally displaced from the reference cell for transmitting a signal portion of the laser beam, with the sensing cell being open for receiving the sound waves. A photodetector is disposed in optical communication with the first mirror for receiving the laser beam, and produces an acoustic signal therefrom for the sound waves.
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Karnofsky, K. F.; Stevens, K. N.; Alakel, M. N.
1983-12-01
The use of multiple sensors to transduce speech was investigated. A database of speech and noise was collected from a number of transducers located on and around the head of the speaker. The transducers included pressure, first-order gradient, and second-order gradient microphones and an accelerometer. The effort analyzed these data and evaluated the performance of a multiple-sensor configuration. The conclusion was that multiple-transducer configurations can provide a signal containing more usable speech information than that provided by a microphone alone.
Chung, King; Nelson, Lance; Teske, Melissa
2012-09-01
The purpose of this study was to investigate whether a multichannel adaptive directional microphone and a modulation-based noise reduction algorithm could enhance cochlear implant performance in reverberant noise fields. A hearing aid was modified to output electrical signals (ePreprocessor) and a cochlear implant speech processor was modified to receive electrical signals (eProcessor). The ePreprocessor was programmed to flat frequency response and linear amplification. Cochlear implant listeners wore the ePreprocessor-eProcessor system in three reverberant noise fields: 1) one noise source with variable locations; 2) three noise sources with variable locations; and 3) eight evenly spaced noise sources from 0° to 360°. Listeners' speech recognition scores were tested when the ePreprocessor was programmed to omnidirectional microphone (OMNI), omnidirectional microphone plus noise reduction algorithm (OMNI + NR), and adaptive directional microphone plus noise reduction algorithm (ADM + NR). They were also tested with their own cochlear implant speech processor (CI_OMNI) in the three noise fields. Additionally, listeners rated overall sound quality preferences on recordings made in the noise fields. Results indicated that ADM+NR produced the highest speech recognition scores and the most preferable rating in all noise fields. Factors requiring attention in the hearing aid-cochlear implant integration process are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Human Action Recognition Using Wireless Wearable In-Ear Microphone
NASA Astrophysics Data System (ADS)
Nishimura, Jun; Kuroda, Tadahiro
To realize ubiquitous monitoring of eating habits, we proposed the use of sounds sensed by a wireless wearable microphone placed in the ear. A prototype wireless wearable in-ear microphone was developed by utilizing a common Bluetooth headset. We proposed a robust chewing-action recognition algorithm which consists of two recognition stages: a “chew-like” signal detection stage and a chewing sound verification stage. We also provide empirical results on recognizing other actions from in-ear sound, including swallowing, coughing, and belching. An average chewing-count error rate of 1.93% is achieved. Lastly, chewing sound mapping is proposed as a new prototypical approach to provide additional intuitive feedback on food groups, making it possible to infer eating habits in a daily life context.
Jenkins, Herman A; Uhler, Kristin
2012-01-01
To compare the speech understanding abilities of cochlear implant listeners using 2 microphone technologies, the Otologics fully implantable Carina and the Cochlear Freedom microphones. Feasibility study using direct comparison of the 2 microphones, nonrandomized and nonblinded within-case studies. Tertiary referral center hospital outpatient clinic. Four subjects with greater than 1 year of unilateral listening experience with the Freedom Cochlear Implant and a CNC word score higher than 40%. A Carina microphone coupled to a percutaneous plug was implanted on the ipsilateral side of the cochlear implant. Two months were allowed for healing before connecting to the Carina microphone. The percutaneous plug was connected to a body-worn external processor with output leads inserted into the auxiliary port of the Freedom processor. Subjects were instructed to use each of the 2 microphones for half of their daily implant use. Outcome measures were aided pure-tone thresholds, consonant-nucleus-consonant (CNC) words, the Bamford-Kowal-Bench Speech in Noise test (BKB-SIN), and the Abbreviated Profile of Hearing Aid Benefit. All subjects had sound perception using both microphones. The loudness and quality of the sound were judged to be poorer with the Carina by the first 2 subjects. The latter 2 subjects demonstrated essential equivalence between the microphones, with the exception of the Abbreviated Profile of Hearing Aid Benefit, which reported a greater percentage of problems for the Carina in the background-noise situation for subject 0011-003PP. CNC word scores were better with the Freedom than the Carina in all 4 subjects. The latter 2 subjects showed improved speech perception abilities with the Carina compared with the first 2. The BKB-SIN showed consistently better results with the Freedom in noise. Early observations indicate that it is potentially feasible to use the fully implanted Carina microphone with the Freedom Cochlear Implant. The authors anticipate that outcomes would improve as more knowledge is gained in signal processing and with the fabrication of an integrated device.
Real-time algorithm for acoustic imaging with a microphone array.
Huang, Xun
2009-05-01
The acoustic phased array has become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its high cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended here to frequency-domain array processing. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. Expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected immediately.
Single-point nonlinearity indicators for the propagation of high-amplitude acoustic signals
NASA Astrophysics Data System (ADS)
Falco, Lauren E.
In the study of jet noise, prediction schemes and impact assessment models based on linear acoustic theory are not always sufficient to describe the character of the radiated noise. Typically, a spectral comparison method is employed to determine whether nonlinear effects are important. A power spectral density recorded at one propagation distance is extrapolated to a different distance using linear theory and compared with a measurement at the second distance. Discrepancies between the measured and extrapolated spectra are often attributed to nonlinearity. There are many other factors that can influence the outcome of this operation, though, including meteorological factors such as wind and temperature gradients, ground reflections, and uncertainty in the source location. Therefore, an improved method for assessing the importance of nonlinearity that requires only a single measurement is desirable. This work examines four candidate single-point nonlinearity indicators derived from the quantity Q_{p²p} found in the work of Morfey and Howell. These include: Q_neg/Q_pos, a ratio designed to test for conservation of energy; Q_pos/p_rms³, a bandlimited quantity that describes energy lost from a certain part of the spectrum due to nonlinearity; the spectral Gol'dberg number Γ_s, a dimensionless quantity whose sign indicates the direction of nonlinear energy transfer and whose magnitude can be used to compare the relative importance of linear and nonlinear effects; and the coherence indicator γ_Q, which also denotes the direction of nonlinear energy transfer and which is bounded between -1 and 1. Two sets of experimental data are presented. The first was recorded in a plane wave tube built of 2-inch inner-diameter PVC pipe with four evenly-spaced microphones flush-mounted with the inside wall of the tube. One or two compression drivers were used as the sound source, and an anechoic termination made of fiberglass served to minimize reflections from the far end of the tube. Both single-frequency signals and band-limited noise were used as sources, and waveforms were recorded at all four propagation distances. The second set of data was obtained at the model-scale jet facility at the University of Mississippi's National Center for Physical Acoustics. A computer-controlled microphone boom was constructed to hold an array of six microphones. The array was rotated about the presumed location of the acoustic source center (4 jet diameters downstream of the nozzle exit), and two stationary microphones were mounted on the walls. Measurements were made for several jet conditions; data presented here represent Mach 0.85 and Mach 2 conditions. Application of the four candidate nonlinearity indicators to the experimental data reveals that each indicator has advantages and disadvantages. Q_neg/Q_pos does not detect the presence of shocks as postulated, but it does conform to expectations in the shock-free region and supports the use of Q_pos as an indicator. The main advantage of Q_pos/p_rms³ is that it can be used for band-limited measurements. Increased indicator values are seen for signals with higher source frequencies and amplitudes that are expected to undergo stronger nonlinear evolution. However, no physical meaning can yet be derived from the numerical value of the indicator. The spectral Gol'dberg number Γ_s is the most promising of the candidate quantities.
It has the ability to indicate the direction of nonlinear energy transfer as well as provide a comparison between the strengths of linear and nonlinear effects. These attributes allow it to be used to qualitatively predict the evolution of a spectrum. The coherence indicator γ_Q also specifies the direction of nonlinear energy transfer, but its numerical value holds less meaning. However, it is bounded between -1 and 1, so values near zero denote very weak or no nonlinearity, and values near -1 or 1 denote strong nonlinearity. Further, because it is bounded, it does not become unstable for spectral components beneath the system noise floor.
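For concreteness, a rough sketch (not the author's processing) of the underlying Morfey-Howell-type quantity: the quadspectrum, i.e. the imaginary part of the cross-spectral density between the squared pressure and the pressure, optionally normalized by p_rms³. The Welch parameters below are arbitrary, and sign and normalization conventions vary in the literature.

```python
import numpy as np
from scipy.signal import csd

def q_p2p(p, fs, nperseg=4096):
    """Quadspectrum-based nonlinearity quantity: Im{ S_{p^2, p}(f) }."""
    p = p - np.mean(p)                        # work with the fluctuating pressure
    f, S = csd(p ** 2, p, fs=fs, nperseg=nperseg)
    return f, np.imag(S)

def normalized_indicator(p, fs, nperseg=4096):
    """Band-by-band indicator scaled by p_rms^3 (one of several possible normalizations)."""
    f, Q = q_p2p(p, fs, nperseg)
    p_rms = np.sqrt(np.mean((p - np.mean(p)) ** 2))
    return f, Q / p_rms ** 3
```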
Analysis of the cochlear microphonic to a low-frequency tone embedded in filtered noise
Chertoff, Mark E.; Earl, Brian R.; Diaz, Francisco J.; Sorensen, Janna L.
2012-01-01
The cochlear microphonic was recorded in response to a 733 Hz tone embedded in noise that was high-pass filtered at 25 different frequencies. The amplitude of the cochlear microphonic increased as the high-pass cutoff frequency of the noise increased. The amplitude growth for a 60 dB SPL tone was steeper and saturated sooner than that of an 80 dB SPL tone. The growth for both signal levels, however, was not entirely cumulative with plateaus occurring at about 4 and 7 mm from the apex. A phenomenological model of the electrical potential in the cochlea that included a hair cell probability function and spiral geometry of the cochlea could account for both the slope of the growth functions and the plateau regions. This suggests that with high-pass-filtered noise, the cochlear microphonic recorded at the round window comes from the electric field generated at the source directed towards the electrode and not down the longitudinal axis of the cochlea. PMID:23145616
Drive-by large-region acoustic noise-source mapping via sparse beamforming tomography.
Tuna, Cagdas; Zhao, Shengkui; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2016-10-01
Environmental noise is a risk factor for human physical and mental health, demanding an efficient large-scale noise-monitoring scheme. The current technology, however, involves extensive sound pressure level (SPL) measurements at a dense grid of locations, making it impractical on a city-wide scale. This paper presents an alternative approach using a microphone array mounted on a moving vehicle to generate two-dimensional acoustic tomographic maps that yield the locations and SPLs of the noise-sources sparsely distributed in the neighborhood traveled by the vehicle. The far-field frequency-domain delay-and-sum beamforming output power values computed at multiple locations as the vehicle drives by are used as tomographic measurements. The proposed method is tested with acoustic data collected by driving an electric vehicle with a rooftop-mounted microphone array along a straight road next to a large open field, on which various pre-recorded noise-sources were produced by a loudspeaker at different locations. The accuracy of the tomographic imaging results demonstrates the promise of this approach for rapid, low-cost environmental noise-monitoring.
Mathevon, Nicolas; Dabelsteen, Torben; Blumenrath, Sandra H
2005-01-01
Birds often sing from high perches referred to as song posts. However, birds also listen and keep a lookout from these perches. We used a sound transmission experiment to investigate the changes for receiving and sending conditions that a territorial songbird may experience by moving upwards in the vegetation. Representative song elements of the blackcap Sylvia atricapilla were transmitted in a forest habitat in spring using a complete factorial design with natural transmission distances and speaker and microphone heights. Four aspects of sound degradation were quantified: signal-to-noise ratio, excess attenuation, distortion within the sounds determined as a blur ratio, and prolongation of the sounds with "tails" of echoes determined as a tail-to-signal ratio. All four measures indicated that degradation decreased with speaker and microphone height. However, the decrease was considerably higher for the microphone than for the speaker. This suggests that choosing high perches in a forest at spring results in more benefits to blackcaps in terms of improved communication conditions when they act as receivers than as senders.
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Dougherty, Robert P.; Walker, Bruce E.
2010-01-01
An in-duct beamforming technique for imaging rotating broadband fan sources has been used to evaluate the acoustic characteristics of a Foam-Metal Liner installed over-the-rotor of a low-speed fan. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. A duct wall-mounted phased array consisting of several rings of microphones was employed. The data are mathematically resampled in the fan rotating reference frame and subsequently used in a conventional beamforming technique. The steering vectors for the beamforming technique are derived from annular duct modes, so that effects of reflections from the duct walls are reduced.
Susceptibility study of audio recording devices to electromagnetic stimulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew S.; Grant, Steven L.; Beetner, Daryl G.
2014-02-01
Little research has been performed to study how intentional electromagnetic signals may couple into recording devices. An electromagnetic susceptibility study was performed to assess the response of an analog tape recorder, a digital video camera, a wired computer microphone, and a wireless microphone system to electromagnetic interference. Devices were subjected to electromagnetic stimulations in the frequency range of 1-990 MHz and field strengths up to 4.9 V/m. Carrier and message frequencies of the stimulation signals were swept, and the impacts of device orientation and antenna polarization were explored. Message signals coupled into all devices only when amplitude-modulated signals were used as stimulation signals. Test conditions that produced maximum sensitivity were highly specific to each device. Only narrow carrier frequency ranges could be used for most devices to couple messages into recordings. A basic detection technique using cross-correlation demonstrated the need for messages to be as long as possible to maximize message detection and minimize detection error. Analysis suggests that detectable signals could be coupled to these recording devices under realistic ambient conditions.
Female mice ultrasonically interact with males during courtship displays
Neunuebel, Joshua P; Taylor, Adam L; Arthur, Ben J; Egnor, SE Roian
2015-01-01
During courtship males attract females with elaborate behaviors. In mice, these displays include ultrasonic vocalizations. Ultrasonic courtship vocalizations were previously attributed to the courting male, despite evidence that both sexes produce virtually indistinguishable vocalizations. Because of this similarity, and the difficulty of assigning vocalizations to individuals, the vocal contribution of each individual during courtship is unknown. To address this question, we developed a microphone array system to localize vocalizations from socially interacting, individual adult mice. With this system, we show that female mice vocally interact with males during courtship. Males and females jointly increased their vocalization rates during chases. Furthermore, a female's participation in these vocal interactions may function as a signal that indicates a state of increased receptivity. Our results reveal a novel form of vocal communication during mouse courtship, and lay the groundwork for a mechanistic dissection of communication during social behavior. DOI: http://dx.doi.org/10.7554/eLife.06203.001 PMID:26020291
The Benefit of Remote Microphones Using Four Wireless Protocols.
Rodemerk, Krishna S; Galster, Jason A
2015-09-01
Many studies have reported the speech recognition benefits of a personal remote microphone system when used by adult listeners with hearing loss. The advance of wireless technology has allowed for many wireless audio transmission protocols. Some of these protocols interface with commercially available hearing aids. As a result, commercial remote microphone systems use a variety of different protocols for wireless audio transmission. It is not known how these systems compare, with regard to adult speech recognition in noise. The primary goal of this investigation was to determine the speech recognition benefits of four different commercially available remote microphone systems, each with a different wireless audio transmission protocol. A repeated-measures design was used in this study. Sixteen adults, ages 52 to 81 yr, with mild to severe sensorineural hearing loss participated in this study. Participants were fit with three different sets of bilateral hearing aids and four commercially available remote microphone systems (FM, 900 MHz, 2.4 GHz, and Bluetooth(®) paired with near-field magnetic induction). Speech recognition scores were measured by an adaptive version of the Hearing in Noise Test (HINT). The participants were seated both 6 and 12' away from the talker loudspeaker. Participants repeated HINT sentences with and without hearing aids and with four commercially available remote microphone systems in both seated positions with and without contributions from the hearing aid or environmental microphone (24 total conditions). The HINT SNR-50, or the signal-to-noise ratio required for correct repetition of 50% of the sentences, was recorded for all conditions. A one-way repeated measures analysis of variance was used to determine statistical significance of microphone condition. The results of this study revealed that use of the remote microphone systems statistically improved speech recognition in noise relative to unaided and hearing aid-only conditions across all four wireless transmission protocols at 6 and 12' away from the talker. Participants showed a significant improvement in speech recognition in noise when comparing four remote microphone systems with different wireless transmission methods to hearing aids alone. American Academy of Audiology.
Open photoacoustic cell x-ray detection
NASA Astrophysics Data System (ADS)
Bento, A. C.; Aguiar, M. M. F.; Vargas, H.; da Silva, M. D.; Bandeira, I. N.; Miranda, L. C. M.
1989-03-01
A simple open-cell configuration photoacoustic x-ray detector is experimentally demonstrated. The front air chamber of a commercial electret microphone is used as the transducer medium of conventional photoacoustics. The observed signal is well described by the thermal diffusion model for the photoacoustic signal.
Acoustic results of the Boeing model 360 whirl tower test
NASA Astrophysics Data System (ADS)
Watts, Michael E.; Jordan, David
1990-09-01
An evaluation is presented for whirl tower test results of the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of whirl-tower tests with theory indicates that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.
NASA Technical Reports Server (NTRS)
Pla, Frederic G.; Hu, Ziqiang; Sutliff, Daniel L.
1996-01-01
This report describes the Active Noise Cancellation (ANC) System designed by General Electric and tested in the NASA Lewis Research Center's (LERC) 48-inch Active Noise Control Fan (ANCF). The goal of this study is to assess the feasibility of using wall-mounted secondary acoustic sources and sensors within the duct of a high bypass turbofan aircraft engine for global active noise cancellation of fan tones. The GE ANC system is based on a modal control approach. A known acoustic mode propagating in the fan duct is canceled using an array of flush-mounted compact sound sources. The canceling modal signal is generated by a modal controller. Inputs to the controller are signals from a shaft encoder and from a microphone array which senses the residual acoustic mode in the duct. The key results are that the (6,0) mode was completely eliminated at the 920 Hz design frequency and substantially reduced elsewhere. The total tone power was reduced by 6.8 dB (out of a possible 9.8 dB). Farfield reductions of 15 dB (SPL) were obtained. The (4,0) and (4,1) modes were reduced simultaneously, yielding a 15 dB PWL decrease. The results indicate that global attenuation of PWL at the target frequency was obtained in the aft quadrant using an ANC actuator and sensor system totally contained within the duct. The quality of the results depended on precise mode generation. High spillover into spurious modes generated by the ANC actuator array caused less than optimum levels of PWL reduction. The variation in spillover is believed to be due to the calibration procedure, but must be confirmed in subsequent tests.
Best, Virginia; Mejia, Jorge; Freeston, Katrina; van Hoesel, Richard J; Dillon, Harvey
2015-01-01
Binaural beamformers are super-directional hearing aids created by combining microphone outputs from each side of the head. While they offer substantial improvements in SNR over conventional directional hearing aids, the benefits (and possible limitations) of these devices in realistic, complex listening situations have not yet been fully explored. In this study we evaluated the performance of two experimental binaural beamformers. Testing was carried out using a horizontal loudspeaker array. Background noise was created using recorded conversations. Performance measures included speech intelligibility, localization in noise, acceptable noise level, subjective ratings, and a novel dynamic speech intelligibility measure. Participants were 27 listeners with bilateral hearing loss, fitted with BTE prototypes that could be switched between conventional directional or binaural beamformer microphone modes. Relative to the conventional directional microphones, both binaural beamformer modes were generally superior for tasks involving fixed frontal targets, but not always for situations involving dynamic target locations. Binaural beamformers show promise for enhancing listening in complex situations when the location of the source of interest is predictable.
Best, Virginia; Mejia, Jorge; Freeston, Katrina; van Hoesel, Richard J.; Dillon, Harvey
2016-01-01
Objective Binaural beamformers are super-directional hearing aids created by combining microphone outputs from each side of the head. While they offer substantial improvements in SNR over conventional directional hearing aids, the benefits (and possible limitations) of these devices in realistic, complex listening situations have not yet been fully explored. In this study we evaluated the performance of two experimental binaural beamformers. Design Testing was carried out using a horizontal loudspeaker array. Background noise was created using recorded conversations. Performance measures included speech intelligibility, localisation in noise, acceptable noise level, subjective ratings, and a novel dynamic speech intelligibility measure. Study sample Participants were 27 listeners with bilateral hearing loss, fitted with BTE prototypes that could be switched between conventional directional or binaural beamformer microphone modes. Results Relative to the conventional directional microphones, both binaural beamformer modes were generally superior for tasks involving fixed frontal targets, but not always for situations involving dynamic target locations. Conclusions Binaural beamformers show promise for enhancing listening in complex situations when the location of the source of interest is predictable. PMID:26140298
NASA Astrophysics Data System (ADS)
Sujono, A.; Santoso, B.; Juwana, W. E.
2016-03-01
Detonation (knock) in Otto (petrol) engines remains an unresolved problem, especially when attempting to improve engine performance. This research processed the engine's sound and vibration signal, acquired with a microphone sensor, for the detection and identification of detonation. Because the microphone does not have to be attached to the hot cylinder block, its performance is more stable and durable, and the sensor is inexpensive. The analysis, however, is not straightforward because of the large amount of noise (interference). Therefore, a new pattern-recognition method was used, based on filtering and regression of the normalized envelope. The results are good, achieving a success rate of about 95%.
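The filtering and normalized-envelope step is not specified in detail in the abstract; a plausible Python sketch, assuming a band-pass filter around a hypothetical knock resonance followed by a Hilbert envelope, is given below.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def knock_envelope(mic_signal, fs, band=(5000.0, 9000.0)):
    # Isolate a hypothetical knock resonance band and extract its normalized envelope.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, mic_signal)
    envelope = np.abs(hilbert(filtered))       # instantaneous amplitude envelope
    return envelope / (np.max(envelope) + 1e-12)

def is_knock(envelope, threshold=0.6, min_fraction=0.02):
    # Flag a cycle as knocking if the envelope spends enough time above the threshold.
    return np.mean(envelope > threshold) > min_fraction

The band edges and decision thresholds here are placeholders; in practice they would be identified experimentally for a given engine.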
An active drop counting device using condenser microphone for superheated emulsion detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Mala; Marick, C.; Kanjilal, D.
2008-11-15
An active device for superheated emulsion detector is described. A capacitive diaphragm sensor or condenser microphone is used to convert the acoustic pulse of drop nucleation to electrical signal. An active peak detector is included in the circuit to avoid multiple triggering of the counter. The counts are finally recorded by a microprocessor based data acquisition system. Genuine triggers, missed by the sensor, were studied using a simulated clock pulse. The neutron energy spectrum of ²⁵²Cf fission neutron source was measured using the device with R114 as the sensitive liquid and compared with the calculated fission neutron energy spectrum of ²⁵²Cf. Frequency analysis of the detected signals was also carried out.
An active drop counting device using condenser microphone for superheated emulsion detector
NASA Astrophysics Data System (ADS)
Das, Mala; Arya, A. S.; Marick, C.; Kanjilal, D.; Saha, S.
2008-11-01
An active device for superheated emulsion detector is described. A capacitive diaphragm sensor or condenser microphone is used to convert the acoustic pulse of drop nucleation to electrical signal. An active peak detector is included in the circuit to avoid multiple triggering of the counter. The counts are finally recorded by a microprocessor based data acquisition system. Genuine triggers, missed by the sensor, were studied using a simulated clock pulse. The neutron energy spectrum of ²⁵²Cf fission neutron source was measured using the device with R114 as the sensitive liquid and compared with the calculated fission neutron energy spectrum of ²⁵²Cf. Frequency analysis of the detected signals was also carried out.
Factors influencing individual variation in perceptual directional microphone benefit.
Keidser, Gitte; Dillon, Harvey; Convery, Elizabeth; Mejia, Jorge
2013-01-01
Large variations in perceptual directional microphone benefit, which far exceed the variation expected from physical performance measures of directional microphones, have been reported in the literature. The cause for the individual variation has not been systematically investigated. To determine the factors that are responsible for the individual variation in reported perceptual directional benefit. A correlational study. Physical performance measures of the directional microphones obtained after they had been fitted to individuals, cognitive abilities of individuals, and measurement errors were related to perceptual directional benefit scores. Fifty-nine hearing-impaired adults with varied degrees of hearing loss participated in the study. All participants were bilaterally fitted with a Motion behind-the-ear device (500 M, 501 SX, or 501 P) from Siemens according to the National Acoustic Laboratories' non-linear prescription, version two (NAL-NL2). Using the Bamford-Kowal-Bench (BKB) sentences, the perceptual directional benefit was obtained as the difference in speech reception threshold measured in babble noise (SRTn) with the devices in directional (fixed hypercardioid) and in omnidirectional mode. The SRTn measurements were repeated three times with each microphone mode. Physical performance measures of the directional microphone included the angle of the microphone ports to loudspeaker axis, the frequency range dominated by amplified sound, the in situ signal-to-noise ratio (SNR), and the in situ three-dimensional, articulation-index weighted directivity index (3D AI-DI). The cognitive tests included auditory selective attention, speed of processing, and working memory. Intraparticipant variation on the repeated SRTn's and the interparticipant variation on the average SRTn were used to determine the effect of measurement error. A multiple regression analysis was used to determine the effect of other factors. Measurement errors explained 52% of the variation in perceptual directional microphone benefit (95% confidence interval [CI]: 34-78%), while another 37% of variation was explained primarily by the physical performance of the directional microphones after they were fitted to individuals. The most contributing factor was the in situ 3D AI-DI measured across the low frequencies. Repeated SRTn measurements are needed to obtain a reliable indication of the perceptual directional benefit in an individual. Further, to obtain optimum benefit from directional microphones, the effectiveness of the microphones should be maximized across the low frequencies. American Academy of Audiology.
2000-11-27
After arriving at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Pilot Michael Bloomfield. Behind him can be seen Mission Specialists Joseph Tanner and Carlos Noriega. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST
NASA Astrophysics Data System (ADS)
Gallin, Louis-Jonardan; Farges, Thomas; Marchiano, Régis; Coulouvrat, François; Defer, Eric; Rison, William; Schulz, Wolfgang; Nuret, Mathieu
2016-04-01
In the framework of the European Hydrological Cycle in the Mediterranean Experiment project, a field campaign devoted to the study of electrical activity during storms took place in the south of France in 2012. An acoustic station composed of four microphones and four microbarometers was deployed within the coverage of a Lightning Mapping Array network. On 26 October 2012, a thunderstorm passed just over the acoustic station. Fifty-six natural thunder events, due to cloud-to-ground and intracloud flashes, were recorded. This paper studies the acoustic reconstruction, in the low frequency range from 1 to 40 Hz, of the recorded flashes and compares it with detections from electromagnetic networks. Concurrent detections from the European Cooperation for Lightning Detection lightning location system were also used. Case studies show clearly that the acoustic signal from thunder comes not only from the return stroke but also from the horizontal discharges that occur inside the clouds. The large amount of observational data enables a statistical analysis of the lightning discharges recorded acoustically. In particular, the distributions of altitudes of reconstructed acoustic detections are explored in detail. The impact of the distance to the source on these distributions is established. The capacity of the acoustic method to describe precisely the lower part of nearby cloud-to-ground discharges, where the Lightning Mapping Array network is not effective, is also highlighted.
Preliminary Analysis of Acoustic Measurements from the NASA-Gulfstream Airframe Noise Flight Test
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.; Lockhard, David D.; Humphreys, William M.; Choudhari, Meelan M.; Van De Ven, Thomas
2008-01-01
The NASA-Gulfstream joint Airframe Noise Flight Test program was conducted at the NASA Wallops Flight Facility during October 2006. The primary objective of the AFN flight test was to acquire baseline airframe noise data on a regional jet class of transport in order to determine noise source strengths and distributions for model validation. To accomplish this task, two measuring systems were used: a ground-based microphone array and individual microphones. Acoustic data for a Gulfstream G550 aircraft were acquired over the course of ten days. Over twenty-four test conditions were flown. The test matrix was designed to provide an acoustic characterization of both the full aircraft and individual airframe components and included cruise-to-landing configurations. Noise sources were isolated by selectively deploying individual components (flaps, main landing gear, nose gear, spoilers, etc.) and altering the airspeed, glide path, and engine settings. The AFN flight test program confirmed that the airframe is a major contributor to the noise from regional jets during landing operations. Sound pressure levels from the individual microphones on the ground revealed the flap system to be the dominant airframe noise source for the G550 aircraft. The corresponding array beamform maps showed that most of the radiated sound from the flaps originates from the side edges. Scaling the sound pressure spectra obtained at different speeds with the sixth power of velocity, together with Strouhal frequency scaling, failed to collapse the data onto a single spectrum. The best data collapse was obtained when the frequencies were left unscaled.
Noise Spectra and Directivity For a Scale-Model Landing Gear
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Brooks, Thomas F.
2007-01-01
An extensive experimental study has been conducted to acquire detailed noise spectra and directivity data for a high-fidelity, 6.3%-scale, Boeing 777 main landing gear. The measurements were conducted in the NASA Langley Quiet Flow Facility using a 41-microphone directional array system positioned at a range of polar and azimuthal observer angles with respect to the model. DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) array processing as well as straightforward individual microphone processing were employed to compile unique flyover and sideline directivity databases for a range of freestream Mach numbers (0.11 - 0.17) covering typical approach conditions. Comprehensive corrections were applied to the test data to account for shear layer ray path and amplitude variations. This allowed proper beamforming at different measurement orientations, as well as directivity presentation in free-field emission coordinates. Four different configurations of the landing gear were tested: a baseline configuration with and without an attached side door, and a noise reduction concept "toboggan" truck fairing with and without side door. DAMAS noise source distributions were determined. Spectral analyses demonstrated that individual microphones could establish model spectra. This finding permitted the determination of unique, spatially-detailed directivity contours of spectral band levels over a hemispherical surface. Spectral scaling for the baseline model confirmed that the acoustic intensity scaled with the expected sixth-power of the Mach number. Finally, comparison of spectra and directivity between the baseline gear and the gear with an attached toboggan indicated that the toboggan fairing may be of some value in reducing gear noise over particular frequency ranges.
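The sixth-power (Mach-number) and Strouhal scaling check mentioned in both landing-gear studies above can be sketched as follows; the variable names and reference conditions are illustrative assumptions.

import numpy as np

def scale_spectrum(freq_hz, spl_db, mach, length_m, mach_ref, c=343.0):
    # Shift levels by the sixth power of velocity and map frequency to a Strouhal number.
    spl_scaled = spl_db + 60.0 * np.log10(mach_ref / mach)   # 10*log10((U_ref/U)**6)
    strouhal = freq_hz * length_m / (mach * c)               # St = f * L / U
    return strouhal, spl_scaled

If the noise is dipole-like, spectra measured at different tunnel speeds should collapse onto a single curve when plotted this way.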
Optimization of Acoustic Pressure Measurements for Impedance Eduction
NASA Technical Reports Server (NTRS)
Jones, M. G.; Watson, W. R.; Nark, D. M.
2007-01-01
As noise constraints become increasingly stringent, there is continued emphasis on the development of improved acoustic liner concepts to reduce the amount of fan noise radiated to communities surrounding airports. As a result, multiple analytical prediction tools and experimental rigs have been developed by industry and academia to support liner evaluation. NASA Langley has also invested considerable effort in this area over the last three decades. More recently, a finite element code (Q3D) based on a quasi-3D implementation of the convected Helmholtz equation has been combined with measured data acquired in the Langley Grazing Incidence Tube (GIT) to educe liner impedance in the presence of grazing flow. A new Curved Duct Test Rig (CDTR) has also been developed to allow evaluation of liners in the presence of grazing flow and controlled, higher-order modes, with straight and curved waveguides. Upgraded versions of each of these two test rigs are expected to begin operation by early 2008. The Grazing Flow Impedance Tube (GFIT) will replace the GIT, and additional capabilities will be incorporated into the CDTR. The current investigation uses the Q3D finite element code to evaluate some of the key capabilities of these two test rigs. First, the Q3D code is used to evaluate the microphone distribution designed for the GFIT. Liners ranging in length from 51 to 610 mm are investigated to determine whether acceptable impedance eduction can be achieved with microphones placed on the wall opposite the liner. This analysis indicates the best results are achieved for liner lengths of at least 203 mm. Next, the effects of moving this GFIT microphone array to the wall adjacent to the liner are evaluated, and acceptable results are achieved if the microphones are placed off the centerline. Finally, the code is used to investigate potential microphone placements in the CDTR rigid wall adjacent to the wall containing an acoustic liner, to determine if sufficient fidelity can be achieved with the 32 microphones available for this purpose. Initial results indicate 32 microphones can provide acceptable measurements to support impedance eduction with this test rig.
NASA Technical Reports Server (NTRS)
Klos, Jacob; Palumbo, Daniel L.; Buehrle, Ralph D.; Williams, Earl G.; Valdivia, Nicolas; Herdic, Peter C.; Sklanka, Bernard
2005-01-01
A series of tests was planned and conducted in the Interior Noise Test Facility at Boeing Field, on the NASA Aries 757 flight research aircraft, and in the Structural Acoustic Loads and Transmission Facility at NASA Langley Research Center. These tests were designed to answer several questions concerning the use of array methods in flight. One focus of the tests was determining whether and to what extent array methods could be used to identify the effects of an acoustical treatment applied to a limited portion of an aircraft fuselage. Another focus of the tests was to verify that the arrays could be used to localize and quantify a known source purposely placed in front of the arrays. Thus the issues related to backside sources and flanking paths present in the complicated sound field were addressed during these tests. These issues were addressed through the use of reference transducers, both accelerometers mounted to the fuselage and microphones in the cabin, that were used to correlate the pressure holograms measured by the microphone arrays using either SVD methods or partial coherence methods. This correlation analysis accepts only energy that is coherent with the sources sensed by the reference transducers, allowing a noise control engineer to identify and study only those vibratory sources of interest. The remainder of this paper will present a detailed description of the test setups that were used in this test sequence and typical results of the NAH/IBEM analysis used to reconstruct the sound fields. Also, a comparison of data obtained in the laboratory environments and during flights of the 757 aircraft will be made.
The effect of hearing aid technologies on listening in an automobile
Wu, Yu-Hsiang; Stangl, Elizabeth; Bentler, Ruth A.; Stanziola, Rachel W.
2014-01-01
Background Communication while traveling in an automobile often is very difficult for hearing aid users. This is because the automobile /road noise level is usually high, and listeners/drivers often do not have access to visual cues. Since the talker of interest usually is not located in front of the driver/listener, conventional directional processing that places the directivity beam toward the listener’s front may not be helpful, and in fact, could have a negative impact on speech recognition (when compared to omnidirectional processing). Recently, technologies have become available in commercial hearing aids that are designed to improve speech recognition and/or listening effort in noisy conditions where talkers are located behind or beside the listener. These technologies include (1) a directional microphone system that uses a backward-facing directivity pattern (Back-DIR processing), (2) a technology that transmits audio signals from the ear with the better signal-to-noise ratio (SNR) to the ear with the poorer SNR (Side-Transmission processing), and (3) a signal processing scheme that suppresses the noise at the ear with the poorer SNR (Side-Suppression processing). Purpose The purpose of the current study was to determine the effect of (1) conventional directional microphones and (2) newer signal processing schemes (Back-DIR, Side-Transmission, and Side-Suppression) on listener’s speech recognition performance and preference for communication in a traveling automobile. Research design A single-blinded, repeated-measures design was used. Study Sample Twenty-five adults with bilateral symmetrical sensorineural hearing loss aged 44 through 84 years participated in the study. Data Collection and Analysis The automobile/road noise and sentences of the Connected Speech Test (CST) were recorded through hearing aids in a standard van moving at a speed of 70 miles/hour on a paved highway. The hearing aids were programmed to omnidirectional microphone, conventional adaptive directional microphone, and the three newer schemes. CST sentences were presented from the side and back of the hearing aids, which were placed on the ears of a manikin. The recorded stimuli were presented to listeners via earphones in a sound treated booth to assess speech recognition performance and preference with each programmed condition. Results Compared to omnidirectional microphones, conventional adaptive directional processing had a detrimental effect on speech recognition when speech was presented from the back or side of the listener. Back-DIR and Side-Transmission processing improved speech recognition performance (relative to both omnidirectional and adaptive directional processing) when speech was from the back and side, respectively. The performance with Side-Suppression processing was better than with adaptive directional processing when speech was from the side. The participants’ preferences for a given processing scheme were generally consistent with speech recognition results. Conclusions The finding that performance with adaptive directional processing was poorer than with omnidirectional microphones demonstrates the importance of selecting the correct microphone technology for different listening situations. The results also suggest the feasibility of using hearing aid technologies to provide a better listening experience for hearing aid users in automobiles. PMID:23886425
Acoustic Surveys of a Scaled-Model CESTOL Transport Aircraft in Static and Forward Speed Conditions
NASA Technical Reports Server (NTRS)
Burnside, Nathan; Horne, Clifton
2012-01-01
A test of an 11%-scale model of a Cruise-Efficient Short Take-off and Landing (CESTOL) aircraft was recently completed. The test was conducted in the AEDC National Full-Scale Aerodynamic Complex (NFAC) 40- by 80-Foot Wind Tunnel at NASA Ames Research Center. The model included two over-wing pod-mounted turbine propulsion simulators (TPS). The hybrid blended wing-body used a circulation control wing (CCW) with leading- and trailing-edge blowing. The bulk of the test matrix included three forward velocities (40 kts, 60 kts, and 100 kts), angle-of-attack variation between -5° and 25°, and CCW mass flow variation. Seven strut-mounted microphones outboard of the left wing provided source directivity. A phased microphone array was mounted outboard of the right wing for source location. The goal of this paper is to provide a preliminary look at the acoustic data acquired during the Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA) test for 0° angle-of-attack and 0° sideslip conditions. The data presented provide a good overview of the test conditions and the signal-to-noise quality of the data. TPS height variation showed a difference of 2 dB to 3 dB due to wing shielding. Variation of slot mass flow showed increases of 12 dB to 26 dB above the airframe noise, and the TPS increased the overall levels an additional 5 dB to 10 dB.
A low-cost acoustic permeameter
NASA Astrophysics Data System (ADS)
Drake, Stephen A.; Selker, John S.; Higgins, Chad W.
2017-04-01
Intrinsic permeability is an important parameter that regulates air exchange through porous media such as snow. Standard methods of measuring snow permeability are inconvenient to perform outdoors, are fraught with sampling errors, and require specialized equipment, while bringing intact samples back to the laboratory is also challenging. To address these issues, we designed, built, and tested a low-cost acoustic permeameter that allows computation of volume-averaged intrinsic permeability for a homogenous medium. In this paper, we validate acoustically derived permeability of homogenous, reticulated foam samples by comparison with results derived using a standard flow-through permeameter. Acoustic permeameter elements were designed for use in snow, but the measurement methods are not snow-specific. The electronic components - consisting of a signal generator, amplifier, speaker, microphone, and oscilloscope - are inexpensive and easily obtainable. The system is suitable for outdoor use when it is not precipitating, but the electrical components require protection from the elements in inclement weather. The permeameter can be operated with a microphone either internally mounted or buried a known depth in the medium. The calibration method depends on choice of microphone positioning. For an externally located microphone, calibration was based on a low-frequency approximation applied at 500 Hz that provided an estimate of both intrinsic permeability and tortuosity. The low-frequency approximation that we used is valid up to 2 kHz, but we chose 500 Hz because data reproducibility was maximized at this frequency. For an internally mounted microphone, calibration was based on attenuation at 50 Hz and returned only intrinsic permeability. We found that 50 Hz corresponded to a wavelength that minimized resonance frequencies in the acoustic tube and was also within the response limitations of the microphone. We used reticulated foam of known permeability (ranging from 2 × 10⁻⁷ to 3 × 10⁻⁹ m²) and estimated tortuosity of 1.05 to validate both methods. For the externally mounted microphone the mean normalized standard deviation was 6 % for permeability and 2 % for tortuosity. The mean relative error from known measurements was 17 % for permeability and 2 % for tortuosity. For the internally mounted microphone the mean normalized standard deviation for permeability was 10 % and the relative error was also 10 %. Permeability determination for an externally mounted microphone is less sensitive to environmental noise than is the internally mounted microphone and is therefore the recommended method. The approximation using the internally mounted microphone was developed as an alternative for circumstances in which placing the microphone in the medium was not feasible. Environmental noise degrades precision of both methods and is recognizable as increased scatter for replicate data points.
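For context, the standard flow-through permeameter used above to validate the acoustic method obtains intrinsic permeability from Darcy's law; a minimal sketch, assuming steady laminar airflow through a cylindrical sample, is shown below (the numbers in the example are hypothetical).

def darcy_permeability(flow_rate_m3s, dyn_viscosity_pas, length_m, area_m2, delta_p_pa):
    # Intrinsic permeability k = Q * mu * L / (A * dP) for steady flow through the sample.
    return flow_rate_m3s * dyn_viscosity_pas * length_m / (area_m2 * delta_p_pa)

# Hypothetical example: 10 L/min of air (mu ~ 1.8e-5 Pa s) through a 5 cm long,
# 10 cm^2 cross-section sample with a 50 Pa pressure drop.
k = darcy_permeability(10e-3 / 60.0, 1.8e-5, 0.05, 10e-4, 50.0)
print(f"k = {k:.1e} m^2")   # about 3e-9 m^2, the low end of the foam permeabilities quoted above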
The benefits of remote microphone technology for adults with cochlear implants.
Fitzpatrick, Elizabeth M; Séguin, Christiane; Schramm, David R; Armstrong, Shelly; Chénier, Josée
2009-10-01
Cochlear implantation has become a standard practice for adults with severe to profound hearing loss who demonstrate limited benefit from hearing aids. Despite the substantial auditory benefits provided by cochlear implants, many adults experience difficulty understanding speech in noisy environments and in other challenging listening conditions such as television. Remote microphone technology may provide some benefit in these situations; however, little is known about whether these systems are effective in improving speech understanding in difficult acoustic environments for this population. This study was undertaken with adult cochlear implant recipients to assess the potential benefits of remote microphone technology. The objectives were to examine the measurable and perceived benefit of remote microphone devices during television viewing and to assess the benefits of a frequency-modulated system for speech understanding in noise. Fifteen adult unilateral cochlear implant users were fit with remote microphone devices in a clinical environment. The study used a combination of direct measurements and patient perceptions to assess speech understanding with and without remote microphone technology. The direct measures involved a within-subject repeated-measures design. Direct measures of patients' speech understanding during television viewing were collected using their cochlear implant alone and with their implant device coupled to an assistive listening device. Questionnaires were administered to document patients' perceptions of benefits during the television-listening tasks. Speech recognition tests of open-set sentences in noise with and without remote microphone technology were also administered. Participants showed improved speech understanding for television listening when using remote microphone devices coupled to their cochlear implant compared with a cochlear implant alone. This benefit was documented both when listening to news and talk show recordings. Questionnaire results also showed statistically significant differences between listening with a cochlear implant alone and listening with a remote microphone device. Participants judged that remote microphone technology provided them with better comprehension, more confidence, and greater ease of listening. Use of a frequency-modulated system coupled to a cochlear implant also showed significant improvement over a cochlear implant alone for open-set sentence recognition in +10 and +5 dB signal to noise ratios. Benefits were measured during remote microphone use in focused-listening situations in a clinical setting, for both television viewing and speech understanding in noise in the audiometric sound suite. The results suggest that adult cochlear implant users should be counseled regarding the potential for enhanced speech understanding in difficult listening environments through the use of remote microphone technology.
An experimental investigation of flow-induced oscillations of the Bruel and Kjaer in-flow microphone
NASA Technical Reports Server (NTRS)
Fields, Richard S., Jr.
1995-01-01
One source contributing to wind tunnel background noise is microphone self-noise. An experiment was conducted to investigate the flow-induced acoustic oscillations of Bruel & Kjaer (B&K) in-flow microphones. The results strongly suggest that the B&K microphone cavity behaves more like an open cavity. Its cavity acoustic oscillations are likely caused by strong interactions between the cavity shear layer and the cavity trailing edge. But the results also suggest that cavity shear layer oscillations could be coupled with cavity acoustic resonance to generate tones. Detailed flow velocity measurements over the cavity screen have shown inflection points in the mean velocity profiles and high disturbance and spectral intensities in the vicinity of the cavity trailing edge. These results are evidence of strong interactions between cavity shear layer oscillations and the cavity trailing edge. They also suggest that, besides acoustic signals, the microphone inside the cavity has likely recorded hydrodynamic pressure oscillations as well. The results also suggest that the forebody shape does not have a direct effect on cavity oscillations. For the FITE (Flow Induced Tone Eliminator) microphone, it is probably the forebody length and the resulting boundary layer turbulence that have made it work. Turbulence might have thickened the boundary layer at the separation point, weakened the shear layer vortices, or lifted them to miss impinging on the cavity trailing edge. In addition, the study shows that the cavity screen can modulate the oscillation frequency but not the cavity acoustic oscillation mechanisms.
Single and Multiple Microphone Noise Reduction Strategies in Cochlear Implants
Azimi, Behnam; Hu, Yi; Friedland, David R.
2012-01-01
To restore hearing sensation, cochlear implants deliver electrical pulses to the auditory nerve by relying on sophisticated signal processing algorithms that convert acoustic inputs to electrical stimuli. Although individuals fitted with cochlear implants perform well in quiet, in the presence of background noise, the speech intelligibility of cochlear implant listeners is more susceptible to background noise than that of normal hearing listeners. Traditionally, to increase performance in noise, single-microphone noise reduction strategies have been used. More recently, a number of approaches have suggested that speech intelligibility in noise can be improved further by making use of two or more microphones, instead. Processing strategies based on multiple microphones can better exploit the spatial diversity of speech and noise because such strategies rely mostly on spatial information about the relative position of competing sound sources. In this article, we identify and elucidate the most significant theoretical aspects that underpin single- and multi-microphone noise reduction strategies for cochlear implants. More analytically, we focus on strategies of both types that have been shown to be promising for use in current-generation implant devices. We present data from past and more recent studies, and furthermore we outline the direction that future research in the area of noise reduction for cochlear implants could follow. PMID:22923425
Flaperon Modification Effect on Jet-Flap Interaction Noise Reduction for Chevron Nozzles
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Mengle, Vinod G.; Stoker, Robert W.; Brusniak, Leon; Elkoby, Ronen
2007-01-01
Jet-flap interaction (JFI) noise can become an important component of far field noise when a flap is immersed in the engine propulsive stream or is in its entrained region, as in approach conditions for under-the-wing engine configurations. We experimentally study the effect of modifying the flaperon, which is a high-speed aileron between the inboard and outboard flaps, at both approach and take-off conditions using scaled models in a free jet. The flaperon modifications were of two types: a sawtooth trailing edge and mini vortex generators (VGs). Parametric variations of these two concepts were tested with a round coaxial nozzle and an advanced chevron nozzle, with azimuthally varying fan chevrons, using both far field microphone arrays and phased microphone arrays for source diagnostics purposes. In general, the phased array results corroborated the far field results in the upstream quadrant, pointing to JFI near the flaperon trailing edge as the origin of the far field noise changes. Specific sawtooth trailing edges in conjunction with the round nozzle gave marginal reduction in JFI noise at approach, and parallel co-rotating mini-VGs were somewhat more beneficial over a wider range of angles, but both concepts were noisier at take-off conditions. These two concepts generally had an adverse JFI effect when used in conjunction with the advanced chevron nozzle at both approach and take-off conditions.
Blood pressure measurement and display system
NASA Technical Reports Server (NTRS)
Farkas, A. J.
1972-01-01
System is described that employs solid state circuitry to transmit visual display of patient's blood pressure. Response of sphygmomanometer cuff and microphone provide input signals. Signals and their amplitudes, from turn-on time to turn-off time, are continuously fed to data transmitter which transmits to display device.
Razza, Sergio; Zaccone, Monica; Meli, Annalisa; Cristofari, Eliana
2017-12-01
Children affected by hearing loss can experience difficulties in challenging and noisy environments even when deafness is corrected by cochlear implant (CI) devices. These patients have a selective attention deficit in multiple listening conditions. At present, the most effective ways to improve speech recognition performance in noise consist of providing CI processors with noise reduction algorithms and of providing patients with bilateral CIs. The aim of this study was to compare speech performance in noise, across increasing noise levels, in CI recipients using two kinds of wireless remote-microphone radio systems that use digital radio frequency transmission: the Roger Inspiro accessory and the Cochlear Wireless Mini Microphone accessory. Eleven young users of the Nucleus Cochlear CP910 CI were studied. The signal-to-noise ratio at a speech reception threshold (SRT) value of 50% was measured in different conditions for each patient: with the CI only, with the Roger accessory, or with the Mini Mic accessory. The effect of applying the SNR noise reduction algorithm in each of these conditions was also assessed. The tests were performed with the subject positioned in front of the main speaker, at a distance of 2.5 m. Another two speakers were positioned at 3.50 m. The main speaker presented disyllabic words at 65 dB. A babble noise signal was delivered through the other speakers with variable intensity. The use of both wireless remote microphones improved the SRT results. Both systems improved speech performance; the gain was greater with the Mini Mic system (SRT = -4.76) than with the Roger system (SRT = -3.01). The addition of the NR algorithm did not further improve the results to a statistically significant degree. There is a significant improvement in speech recognition results with both wireless digital remote microphone accessories, in particular with the Mini Mic system when used with the CP910 processor. The benefit of using a remote microphone accessory surpasses that of applying the NR algorithm. Copyright © 2017. Published by Elsevier B.V.
Digital Signal Processing in Acoustics--Part 2.
ERIC Educational Resources Information Center
Davies, H.; McNeill, D. J.
1986-01-01
Reviews the potential of a data acquisition system for illustrating the nature and significance of ideas in digital signal processing. Focuses on the fast Fourier transform and the utility of its two-channel format, emphasizing cross-correlation and its two-microphone technique of acoustic intensity measurement. Includes programing format. (ML)
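The two-microphone acoustic intensity technique referenced above is commonly implemented from the imaginary part of the cross-spectrum between the two pressure signals; a minimal Python sketch under that assumption follows (microphone spacing, FFT length, and air density are illustrative).

import numpy as np
from scipy.signal import csd

def two_mic_intensity(p1, p2, fs, spacing_m, rho=1.21):
    # Active intensity spectrum from the p-p approximation:
    # I(f) ~ -Im{G12(f)} / (rho * 2*pi*f * d); the sign depends on which
    # microphone is treated as the front one.
    f, g12 = csd(p1, p2, fs=fs, nperseg=4096)
    f = f[1:]                                   # drop the DC bin to avoid dividing by zero
    intensity = -np.imag(g12[1:]) / (rho * 2.0 * np.pi * f * spacing_m)
    return f, intensity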
Monitoring volcanic activity using correlation patterns between infrasound and ground motion
NASA Astrophysics Data System (ADS)
Ichihara, M.; Takeo, M.; Yokoo, A.; Oikawa, J.; Ohminato, T.
2012-02-01
This paper presents a simple method to distinguish infrasonic signals from wind noise using a cross-correlation function of signals from a microphone and a collocated seismometer. The method makes use of a particular feature of the cross-correlation function of vertical ground motion generated by infrasound, and the infrasound itself. Contribution of wind noise to the correlation function is effectively suppressed by separating the microphone and the seismometer by several meters because the correlation length of wind noise is much shorter than wavelengths of infrasound. The method is applied to data from two recent eruptions of Asama and Shinmoe-dake volcanoes, Japan, and demonstrates that the method effectively detects not only the main eruptions, but also minor activity generating weak infrasound hardly visible in the wave traces. In addition, the correlation function gives more information about volcanic activity than infrasound alone, because it reflects both features of incident infrasonic and seismic waves. Therefore, a graphical presentation of temporal variation in the cross-correlation function enables one to see qualitative changes of eruptive activity at a glance. This method is particularly useful when available sensors are limited, and will extend the utility of a single microphone and seismometer in monitoring volcanic activity.
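A minimal Python sketch of the microphone-seismometer cross-correlation described above is given below; the normalization and lag range are illustrative choices, not the authors' exact implementation.

import numpy as np
from scipy.signal import correlate, correlation_lags

def infrasound_seismic_correlation(mic, seis_z, fs, max_lag_s=2.0):
    # Normalized cross-correlation between a microphone and a collocated vertical seismometer.
    mic = (mic - mic.mean()) / (mic.std() + 1e-12)
    seis_z = (seis_z - seis_z.mean()) / (seis_z.std() + 1e-12)
    corr = correlate(seis_z, mic, mode="full") / len(mic)
    lags = correlation_lags(len(seis_z), len(mic), mode="full") / fs
    keep = np.abs(lags) <= max_lag_s
    return lags[keep], corr[keep]

A persistent peak near zero lag indicates infrasound coupling into ground motion, while wind noise, with its short correlation length, averages out when the sensors are separated by several metres.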
Adaptive Suppression of Noise in Voice Communications
NASA Technical Reports Server (NTRS)
Kozel, David; DeVault, James A.; Birr, Richard B.
2003-01-01
A subsystem for the adaptive suppression of noise in a voice communication system effects a high level of reduction of noise that enters the system through microphones. The subsystem includes a digital signal processor (DSP) plus circuitry that implements voice-recognition and spectral- manipulation techniques. The development of the adaptive noise-suppression subsystem was prompted by the following considerations: During processing of the space shuttle at Kennedy Space Center, voice communications among test team members have been significantly impaired in several instances because some test participants have had to communicate from locations with high ambient noise levels. Ear protection for the personnel involved is commercially available and is used in such situations. However, commercially available noise-canceling microphones do not provide sufficient reduction of noise that enters through microphones and thus becomes transmitted on outbound communication links.
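The NASA subsystem combines a DSP with voice-recognition and spectral-manipulation techniques whose details are not given here; for comparison, the classical time-domain approach to the same problem is an adaptive canceller driven by a separate noise-reference microphone, sketched below with a normalized LMS update (filter length and step size are illustrative assumptions).

import numpy as np

def nlms_noise_canceller(primary, reference, n_taps=64, mu=0.1, eps=1e-8):
    # primary:   voice microphone signal (speech plus correlated noise)
    # reference: noise-reference microphone signal (noise only)
    w = np.zeros(n_taps)
    cleaned = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]     # most recent reference samples
        noise_est = w @ x                     # adaptive estimate of the noise in the primary channel
        e = primary[n] - noise_est            # error sample, i.e. the cleaned speech estimate
        w += mu * e * x / (x @ x + eps)       # normalized LMS weight update
        cleaned[n] = e
    return cleaned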
Graphene electrostatic microphone and ultrasonic radio
Zhou, Qin; Zheng, Jinglin; Onishi, Seita; Crommie, M. F.; Zettl, Alex K.
2015-01-01
We present a graphene-based wideband microphone and a related ultrasonic radio that can be used for wireless communication. It is shown that graphene-based acoustic transmitters and receivers have a wide bandwidth, from the audible region (20 Hz to 20 kHz) to the ultrasonic region (20 kHz to at least 0.5 MHz). Using the graphene-based components, we demonstrate efficient high-fidelity information transmission using an ultrasonic band centered at 0.3 MHz. The graphene-based microphone is also shown to be capable of directly receiving ultrasound signals generated by bats in the field, and the ultrasonic radio, coupled to electromagnetic (EM) radio, is shown to function as a high-accuracy rangefinder. The ultrasonic radio could serve as a useful addition to wireless communication technology where the propagation of EM waves is difficult. PMID:26150483
Efficient source separation algorithms for acoustic fall detection using a microsoft kinect.
Li, Yun; Ho, K C; Popescu, Mihail
2014-03-01
Falls have become a common health problem among older adults. In a previous study, we proposed an acoustic fall detection system (acoustic FADE) that employed a microphone array and beamforming to provide automatic fall detection. However, the previous acoustic FADE had difficulty detecting the fall signal in environments where interference comes from the fall direction, where the number of interferences exceeds FADE's ability to handle them, or where a fall is occluded. To address these issues, in this paper, we propose two blind source separation (BSS) methods for extracting the fall signal out of the interferences to improve the fall classification task. We first propose the single-channel BSS by using nonnegative matrix factorization (NMF) to automatically decompose the mixture into a linear combination of several basis components. Based on the distinct patterns of the bases of falls, we identify them efficiently and then construct the interference-free fall signal. Next, we extend the single-channel BSS to the multichannel case through a joint NMF over all channels followed by a delay-and-sum beamformer for additional ambient noise reduction. In our experiments, we used the Microsoft Kinect to collect the acoustic data in real-home environments. The results show that in environments with high interference and background noise levels, the fall detection performance is significantly improved using the proposed BSS approaches.
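A minimal sketch of the single-channel NMF decomposition step follows, using scikit-learn; the STFT parameters, number of components, and the assumption that fall-like basis indices are already known are illustrative simplifications of the method described above.

import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

def separate_fall(mixture, fs, fall_basis_idx, n_components=8):
    # Decompose the magnitude spectrogram with NMF and rebuild only the selected components.
    f, t, Z = stft(mixture, fs=fs, nperseg=1024)
    mag, phase = np.abs(Z), np.angle(Z)

    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    W = model.fit_transform(mag)        # (frequency bins x components) spectral bases
    H = model.components_               # (components x time frames) activations

    fall_mag = W[:, fall_basis_idx] @ H[fall_basis_idx, :]
    _, fall_signal = istft(fall_mag * np.exp(1j * phase), fs=fs, nperseg=1024)
    return fall_signal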
Brinkløv, Signe; Elemans, Coen P. H.
2017-01-01
Oilbirds are active at night, foraging for fruits using keen olfaction and extremely light-sensitive eyes, and echolocate as they leave and return to their cavernous roosts. We recorded the echolocation behaviour of wild oilbirds using a multi-microphone array as they entered and exited their roosts under different natural light conditions. During echolocation, the birds produced click bursts (CBs) lasting less than 10 ms and consisting of a variable number (2–8) of clicks at 2–3 ms intervals. The CBs have a bandwidth of 7–23 kHz at −6 dB from signal peak frequency. We report on two unique characteristics of this avian echolocation system. First, oilbirds reduce both the energy and number of clicks in their CBs under conditions of clear, moonlit skies, compared with dark, moonless nights. Second, we document a frequency mismatch between the reported best frequency of oilbird hearing (approx. 2 kHz) and the bandwidth of their echolocation CBs. This unusual signal-to-sensory system mismatch probably reflects avian constraints on high-frequency hearing but may still allow oilbirds fine-scale, close-range detail resolution at the upper extreme (approx. 10 kHz) of their presumed hearing range. Alternatively, oilbirds, by an as-yet unknown mechanism, are able to hear frequencies higher than currently appreciated. PMID:28573036
NASA Astrophysics Data System (ADS)
de Vries, Diemer; Hörchens, Lars; Grond, Peter
2007-12-01
The state of the art of wave field synthesis (WFS) systems is that they can reproduce sound sources and secondary (mirror image) sources with natural spaciousness in a horizontal plane, and thus perform satisfactory 2D auralization of an enclosed space, based on multitrace impulse response data measured or simulated along a 2D microphone array. However, waves propagating with a nonzero elevation angle are also reproduced in the horizontal plane, which is neither physically nor perceptually correct. In most listening environments to be auralized, the floor is highly absorptive since it is covered with upholstered seats, occupied during performances by a well-dressed audience. A first-order ceiling reflection, reaching the floor directly or via a wall, will be severely damped and will not play a significant role in the room response anymore. This means that a spatially correct WFS reproduction of first-order ceiling reflections, by means of a loudspeaker array at the ceiling of the auralization reproduction room, is necessary and probably sufficient to create the desired 3D spatial perception. To determine the driving signals for the loudspeakers in the ceiling array, it is necessary to identify the relevant ceiling reflection(s) in the multichannel impulse response data and separate those events from the data set. Two methods are examined to identify, separate, and reproduce the relevant reflections: application of the Radon transform, and decomposition of the data into cylindrical harmonics. Application to synthesized and measured data shows that both methods in principle are able to identify, separate, and reproduce the relevant events.
A static acoustic signature system for the analysis of dynamic flight information
NASA Technical Reports Server (NTRS)
Ramer, D. J.
1978-01-01
The Army family of helicopters was analyzed to measure the polar octave band acoustic signature in various modes of flight. A static array of calibrated microphones was used to simultaneously acquire the signature and differential times required to mathematically position the aircraft in space. The signature was then reconstructed, mathematically normalized to a fixed radius around the aircraft.
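The "differential times" step described above amounts to time-difference-of-arrival multilateration; a minimal Python sketch, assuming a homogeneous sound speed and at least four microphones, is shown below (solver and initial guess are illustrative choices).

import numpy as np
from scipy.optimize import least_squares

def locate_source(mic_xyz, tdoa_s, c=343.0, x0=None):
    # mic_xyz: (n_mics, 3) microphone coordinates in metres (n_mics >= 4)
    # tdoa_s:  arrival-time differences relative to microphone 0, in seconds
    def residuals(p):
        ranges = np.linalg.norm(mic_xyz - p, axis=1)
        return (ranges - ranges[0]) - c * tdoa_s

    if x0 is None:
        x0 = mic_xyz.mean(axis=0) + np.array([0.0, 0.0, 50.0])  # rough guess above the array
    return least_squares(residuals, x0).x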
Two-dimensional grid-free compressive beamforming.
Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli
2017-08-01
Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes DOAs of sources to lie on a grid. Its performance degrades due to basis mismatch when the assumption is not satisfied. To overcome this limitation for the measurement with plane microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite programming is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on alternating direction method of multipliers is presented to solve the positive semidefinite programming. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.
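For contrast with the grid-free method above, the conventional grid-based compressive beamforming it improves upon can be sketched with a simple greedy (orthogonal matching pursuit) solver; the dictionary construction and solver below are illustrative assumptions, and their accuracy suffers from exactly the basis mismatch the paper addresses.

import numpy as np

def steering_matrix(mic_xy, grid_deg, freq_hz, c=343.0):
    # Far-field plane-wave dictionary over a grid of candidate DOAs for a planar array.
    theta = np.deg2rad(np.asarray(grid_deg))
    dirs = np.stack([np.cos(theta), np.sin(theta)])   # (2, n_grid)
    delays = mic_xy @ dirs / c                        # (n_mics, n_grid)
    return np.exp(-2j * np.pi * freq_hz * delays)

def omp_doa(pressures, A, n_sources):
    # Greedy (orthogonal matching pursuit) solution of the underdetermined system A x = p.
    residual = pressures.copy()
    support, x = [], None
    for _ in range(n_sources):
        scores = np.abs(A.conj().T @ residual)
        scores[support] = 0.0                         # do not reselect chosen directions
        support.append(int(np.argmax(scores)))
        x, *_ = np.linalg.lstsq(A[:, support], pressures, rcond=None)
        residual = pressures - A[:, support] @ x
    return support, x

If a true DOA falls between grid points, the recovered amplitudes smear over neighbouring atoms, which is the basis-mismatch effect the grid-free formulation eliminates.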
Acoustic imaging of a duct spinning mode by the use of an in-duct circular microphone array.
Wei, Qingkai; Huang, Xun; Peers, Edward
2013-06-01
An imaging method of acoustic spinning modes propagating within a circular duct, using only surface pressure information, is introduced in this paper. The proposed method is developed in a theoretical way and is demonstrated by a numerical simulation case. At present, measurements within a duct have to be conducted using an in-duct microphone array, which is unable to provide complete acoustic solutions across the test section. The proposed method can estimate immeasurable information by forming a so-called observer. The fundamental idea behind the testing method was originally developed in control theory for ordinary differential equations. Spinning mode propagation, however, is formulated in partial differential equations. A finite difference technique is used to reduce the associated partial differential equations to a classical form in control. The observer method can thereafter be applied straightforwardly. The algorithm is recursive and thus could be operated in real time. A numerical simulation for a straight circular duct is conducted. The acoustic solutions on the test section can be reconstructed with good agreement to analytical solutions. The results suggest the potential and applications of the proposed method.
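The observer idea referenced above can be illustrated with a generic discrete-time Luenberger observer; the system matrices A, B, C below are placeholders for the finite-difference discretization of the duct equations, which is not reproduced here, and the pole locations are illustrative.

import numpy as np
from scipy.signal import place_poles

def design_observer_gain(A, C, poles):
    # Observer gain L such that the estimation-error dynamics x[k+1] = (A - L C) x[k]
    # have the requested (real, distinct, stable) discrete-time poles; computed by
    # pole placement on the dual system (A^T, C^T).
    return place_poles(A.T, C.T, poles).gain_matrix.T

def run_observer(A, B, C, L, u_seq, y_seq, x0_hat):
    # Recursive update: propagate the model and correct with the measured wall pressures y[k].
    x_hat = np.asarray(x0_hat, dtype=float)
    estimates = []
    for u, y in zip(u_seq, y_seq):
        x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)
        estimates.append(x_hat.copy())
    return np.array(estimates)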
Acoustic Measurements of a Large Civil Transport Main Landing Gear Model
NASA Technical Reports Server (NTRS)
Ravetta, Patricio A.; Khorrami, Mehdi R.; Burdisso, Ricardo A.; Wisda, David M.
2016-01-01
Microphone phased array acoustic measurements of a 26 percent-scale, Boeing 777-200 main landing gear model with and without noise reduction fairings installed were obtained in the anechoic configuration of the Virginia Tech Stability Tunnel. Data were acquired at Mach numbers of 0.12, 0.15, and 0.17 with the latter speed used as the nominal test condition. The fully and partially dressed gear with the truck angle set at 13 degrees toe-up landing configuration were the two most extensively tested configurations, serving as the baselines for comparison purposes. Acoustic measurements were also acquired for the same two baseline configurations with the truck angle set at 0 degrees. In addition, a previously tested noise reducing, toboggan-shaped fairing was re-evaluated extensively to address some of the lingering questions regarding the extent of acoustic benefit achievable with this device. The integrated spectra generated from the acoustic source maps reconfirm, in general terms, the previously reported noise reduction performance of the toboggan fairing as installed on an isolated gear. With the recent improvements to the Virginia Tech tunnel acoustic quality and microphone array capabilities, the present measurements provide an additional, higher quality database to the acoustic information available for this gear model.
NASA Astrophysics Data System (ADS)
Bicen, Baris
Measuring acoustic pressure gradients is critical in many applications, such as directional microphones for hearing aids and sound intensity probes. This measurement becomes especially challenging as microphone size decreases, since the small spacing between pressure ports reduces the sensitivity. Novel, micromachined biomimetic microphone diaphragms are shown to provide high sensitivity to pressure gradients on one side of the diaphragm with low thermal-mechanical noise. These structures have a dominant mode shape with see-saw-like motion in the audio band, responding to pressure gradients, along with spurious higher-order modes that are sensitive to pressure. In this dissertation, the integration of a diffraction-based optical detection method with these novel diaphragm structures to implement a low-noise optical pressure-gradient microphone is described, and experimental characterization results are presented, showing a 36 dBA noise level with 1 mm port spacing, nearly an order of magnitude better than current gradient microphones. The optical detection scheme also provides electrostatic actuation capability from both sides of the diaphragm separately, which can be used for active force feedback. A 4-port electromechanical equivalent circuit model of this microphone with optical readout is developed to predict the overall response of the device to different acoustic and electrostatic excitations. The model includes the damping due to the complex motion of air around the microphone diaphragm, and it calculates the detected optical signal on each side of the diaphragm as a combination of two separate dominant vibration modes. This equivalent circuit model is verified by experiments and used to predict the microphone response with different force-feedback schemes. Single-sided force feedback is used for active damping to improve the linearity and the frequency response of the microphone. Furthermore, it is shown that two-sided force feedback can significantly suppress or enhance the desired vibration modes of the diaphragm. This approach provides an electronic means to tailor the directional response of the microphone, with significant implications for device performance in various applications. As an example, the use of this device as a particle velocity sensor for sound intensity and sound power measurements is investigated. Without force feedback, the gradient microphone provides accurate particle velocity measurement for frequencies below 2 kHz, above which the pressure response of the second-order mode becomes significant. With two-sided force feedback, calculations show that this upper frequency limit may be increased to 10 kHz. This improves the pressure-residual intensity index by more than 15 dB in the 50 Hz-10 kHz range, matching the Class I requirements of the IEC 1043 standard for intensity probes without any need for multiple spacers.
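The two-mode superposition used in the model can be illustrated with a rough sketch: the detected signal is approximated as the sum of two second-order (mass-spring-damper) modal responses, one for the see-saw gradient mode and one for the higher-order pressure-sensitive mode. The resonance frequencies, quality factors, and sensitivities below are placeholders, not the values identified in the dissertation.

```python
import numpy as np

def mode_response(f, f0, Q, sensitivity):
    """Second-order modal response H(f) = sensitivity * w0^2 / (w0^2 + j*w*w0/Q - w^2)."""
    s = 1j * 2 * np.pi * f
    w0 = 2 * np.pi * f0
    return sensitivity * w0**2 / (w0**2 + s * w0 / Q + s**2)

f = np.logspace(2, 4.5, 500)                                    # 100 Hz to ~30 kHz
seesaw = mode_response(f, f0=1.5e3, Q=2.0, sensitivity=1.0)     # gradient (see-saw) mode
second = mode_response(f, f0=10e3, Q=5.0, sensitivity=0.2)      # pressure-sensitive mode
total = seesaw + second       # detected optical signal modeled as the two-mode sum
```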
The relationship between speech recognition, behavioural listening effort, and subjective ratings.
Picou, Erin M; Ricketts, Todd A
2018-06-01
The purpose of this study was to evaluate the reliability and validity of four subjective questions related to listening effort. A secondary purpose was to evaluate the effects of hearing aid beamforming microphone arrays on word recognition and listening effort. Participants answered subjective questions immediately following testing in a dual-task paradigm with three microphone settings in a moderately reverberant laboratory environment in two noise configurations. Participants rated their (1) mental work, (2) desire to improve the situation, (3) tiredness, and (4) desire to give up. Data were analysed using repeated-measures and reliability analyses. Eighteen adults with symmetrical sensorineural hearing loss participated. Beamforming differentially affected word recognition and listening effort. Analysis revealed the same pattern of results for behavioural listening effort and subjective ratings of desire to improve the situation. Conversely, ratings of work revealed the same pattern of results as word recognition performance. Ratings of tiredness and desire to give up were unaffected by hearing aid microphone setting or noise configuration. Participant ratings of their desire to improve the listening situation appear to be reliable subjective indicators of listening effort that align with results from a behavioural measure of listening effort.
Optimization of actuator arrays for aircraft interior noise control
NASA Technical Reports Server (NTRS)
Cabell, R. H.; Lester, H. C.; Mathur, G. P.; Tran, B. N.
1993-01-01
A numerical procedure for grouping actuators in order to reduce the number of degrees of freedom in an active noise control system is evaluated using experimental data. Piezoceramic actuators for reducing aircraft interior noise are arranged into groups using a nonlinear optimization routine and a clustering algorithm. An actuator group is created when two or more actuators are driven with the same control input. This procedure is suitable for active control applications where actuators are already mounted on a structure. The feasibility of the technique is demonstrated using measured data from the aft cabin of a Douglas DC-9 fuselage. The measured data include transfer functions between 34 piezoceramic actuators and 29 interior microphones, as well as microphone responses due to the primary noise produced by external speakers. Control inputs for the grouped actuators were calculated so that a cost function, defined as a quadratic pressure term plus a penalty term, was minimized. The measured transfer functions and microphone responses are checked by comparing calculated noise reductions with measured noise reductions at four frequencies. The grouping procedure is then used to determine actuator groups that improve overall interior noise reductions by 5.3 to 15 dB compared to the baseline experimental configuration.
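The control computation can be sketched as follows: grouping actuators amounts to summing the corresponding columns of the actuator-to-microphone transfer matrix, and the grouped control inputs then minimize a quadratic cost consisting of the residual microphone pressures plus a control-effort penalty. This is a minimal sketch under assumed data shapes (for example, 29 microphones and 34 actuators as in the measured data set), not the optimization code used in the paper.

```python
import numpy as np

def grouped_control_inputs(T, p_primary, groups, penalty=1e-2):
    """Regularized least-squares control inputs for grouped actuators.

    T          : (n_mics, n_actuators) complex transfer-function matrix
    p_primary  : (n_mics,) primary-field pressures at the microphones
    groups     : list of index lists; actuators within a group share one input
    penalty    : weight on the control-effort term of the quadratic cost
    """
    # Summing columns within each group gives the grouped transfer matrix
    Tg = np.column_stack([T[:, idx].sum(axis=1) for idx in groups])
    n_groups = Tg.shape[1]
    # Minimize ||p_primary + Tg u||^2 + penalty * ||u||^2
    A = Tg.conj().T @ Tg + penalty * np.eye(n_groups)
    b = -Tg.conj().T @ p_primary
    u = np.linalg.solve(A, b)
    residual = p_primary + Tg @ u
    reduction_dB = 10 * np.log10(np.sum(np.abs(p_primary)**2) / np.sum(np.abs(residual)**2))
    return u, reduction_dB
```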
Keidser, Gitte; O'Brien, Anna; Hain, Jens-Uwe; McLelland, Margot; Yeend, Ingrid
2009-11-01
Frequency-dependent microphone directionality alters the spectral shape of sound as a function of arrival azimuth. The influence of this effect on horizontal-plane localization performance was investigated. Using a 360-degree loudspeaker array and five stimuli with different spectral characteristics, localization performance was measured for 21 hearing-impaired listeners when wearing no hearing aids and when aided with no directionality, partial directionality (from 1 and 2 kHz upward), and full directionality. The test schemes were also evaluated in everyday life. Without hearing aids, localization accuracy was significantly poorer than normative data. Owing to inaudibility of high-frequency energy, front/back reversals were prominent, and they remained prominent when aided with omnidirectional microphones. For stimuli with low-frequency emphasis, directionality had no further effect on localization. For stimuli with sufficient mid- and high-frequency information, full directionality had a small positive effect on front/back localization but a negative effect on left/right localization. Partial directionality further improved front/back localization and had no significant effect on left/right localization. The field test revealed no significant effects. The alternative spectral cues provided by frequency-dependent directionality improve front/back localization in hearing-aid users.
Improved Phased Array Imaging of a Model Jet
NASA Technical Reports Server (NTRS)
Dougherty, Robert P.; Podboy, Gary G.
2010-01-01
An advanced phased array system, OptiNav Array 48, and a new deconvolution algorithm, TIDY, have been used to make octave band images of supersonic and subsonic jet noise produced by the NASA Glenn Small Hot Jet Acoustic Rig (SHJAR). The results are much more detailed than previous jet noise images. Shock cell structures and the production of screech in an underexpanded supersonic jet are observed directly. Some trends are similar to observations using spherical and elliptic mirrors that partially informed the two-source model of jet noise, but the radial distribution of high frequency noise near the nozzle appears to differ from expectations of this model. The beamforming approach has been validated by agreement between the integrated image results and the conventional microphone data.
Wavelet-Based Adaptive Denoising of Phonocardiographic Records
2001-10-25
Phonocardiography, including the recording of fetal heart sounds on the maternal abdominal surface. Keywords: phonocardiography, wavelets, denoising, signal ... fetal heart rate monitoring [2], [7], [8]. Unfortunately, heart sound records are very often disturbed by various factors, which can prohibit their ... recorded the acoustic signals. The first microphone was inserted into the focus of a stethoscope and it recorded the acoustic signals of the heart (heart ...
NASA Astrophysics Data System (ADS)
Malfense Fierro, Gian Piero; Meo, Michele
2018-03-01
Two non-contact methods were evaluated to address the reliability and reproducibility concerns affecting industry adoption of nonlinear ultrasound techniques for non-destructive testing and evaluation (NDT/E) purposes. Semi- and fully air-coupled linear and nonlinear ultrasound methods were evaluated by testing for barely visible impact damage (BVID) in composite materials. Air-coupled systems provide various advantages over contact-driven systems, such as ease of inspection, the absence of contact and lubrication issues, and great potential for evaluating non-uniform geometries. The semi air-coupled setup used a suction-attached piezoelectric transducer to excite the sample and an array of low-cost microphones to capture the signal over the inspection area, while the second method was a purely air-coupled setup, using an air-coupled transducer both to excite the structure and to capture the signal. One of the issues facing nonlinear and air-coupled systems in general is transferring enough energy to stimulate wave propagation and, in the case of nonlinear ultrasound, to excite damage regions. Both methods provided nonlinear imaging (NIM) of damage regions using a sweep excitation methodology, with the semi air-coupled system providing clearer results.
Multimodal physiological sensor for motion artefact rejection.
Goverdovsky, Valentin; Looney, David; Kidmose, Preben; Mandic, Danilo P
2014-01-01
This work introduces a novel physiological sensor, which combines electrical and mechanical modalities in a co-located arrangement, to reject motion-induced artefacts. The mechanically sensitive element consists of an electret condenser microphone containing a light diaphragm, allowing it to detect local mechanical displacements and disregard large-scale whole body movements. The electrically sensitive element comprises a highly flexible membrane, conductive on one side and insulating on the other. It covers the sound hole of the microphone, thereby forming an isolated pocket of air between the membrane and the diaphragm. The co-located arrangement of the modalities allows the microphone to sense mechanical disturbances directly through the electrode, thus providing an accurate proxy to artefacts caused by relative motion between the skin and the electrode. This proxy is used to reject such artefacts in the electrical physiological signals, enabling enhanced recording quality in wearable health applications.
Instrumentation for measurement of aircraft noise and sonic boom
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J. (Inventor)
1975-01-01
A jet aircraft noise and sonic boom measuring device which converts sound pressure into electric current is described. An electric current proportional to the sound pressure level at a condenser microphone is produced and transmitted over a cable, amplified by a zero drive amplifier and recorded on magnetic tape. The converter is comprised of a local oscillator, a dual-gate field-effect transistor (FET) mixer and a voltage regulator/impedance translator. A carrier voltage that is applied to one of the gates of the FET mixer is generated by the local oscillator. The microphone signal is mixed with the carrier to produce an electrical current at the frequency of vibration of the microphone diaphragm by the FET mixer. The voltage of the local oscillator and mixer stages is regulated, the carrier at the output is eliminated, and a low output impedance at the cable terminals is provided by the voltage regulator/impedance translator.
Identification and tracking of particular speaker in noisy environment
NASA Astrophysics Data System (ADS)
Sawada, Hideyuki; Ohkado, Minoru
2004-10-01
Humans are able to exchange information smoothly by voice in a variety of situations, such as noisy, crowded environments and conversations with several speakers present. We can detect the position of a sound source in 3D space, extract a particular sound from a mixture of sounds, and recognize who is talking. By realizing this mechanism with a computer, new applications become possible: recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the speaker's location and individual voice characteristics. The study will be applied to developing an adaptive auditory system for a mobile robot that collaborates with factory workers.
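A common building block for locating a talker with a pair of array microphones is the generalized cross-correlation with phase transform (GCC-PHAT), which estimates the time difference of arrival and hence the bearing. The sketch below is a generic illustration of that step, not the system described in the paper.

```python
import numpy as np

def gcc_phat_tdoa(x1, x2, fs, max_tau=None):
    """Time difference of arrival between two microphone signals via GCC-PHAT."""
    n = len(x1) + len(x2)                              # zero-pad to avoid circular wrap
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    R = X1 * np.conj(X2)
    R /= np.abs(R) + 1e-12                             # phase transform weighting
    cc = np.fft.irfft(R, n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs    # delay in seconds

# Bearing from a microphone pair spaced d metres apart: theta = arcsin(c * tau / d)
```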
Speech enhancement on smartphone voice recording
NASA Astrophysics Data System (ADS)
Tris Atmaja, Bagus; Nur Farid, Mifta; Arifianto, Dhany
2016-11-01
Speech enhancement is a challenging task in audio signal processing: the quality of the targeted speech signal must be enhanced while other noise is suppressed. Speech enhancement algorithms have evolved rapidly, from spectral subtraction and Wiener filtering to spectral amplitude MMSE estimators and Non-negative Matrix Factorization (NMF). The smartphone, now a ubiquitous device, is used in all aspects of life, including journalism, both personally and professionally. Although many smartphones have two microphones (main and rear), only the main microphone is widely used for voice recording, which is why single-channel algorithms such as NMF are widely used for speech enhancement in this setting. This paper evaluates speech enhancement of smartphone voice recordings using the algorithms mentioned above. We also extend the NMF algorithm to Kullback-Leibler NMF with supervised separation. The last algorithm shows improved results compared to the others, as evaluated by spectrograms and PESQ scores.
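Of the methods compared, spectral subtraction is the simplest to sketch: a noise magnitude spectrum estimated from noise-only frames is subtracted from each frame of the noisy recording, with a spectral floor to limit musical noise. The sketch below is a generic single-channel illustration, not the paper's implementation; the parameters are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.5, floor=0.05):
    """Basic magnitude spectral subtraction with a spectral floor."""
    f, t, Z = stft(noisy, fs=fs, nperseg=512)               # 50% overlap, hop = 256
    noise_frames = int(noise_seconds * fs / 256)             # frames assumed noise-only
    noise_mag = np.abs(Z[:, :noise_frames]).mean(axis=1, keepdims=True)
    mag = np.maximum(np.abs(Z) - noise_mag, floor * np.abs(Z))   # subtract, keep a floor
    _, enhanced = istft(mag * np.exp(1j * np.angle(Z)), fs=fs, nperseg=512)
    return enhanced
```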
Spectral Separation of the Turbofan Engine Coherent Combustion Noise Component
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2008-01-01
The core noise components of a dual-spool turbofan engine (Honeywell TECH977) were separated by the use of a coherence function. A source-location technique was used, based on adjusting the time delay between the combustor pressure sensor signal and the far-field microphone signal to maximize the coherence and remove as much of the variation of the phase angle with frequency as possible. While adjusting the time delay to maximize the coherence and minimize the cross-spectrum phase angle variation with frequency, it was discovered that for the 130-degree microphone a 90.027 ms time shift worked best for the frequency band from 0 to 200 Hz, while an 86.975 ms time shift worked best for the band from 200 to 400 Hz. Since the 0 to 200 Hz band signal took more time to travel the same distance, it is slower than the 200 to 400 Hz band signal. This suggests the coherent cross-spectral density in the 0 to 200 Hz band is partly due to indirect combustion noise, attributed to hot spots interacting with the turbine. The signal in the 200 to 400 Hz frequency band is attributed mostly to direct combustion noise.
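The delay-search step can be sketched as follows: the far-field microphone signal is shifted by a candidate delay, the magnitude-squared coherence with the combustor pressure signal is computed, and the delay that maximizes the band-averaged coherence in a chosen band is retained. This is a generic sketch using standard coherence estimation, not the engine data processing used in the paper; the sampling rate and delay grid in the usage comment are assumptions.

```python
import numpy as np
from scipy.signal import coherence

def best_delay(combustor, farfield, fs, delays, band=(0.0, 200.0)):
    """Return the candidate delay (s) that maximizes band-averaged coherence."""
    scores = []
    for tau in delays:
        shift = int(round(tau * fs))
        y = farfield[shift:]                 # advance far-field signal by candidate delay
        x = combustor[:len(y)]
        f, Cxy = coherence(x, y, fs=fs, nperseg=4096)
        in_band = (f >= band[0]) & (f < band[1])
        scores.append(Cxy[in_band].mean())
    return delays[int(np.argmax(scores))]

# e.g. best_delay(p_comb, p_mic, fs=48000, delays=np.arange(0.080, 0.100, 0.0005))
```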
Time Delay Analysis of Turbofan Engine Direct and Indirect Combustion Noise Sources
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2008-01-01
The core noise components of a dual-spool turbofan engine were separated by the use of a coherence function. A source-location technique was used, based on adjusting the time delay between the combustor pressure sensor signal and the far-field microphone signal to maximize the coherence and remove as much of the variation of the phase angle with frequency as possible. It was discovered that for the 130-degree microphone a 90.027 ms time shift worked best for the frequency band from 0 to 200 Hz, while an 86.975 ms time shift worked best for the band from 200 to 400 Hz. Hence, the 0 to 200 Hz band signal took more time than the 200 to 400 Hz band signal to travel the same distance. This suggests the coherent cross-spectral density in the 0 to 200 Hz band is partly due to indirect combustion noise, attributed to entropy fluctuations, which travel at the flow velocity, interacting with the turbine. The signal in the 200 to 400 Hz frequency band is attributed mostly to direct combustion noise. Results are presented herein for engine power settings of 48, 54, and 60 percent of the maximum power setting.
The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task
Roverud, Elin; Streeter, Timothy; Mason, Christine R.; Kidd, Gerald
2017-01-01
The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is currently not known whether these benefits persist in the face of frequent changes in location of the target talker that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question–answer pairs that were embedded in a mixture of competing conversations. The participant’s task was to respond via a key press after each answer indicating whether it was correct or not. Spatialization of the stimuli and microphone array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared for a “dynamic” condition in which the target stimulus moved between three locations, and a “fixed” condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate. PMID:28758567
Mach stem formation in outdoor measurements of acoustic shocks.
Leete, Kevin M; Gee, Kent L; Neilsen, Tracianne B; Truscott, Tadd T
2015-12-01
Mach stem formation during outdoor acoustic shock propagation is investigated using spherical oxyacetylene balloons exploded above pavement. The location of the transition point from regular to irregular reflection and the path of the triple point are experimentally resolved using microphone arrays and a high-speed camera. The transition point falls between recent analytical work for weak irregular reflections and an empirical relationship derived from large explosions.
Infrasound-array-element frequency response: in-situ measurement and modeling
NASA Astrophysics Data System (ADS)
Gabrielson, T.
2011-12-01
Most array elements at the infrasound stations of the International Monitoring System use some variant of a multiple-inlet pipe system for wind-noise suppression. These pipe systems have a significant impact on the overall frequency response of the element. The spatial distribution of acoustic inlets introduces a response dependence that is a function of frequency and of vertical and horizontal arrival angle; the system of inlets, pipes, and summing junctions further shapes that response as the signal is ducted to the transducer. In-situ measurements, using a co-located reference microphone, can determine the overall frequency response and diagnose problems with the system. As of July 2011, the in-situ frequency responses for 25 individual elements at 6 operational stations (I10, I53, I55, I56, I57, and I99) have been measured. In support of these measurements, a fully thermo-viscous model for the acoustics of these multiple-inlet pipe systems has been developed. In addition to measurements at operational stations, comparative analyses have been done on experimental systems: a multiple-inlet radial-pipe system with varying inlet hole size; a one-quarter scale model of a 70-meter rosette system; and vertical directionality of a small rosette system using aircraft flyovers. [Funded by the US Army Space and Missile Defense Command.]
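An in-situ response estimate with a co-located reference microphone amounts to a standard transfer-function (H1) measurement, H1(f) = S_xy(f) / S_xx(f), with x the reference and y the array element, and the coherence serving as a quality check. The sketch below uses Welch-type spectral estimates and is a generic illustration, not the station processing chain.

```python
import numpy as np
from scipy.signal import csd, welch

def h1_response(reference, element, fs, nperseg=8192):
    """H1 frequency-response estimate of an array element relative to a
    co-located reference microphone, with the coherence as a quality check."""
    f, Sxy = csd(reference, element, fs=fs, nperseg=nperseg)
    _, Sxx = welch(reference, fs=fs, nperseg=nperseg)
    _, Syy = welch(element, fs=fs, nperseg=nperseg)
    H1 = Sxy / Sxx
    coh = np.abs(Sxy) ** 2 / (Sxx * Syy)
    return f, H1, coh
```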
Cochlear implant microphone location affects speech recognition in diffuse noise.
Kolberg, Elizabeth R; Sheffield, Sterling W; Davis, Timothy J; Sunderhaus, Linsey W; Gifford, René H
2015-01-01
Despite improvements in cochlear implants (CIs), CI recipients continue to experience significant communicative difficulty in background noise. Many potential solutions have been proposed to help increase signal-to-noise ratio in noisy environments, including signal processing and external accessories. To date, however, the effect of microphone location on speech recognition in noise has focused primarily on hearing aid users. The purpose of this study was to (1) measure physical output for the T-Mic as compared with the integrated behind-the-ear (BTE) processor mic for various source azimuths, and (2) to investigate the effect of CI processor mic location for speech recognition in semi-diffuse noise with speech originating from various source azimuths as encountered in everyday communicative environments. A repeated-measures, within-participant design was used to compare performance across listening conditions. A total of 11 adults with Advanced Bionics CIs were recruited for this study. Physical acoustic output was measured on a Knowles Experimental Mannequin for Acoustic Research (KEMAR) for the T-Mic and BTE mic, with broadband noise presented at 0 and 90° (directed toward the implant processor). In addition to physical acoustic measurements, we also assessed recognition of sentences constructed by researchers at Texas Instruments, the Massachusetts Institute of Technology, and the Stanford Research Institute (TIMIT sentences) at 60 dBA for speech source azimuths of 0, 90, and 270°. Sentences were presented in a semi-diffuse restaurant noise originating from the R-SPACE 8-loudspeaker array. Signal-to-noise ratio was determined individually to achieve approximately 50% correct in the unilateral implanted listening condition with speech at 0°. Performance was compared across the T-Mic, 50/50, and the integrated BTE processor mic. The integrated BTE mic provided approximately 5 dB attenuation from 1500-4500 Hz for signals presented at 0° as compared with 90° (directed toward the processor). The T-Mic output was essentially equivalent for sources originating from 0 and 90°. Mic location also significantly affected sentence recognition as a function of source azimuth, with the T-Mic yielding the highest performance for speech originating from 0°. These results have clinical implications for (1) future implant processor design with respect to mic location, (2) mic settings for implant recipients, and (3) execution of advanced speech testing in the clinic. American Academy of Audiology.
DART Core/Combustor-Noise Initial Test Results
NASA Technical Reports Server (NTRS)
Boyle, Devin K.; Henderson, Brenda S.; Hultgren, Lennart S.
2017-01-01
Contributions from the combustor to the overall propulsion noise of civilian transport aircraft are starting to become important due to turbofan design trends and advances in the mitigation of other noise sources. Future propulsion systems for ultra-efficient commercial air vehicles are projected to have increasingly higher bypass ratios, with larger fans combined with much smaller cores and ultra-clean-burning, fuel-flexible combustors. Unless effective noise-reduction strategies are developed, combustor noise is likely to become a prominent contributor to overall airport community noise in the future. The new NASA DGEN Aeropropulsion Research Turbofan (DART) is a cost-efficient testbed for the study of core-noise physics and mitigation. This presentation gives a brief description of the recently completed DART core/combustor-noise baseline test in the NASA GRC Aero-Acoustic Propulsion Laboratory (AAPL). Acoustic data were simultaneously acquired using the AAPL overhead microphone array in the engine aft-quadrant far field, a single midfield microphone, and two semi-infinite-tube unsteady pressure sensors at the core-nozzle exit. An initial assessment shows that the data are of high quality and compare well with results from a quick 2014 feasibility test. Combustor-noise components of the measured total-noise signatures were educed using a two-signal source-separation method and are found to occur in the expected frequency range. The research described herein is aligned with the NASA Ultra-Efficient Commercial Transport strategic thrust and is supported by the NASA Advanced Air Vehicle Program, Advanced Air Transport Technology Project, under the Aircraft Noise Reduction Subproject.
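The two-signal source-separation step can be sketched with the coherent output power: the part of the far-field auto-spectrum coherent with the core pressure sensor is gamma^2_xy(f) * G_yy(f), and the remainder is attributed to other sources. This is a generic sketch of that idea, not the NASA processing code, and it assumes simultaneously sampled signals.

```python
import numpy as np
from scipy.signal import coherence, welch

def coherent_output_power(core_sensor, farfield_mic, fs, nperseg=4096):
    """Split the far-field auto-spectrum into the part coherent with the core
    sensor (combustor-related) and the remainder, via the two-signal method."""
    f, gamma2 = coherence(core_sensor, farfield_mic, fs=fs, nperseg=nperseg)
    _, Gyy = welch(farfield_mic, fs=fs, nperseg=nperseg)
    return f, gamma2 * Gyy, (1.0 - gamma2) * Gyy
```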
A unified acquisition system for acoustic data
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.; Holmes, H. K.
1977-01-01
A multichannel, acoustic AM-carrier system was developed for a wide variety of applications, particularly for aircraft noise and sonic boom measurements. Each data acquisition channel consists of a condenser microphone, an acoustic signal converter, and a Zero Drive amplifier, along with peripheral supporting equipment. A control network ensures continuous optimal tuning of the converter and permits remote calibration of the condenser microphone. With a 12.70-mm (1/2-in.) condenser microphone, the converter/Zero Drive amplifier combination has a frequency response from 0 Hz to 20 kHz (-3 dB), a dynamic range exceeding 70 dB, and a minimum noise floor of 50 dB (ref. 20 micro Pa) in the band 22.4 Hz to 22.4 kHz. The system requires no external impedance-matching networks and is insensitive to cable length, at least up to 900 m (3,000 ft). System gain varies only ±1 dB over the temperature range 4 to 54 C (40 to 130 F). Adapters are available to accommodate 23.77-mm (1-in.) and 6.35-mm (1/4-in.) microphones and to provide 30-dB attenuation. A field test to obtain the acoustic time history of a helicopter flyover proved successful.
NASA Technical Reports Server (NTRS)
Zuckerwar, Allan J.; Shams, Qamar A.; Sealey, Bradley S.; Comeaux, Toby
2005-01-01
A compact windscreen has been conceived for a microphone of a type used outdoors to detect atmospheric infrasound from a variety of natural and manmade sources. Wind at the microphone site contaminates received infrasonic signals (defined here as sounds having frequencies <20 Hz), because a microphone cannot distinguish between infrasonic pressures (which propagate at the speed of sound) and convective pressure fluctuations generated by wind turbulence. Hence, success in measurement of outdoor infrasound depends on effective screening of the microphone from the wind. The present compact windscreen is based on a principle: that infrasound at sufficiently large wavelength can penetrate any barrier of practical thickness. Thus, a windscreen having solid, non-porous walls can block convected pressure fluctuations from the wind while transmitting infrasonic acoustic waves. The transmission coefficient depends strongly upon the ratio between the acoustic impedance of the windscreen and that of air. Several materials have been found to have impedance ratios that render them suitable for use in constructing walls that have practical thicknesses and are capable of high transmission of infrasound. These materials (with their impedance ratios in parentheses) are polyurethane foam (222), space shuttle tile material (332), balsa (323), cedar (3,151), and pine (4,713).
Estimation of Temporal Gait Parameters Using a Wearable Microphone-Sensor-Based System
Wang, Cheng; Wang, Xiangdong; Long, Zhou; Yuan, Jing; Qian, Yueliang; Li, Jintao
2016-01-01
Most existing wearable gait analysis methods focus on the analysis of data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless and wearable gait analysis system which uses microphone sensors to collect footstep sound signals during walking. To the best of our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating the temporal parameters of gait is presented. The algorithm fully uses the fusion of the footstep sound signals from both feet and includes three stages: footstep detection, heel-strike and toe-on event detection, and calculation of temporal gait parameters. Experimental results show that, with a total of 240 data sequences and 1732 steps collected using three different gait data collection strategies from 15 healthy subjects, the proposed system achieves an average 0.955 F1-measure for footstep detection, an average 94.52% accuracy rate for heel-strike detection, and a 94.25% accuracy rate for toe-on detection. Using these detection results, nine temporal gait parameters are calculated; these parameters are consistent with their corresponding normal gait temporal parameters and with the labeled-data calculation results. The results verify the effectiveness of the proposed system and algorithm for temporal gait parameter estimation. PMID:27999321
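The footstep-detection stage can be sketched as energy-envelope peak picking on the footstep sound signal, with the detected event times then giving stride and step intervals. The thresholds, filter settings, and minimum step interval below are placeholders, not the tuned values used in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_footsteps(audio, fs, min_interval=0.35):
    """Return approximate footstep event times from a single-foot sound signal."""
    # Short-term energy envelope via rectification and low-pass filtering
    b, a = butter(4, 20 / (fs / 2), btype="low")
    envelope = filtfilt(b, a, np.abs(audio))
    threshold = envelope.mean() + 2 * envelope.std()
    peaks, _ = find_peaks(envelope, height=threshold, distance=int(min_interval * fs))
    return peaks / fs

# Temporal parameters then follow from the event times, e.g. stride time from
# successive events of the same foot and step time from alternating feet.
```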
F-16XL and F-18 High Speed Acoustic Flight Test Databases
NASA Technical Reports Server (NTRS)
Kelly, J. J.; Wilson, M. R.; Rawls, J., Jr.; Norum, T. D.; Golub, R. A.
1999-01-01
This report presents the recorded acoustic data and the computed narrow-band and 1/3-octave band spectra produced by F-18 and F-16XL aircraft in subsonic flight over an acoustic array. Both broadband shock noise and turbulent mixing noise are observed in the spectra. Radar and C-band tracking systems provided the aircraft position, which enabled directivity and smear angles from the aircraft to each microphone to be computed. These angles are based on source emission time and thus give some indication of the directivity of the radiated sound field due to jet noise. A follow-on static test was also conducted in which acoustic and engine data were obtained. The acoustic data described in the report have application to community noise analysis, noise source characterization, and validation of prediction models. A detailed description of the signal processing procedures is provided. Follow-on static tests of each aircraft were also conducted, for which engine data and far-field acoustic data are presented.
Real-time spectrum estimation–based dual-channel speech-enhancement algorithm for cochlear implant
2012-01-01
Background Improvement of the cochlear implant (CI) front-end signal acquisition is needed to increase speech recognition in noisy environments. To suppress directional noise, we introduce a speech-enhancement algorithm based on microphone array beamforming and spectral estimation. The experimental results indicate that this method is robust to directional mobile noise and strongly enhances the desired speech, thereby improving the performance of CI devices in a noisy environment. Methods The spectrum estimation and array beamforming methods were combined to suppress the ambient noise. The directivity coefficient was estimated in the noise-only intervals and was updated to adapt to the mobile noise. Results The proposed algorithm was implemented in the CI speech strategy. For the actual parameters, we use a maximally flat (Maxflat) filter to obtain fractional sampling points and a cepstrum method to distinguish desired-speech frames from noise frames. Broadband adjustment coefficients were added to compensate for the energy loss in the low-frequency band. Discussion The approximation of the directivity coefficient is tested and the errors are discussed. We also analyze the algorithm's constraints for noise estimation and distortion in CI processing. The performance of the proposed algorithm is analyzed and further compared with other prevalent methods. Conclusions A hardware platform was constructed for the experiments. The speech-enhancement results showed that our algorithm can suppress non-stationary noise with high SNR improvement. Excellent performance of the proposed algorithm was obtained in the speech enhancement experiments and mobile testing, and the signal distortion results indicate that the algorithm is robust, with high SNR improvement and low speech distortion. PMID:23006896
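A closely related two-microphone approach, shown here only as a sketch, is a frequency-domain MVDR beamformer whose noise covariance is estimated from noise-only frames, echoing the noise-only-interval estimation described above. This is a common variant rather than the paper's exact algorithm; the steering-vector construction, frame counts, and loading factor are assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def mvdr_two_mic(x, fs, steering, noise_frames=20, nperseg=256, diag_load=1e-3):
    """Frequency-domain MVDR on a 2-channel recording x (shape: 2 x samples).

    steering : (n_freqs, 2) complex steering vectors toward the desired talker,
               e.g. steering[k] = [1, exp(-2j*pi*f[k]*tau)] for an assumed delay tau.
    The noise covariance is estimated from the first `noise_frames` STFT frames,
    assumed to contain noise only."""
    f, t, X = stft(x, fs=fs, nperseg=nperseg)            # X: (2, n_freqs, n_frames)
    X = np.transpose(X, (1, 0, 2))                       # -> (n_freqs, 2, n_frames)
    N = X[:, :, :noise_frames]
    R = N @ N.conj().transpose(0, 2, 1) / noise_frames   # per-bin noise covariance
    R = R + diag_load * np.eye(2)[None, :, :]            # diagonal loading
    Rinv_d = np.linalg.solve(R, steering[..., None])     # (n_freqs, 2, 1)
    w = Rinv_d / (steering.conj()[:, None, :] @ Rinv_d)  # MVDR weights
    Y = (w.conj().transpose(0, 2, 1) @ X).squeeze(1)     # beamformed STFT
    _, y = istft(Y, fs=fs, nperseg=nperseg)
    return y
```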
2000-11-27
After their arrival at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Mission Specialist Carlos Noriega. Behind him stand Commander Brent Jett, Pilot Michael Bloomfield and Mission Specialists Joseph Tanner and Marc Garneau, who is with the Canadian Space Agency. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST
2000-11-27
After arriving at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Mission Specialist Marc Garneau, who is with the Canadian Space Agency. Behind him can be seen Mission Specialists Joseph Tanner (left) and Carlos Noriega. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST
2000-11-27
After their arrival at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Mission Specialist Marc Garneau, who is with the Canadian Space Agency. Behind him stand Commander Brent Jett, Pilot Michael Bloomfield and Mission Specialists Joseph Tanner and Carlos Noriega. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST
2000-11-27
After their arrival at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Mission Specialist Joseph Tanner. Behind him stand Commander Brent Jett, Pilot Michael Bloomfield and Mission Specialists Marc Garneau, who is with the Canadian Space Agency, and Carlos Noriega. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST
2000-11-27
After their arrival at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Pilot Michael Bloomfield. Behind him stand Commander Brent Jett and Mission Specialists Joseph Tanner, Carlos Noriega and Marc Garneau, who is with the Canadian Space Agency. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST
NASA Technical Reports Server (NTRS)
Hilton, D. A.; Henderson, H. R.; Maglieri, D. J.; Bigler, W. B., II
1978-01-01
In order to expand the database of helicopter external noise characteristics, a flyover noise measurement program was conducted utilizing the NASA Civil Helicopter Research Aircraft. The remotely operated multiple array acoustics range (ROMAAR) and a 2560-m linear microphone array were used to document the noise characteristics of the test helicopter during flyby and landing operations. By utilizing both the ROMAAR concept and the linear array, the data necessary to plot the ground noise footprints and noise radiation patterns were obtained. Examples are presented of the measured noise signature of the test helicopter, the ground noise footprint or contours, and the directivity patterns measured during level flyby and landing operations of a large, multibladed, non-banging helicopter, the CH-53.
Monitoring of atmospheric nuclear explosions with infrasonic microphone arrays
NASA Astrophysics Data System (ADS)
Wilson, Charles R.
2002-11-01
A review is given of the various United States programs for the infrasonic monitoring of atmospheric nuclear explosions from their inception in 1946 to their termination in 1975. The US Atomic Energy Detection System (USAEDS) monitored all nuclear weapons tests that were conducted by the Soviet Union, France, China, and the US with arrays of sensitive microbarographs in a worldwide network of infrasonic stations. A discussion of the source mechanism for the creation and subsequent propagation around the globe of long wavelength infrasound from explosions (volcanic and nuclear) is given to show the efficacy of infrasonic monitoring for the detection of atmospheric nuclear weapons tests. The equipment that was used for infrasound detection, the design of the sensor arrays, and the data processing techniques that were used by USAEDS are all discussed.
Infrasonic monitoring of snow avalanches in the Alps
NASA Astrophysics Data System (ADS)
Marchetti, E.; Ulivieri, G.; Ripepe, M.; Chiambretti, I.; Segor, V.
2012-04-01
Risk assessment of snow avalanches is mostly based on weather conditions and snow cover. However, robust validation of the risk forecasts requires identifying all avalanches that occur, in order to compare predictions with actual events. For this purpose, in December 2010 we installed a permanent 4-element, small-aperture (100 m) infrasound array in the Alps, after a pilot experiment carried out in Gressonay during the 2009-2010 winter season. The array has been deployed in the Ayas Valley, at an elevation of 2000 m a.s.l., where natural avalanches are expected and controlled events are regularly performed. The array consists of four Optimic 2180 infrasonic microphones, with a sensitivity of 10^-3 Pa in the 0.5-50 Hz frequency band, and a 4-channel Guralp CMG-DM24 A/D converter sampling at 100 Hz. Timing is achieved with a GPS receiver. Data are transmitted to the Department of Earth Sciences of the University of Firenze, where they are recorded and processed in real time. A multi-channel semblance analysis is carried out on the continuous data set as a function of slowness, back-azimuth, and frequency of the recorded infrasound, in order to detect all avalanches against the background signal, which is strongly affected by microbaroms and mountain-induced gravity waves. This permanent installation in Italy will allow the efficiency of the system for short-to-medium range (2-8 km) avalanche detection to be verified, and may provide an important validation of modeled avalanche activity during this winter season. Moreover, the real-time processing of infrasonic array data may strongly contribute to avalanche risk assessment by providing an up-to-date description of ongoing events.
Automated Cough Assessment on a Mobile Platform
2014-01-01
The development of an Automated System for Asthma Monitoring (ADAM) is described. This consists of a consumer electronics mobile platform running a custom application. The application acquires an audio signal from an external user-worn microphone connected to the device analog-to-digital converter (microphone input). This signal is processed to determine the presence or absence of cough sounds. Symptom tallies and raw audio waveforms are recorded and made easily accessible for later review by a healthcare provider. The symptom detection algorithm is based upon standard speech recognition and machine learning paradigms and consists of an audio feature extraction step followed by a Hidden Markov Model based Viterbi decoder that has been trained on a large database of audio examples from a variety of subjects. Multiple Hidden Markov Model topologies and orders are studied. Performance of the recognizer is presented in terms of the sensitivity and the rate of false alarm as determined in a cross-validation test. PMID:25506590
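The decoding step can be sketched with a minimal log-domain Viterbi decoder applied to per-frame state log-likelihoods (for example, "cough" versus "background" states produced by the acoustic model). This is a generic illustration of Viterbi decoding, not the ADAM application code.

```python
import numpy as np

def viterbi(log_likelihood, log_trans, log_init):
    """Most likely state sequence given per-frame state log-likelihoods.

    log_likelihood : (n_frames, n_states) frame log-likelihoods from the acoustic model
    log_trans      : (n_states, n_states) log transition probabilities
    log_init       : (n_states,) log initial-state probabilities
    """
    n_frames, n_states = log_likelihood.shape
    delta = log_init + log_likelihood[0]
    backptr = np.zeros((n_frames, n_states), dtype=int)
    for t in range(1, n_frames):
        scores = delta[:, None] + log_trans                     # (from_state, to_state)
        backptr[t] = np.argmax(scores, axis=0)
        delta = scores[backptr[t], np.arange(n_states)] + log_likelihood[t]
    path = [int(np.argmax(delta))]
    for t in range(n_frames - 1, 0, -1):                        # backtrace
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]
```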
Sensing of Particular Speakers for the Construction of Voice Interface Utilized in Noisy Environment
NASA Astrophysics Data System (ADS)
Sawada, Hideyuki; Ohkado, Minoru
Humans are able to exchange information smoothly by voice in a variety of situations, such as noisy, crowded environments and conversations with several speakers present. We can detect the position of a sound source in 3D space, extract a particular sound from a mixture of sounds, and recognize who is talking. By realizing this mechanism with a computer, new applications become possible: recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the speaker's location and individual voice characteristics. The study will be applied to developing an adaptive auditory system for a mobile robot that collaborates with factory workers.
Method of fan sound mode structure determination
NASA Technical Reports Server (NTRS)
Pickett, G. F.; Sofrin, T. G.; Wells, R. W.
1977-01-01
A method for determining the fan sound mode structure in the inlet of turbofan engines using in-duct acoustic pressure measurements is presented. The method is based on the simultaneous solution of a set of equations whose unknowns are the modal amplitudes and phases. A computer program for the solution of the equation set was developed. An additional computer program was developed to calculate microphone locations whose use yields an equation set that does not give rise to numerical instabilities. In addition to the development of a method for determining coherent modal structure, experimental and analytical approaches are developed for determining the amplitude frequency spectrum of randomly generated sound modes in narrow annulus ducts. Two approaches are defined: one based on the use of cross-spectral techniques and the other based on the use of an array of microphones.
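The simultaneous-equation step can be sketched as a linear inverse problem: each microphone pressure is a sum of circumferential modes exp(i m theta) with unknown complex amplitudes, and the microphone angles should be chosen so that the resulting matrix is well conditioned. The sketch below is a generic circumferential-mode-only illustration; the method in the report also accounts for radial order and axial propagation.

```python
import numpy as np

def mode_amplitudes(pressures, mic_angles, mode_orders):
    """Solve p = E a for complex modal amplitudes a, and report the condition
    number of E as a check on the microphone placement."""
    E = np.exp(1j * np.outer(mic_angles, mode_orders))   # E[k, n] = exp(i m_n theta_k)
    a, *_ = np.linalg.lstsq(E, pressures, rcond=None)
    return a, np.linalg.cond(E)

# Example: 8 equally spaced wall microphones resolving circumferential modes m = -3 ... 3
angles = 2 * np.pi * np.arange(8) / 8
modes = np.arange(-3, 4)
# amplitudes, condition = mode_amplitudes(measured_pressures, angles, modes)
```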
NASA Technical Reports Server (NTRS)
Hubbard, H. H.; Maglieri, D. J.
1990-01-01
Tables are provided of measured sonic boom signature data derived from supersonic flyover tests of the XB-70, B-58 and F-104 aircraft for ranges of altitude and Mach number. These tables represent a convenient hard copy version of available electronic files and complement preliminary information included in a reference National Sonic Boom Evaluation Office document.
NASA Technical Reports Server (NTRS)
Prosser, W. H.; Jackson, K. E.; Kellas, S.; Smith, B. T.; McKeon, J.; Friedman, A.
1995-01-01
Transverse matrix cracking in cross-ply graphite/epoxy laminates was studied with advanced acoustic emission (AE) techniques. The primary goal of this research was to measure the load required to initiate the first transverse matrix crack in cross-ply laminates of different thicknesses. Other methods had previously been used for these measurements, including penetrant-enhanced radiography, optical microscopy, and audible acoustic microphone measurements. The former methods required that the mechanical test be paused for measurements at load intervals, which slowed the test procedure and did not provide the required resolution in load. With acoustic microphones, acoustic signals from cracks could not be clearly differentiated from other noise sources such as grip damage, specimen slippage, or test machine noise. A second goal of this work was to use the high-resolution source location accuracy of the advanced acoustic emission techniques to determine whether the crack initiation site was at the specimen edge or in the interior of the specimen. In this research, advanced AE techniques using broadband sensors, high-capture-rate digital waveform acquisition, and plate-wave-propagation-based analysis were applied to cross-ply composite coupons with different numbers of 0 and 90 degree plies. Noise signals, believed to be caused by grip damage or specimen slipping, were eliminated based on their plate wave characteristics; such signals were always located outside the sensor gage length in the gripped region of the specimen. Cracks were confirmed post-test by microscopic analysis of a polished specimen edge, backscatter ultrasonic scans, and, in limited cases, penetrant-enhanced radiography. For specimens with three or more 90 degree plies together, there was an exact one-to-one correlation between AE crack signals and observed cracks. The ultrasonic scans and some destructive sectioning analysis showed that the cracks extended across the full width of the specimen. Furthermore, the locations of the cracks from the AE data were in excellent agreement with the locations measured with the microscope. The high-resolution source location capability of this technique, combined with an array of sensors, was able to determine that the cracks initiated at the specimen edges rather than in the interior. For specimens with only one or two 90 degree plies, the crack-like signals were significantly smaller in amplitude and there was not a one-to-one correlation with observed cracks. This was similar to previous results. In this case, however, ultrasonic and destructive sectioning analysis revealed that the cracks did not extend across the specimen; they initiated at the edge but did not propagate any appreciable distance into the specimen. This explains the much smaller AE signal amplitudes and the difficulty in correlating these signals with actual cracks in this as well as in the previous study.
Acoustic temperature measurement in a rocket noise field.
Giraud, Jarom H; Gee, Kent L; Ellsworth, John E
2010-05-01
A 1 μm diameter platinum wire resistance thermometer has been used to measure temperature fluctuations generated during a static GEM-60 rocket motor test. Exact and small-signal relationships between acoustic pressure and acoustic temperature are derived in order to compare the temperature probe output with that of a 3.18 mm diameter condenser microphone. After preliminary plane wave tests yielded good agreement between the transducers within the temperature probe's ∼2 kHz bandwidth, comparison between the temperature probe and microphone data during the motor firing show that the ±∼3 K acoustic temperature fluctuations are a significant contributor to the total temperature variations.
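For reference, the standard small-signal adiabatic relation for an ideal gas (the paper also derives an exact form) links the acoustic temperature and pressure fluctuations as

\[ \frac{T'}{T_0} \;=\; \frac{\gamma-1}{\gamma}\,\frac{p'}{p_0}, \]

where \(T_0\) and \(p_0\) are the ambient temperature and pressure and \(\gamma\) is the ratio of specific heats.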
Electrode surface profile and the performance of condenser microphones.
Fletcher, N H; Thwaites, S
2002-12-01
Condenser microphones of all types are traditionally made with a planar electrode parallel to an electrically conducting diaphragm, additional diaphragm stiffness at acoustic frequencies being provided by the air enclosed in a cavity behind the diaphragm. In all designs, the motion of the diaphragm in response to an acoustic signal is greatest near its center and reduces to zero at its edges. Analysis shows that this construction leads to less than optimal sensitivity and to harmonic distortion at high sound levels when the diaphragm motion is appreciable compared with its spacing from the electrode. Microphones of this design are also subject to acoustic collapse of the diaphragm under the influence of pressure pulses such as might be produced by wind. A new design is proposed in which the electrode is shaped as a shallow dish, and it is shown that this construction increases the sensitivity by about 4.5 dB, and also completely eliminates harmonic distortion originating in the cartridge.
Keidser, Gitte; Hartley, David; Carter, Lyndal
2008-12-01
To investigate the long-term benefit of multichannel wide dynamic range compression (WDRC) alone and in combination with directional microphones and noise reduction/speech enhancement for listeners with severe or profound hearing loss. At the conclusion of a research project, 39 participants with severe or profound hearing loss were fitted with WDRC in one program and WDRC with directional microphones and speech enhancement enabled in a 2nd program. More than 2 years after the 1st participants exited the project, a retrospective survey was conducted to determine the participants' use of, and satisfaction with, the 2 programs. From the 30 returned questionnaires, it seems that WDRC is used with a high degree of satisfaction in general everyday listening situations. The reported benefit from the addition of a directional microphone and speech enhancement for listening in noisy environments was lower and varied among the users. This variable was significantly correlated with how much the program was used. The less frequent and more varied use of the program with directional microphones and speech enhancement activated in combination suggests that these features may be best offered in a 2nd listening program for listeners with severe or profound hearing loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryan, Robert; Albertani, Roberto; Polagye, Brian
Wind energy production in the U.S. is projected to increase to 35% of our nation’s energy by 2050. This substantial increase in the U.S. is only a portion of the global wind industry growth, as many countries strive to reduce greenhouse gas emissions. A major environmental concern and potential market barrier for expansion of wind energy is bird and bat mortality from impacts with turbine blades, towers, and nacelles. Carcass surveys are the standard protocol for quantifying mortality at onshore sites. This method is imperfect, however, due to survey frequency at remote sites, removal of carcasses by scavengers between surveys, searcher efficiency, and other biases, as well as delays of days to weeks or more in obtaining information on collision events. Furthermore, carcass surveys are not feasible at offshore wind energy sites. Near-real-time detection and quantification of interaction rates is possible at both onshore and offshore wind facilities using an onboard, integrated sensor package with data transmitted to central processing centers. We developed and experimentally tested an array of sensors that continuously monitors for interactions (including impacts) of birds and bats with wind turbines. The synchronized array includes three sensor nodes: (1) vibration (accelerometers and contact microphones), (2) optical (visual and infrared spectrum cameras), and (3) bioacoustics (acoustic and ultrasonic microphones). Accelerometers and contact acoustic microphones are placed at the root of each blade to detect impact vibrations and sound waves propagating through the structure. On-board data processing algorithms using wavelet analysis detect impact signals exceeding background vibration. Stereo-visual and infrared cameras were placed on the nacelle to allow target tracking, distance, and size calculations. On-board image processing and target detection algorithms identify moving targets within the camera field of view. Bioacoustic recorders monitor vocalizations and echolocations to aid in identifying organisms involved in interactions. Data from all sensors are temporarily stored in ring (i.e., circular) buffers with a duration varying by sensor type. Detection of target presence or impact by any of the sensors can trigger the archiving of data from all buffers for transmission to a central data processing center for evaluation and post-processing. This mitigates the risk of “data mortgages” posed by continual recording and minimizes personnel time required to manually review event data. We first conducted individual component tests at laboratories and field sites in Corvallis and Newport, Oregon, and Seattle and Sequim, Washington. We conducted additional component tests on research wind turbines at the North American Wind Research and Training Center, Mesalands Community College (MCC; General Electric 1.5 MW turbine), New Mexico, and the National Wind Technology Center, National Renewable Energy Laboratory (NREL; Controls Advanced Research Turbines 3 [CART 3] 600 kW Westinghouse turbine), Colorado. We conducted fully integrated system tests at NREL in October 2014 and April 2015. We used only research wind turbines so that we could conduct controlled, experimentally generated impacts using empty and water-filled tennis balls shot from a compressed air cannon on the ground. The approximately 57-140 g tennis balls (depending on water content) were at the upper mass range for bats, but at the lower mass range for marine birds.
Therefore, the ability to detect collisions of most seabirds is likely greater than our experiments demonstrate, but possibly lower for some bats depending on the background signal of a given turbine. Vibration data demonstrated that background signals of operating turbines varied markedly among the CART 3 under normal operation (greatest), the GE turbine (moderate), and the CART 3 during idle rotation (generator not engaged; least). In total, we measured 63 experimental blade impacts on the two research turbines. Impact detection was dependent on background signals, position of impact on the blade (a tip strike resulted in the strongest impact signal), and impact kinetics (velocity of the ball and whether the ball struck the surface of the blade or the leading edge of the blade struck the ball). Overall detection percentage ranged from 100% for the "quietest" conditions (CART 3 idle rotation) down to 35% for the noisiest (CART 3 normal operation). Impact signals were detected from sensors on more than one blade (i.e., blades other than the blade struck) 50% - 75% of the time. Stereo imaging provided valuable metrics, but increased data processing and equipment cost. Given the cost of cameras with sufficient resolution for target identification, we suggest mounting cameras directly on the blades to continuously view the entire rotor swept area with the fewest number of cameras. Bioacoustic microphones provide taxonomic identification, as well as information on ambient noise levels. They also assist in identifying environmental conditions such as hail storms, high winds, thunder, lightning, etc., that may contribute to a collision or a false positive detection. We demonstrated a proof of concept for an integrated sensor array to detect and identify bird and bat collisions with wind turbines. The next phase of research and development for this system will miniaturize and integrate sensors from all three nodes into a single wireless package that can be attached directly to the blade. This next-generation system would use all "smart" sensors capable of onboard data processing to drastically reduce data streams and processing time on a central computer. A provisional patent for the blade-mounted system was submitted by Oregon State University and recorded by the U.S. Patent and Trademark Office (application no. 62313028). Eventually, technology and industry advances will allow this low-cost monitoring system to be designed into materials during manufacturing so that all turbines could be monitored with either a subset or full suite of sensors. As standard equipment on all commercial turbines, the sensor suite would allow the industry to effectively monitor whether individual turbines were causing mortalities or not, and under what circumstances. It would also provide real-time evaluation of the mechanical and structural integrity of a turbine via vibration, image, and acoustic data streams, thereby permitting modifications in operation to limit environmental or mechanical damage.
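The on-board impact-detection idea can be sketched as wavelet-based thresholding of the blade vibration signal: decompose the accelerometer record, track a robust estimate of the background level of the detail coefficients, and flag excursions well above it. This is a generic sketch using PyWavelets, not the project's embedded algorithm; the wavelet choice and threshold factor are placeholders.

```python
import numpy as np
import pywt

def detect_impacts(accel, fs, wavelet="db4", level=4, k=6.0):
    """Flag candidate blade-impact times where finest-scale wavelet detail
    coefficients exceed a robust estimate of the operating-background level."""
    coeffs = pywt.wavedec(accel, wavelet, level=level)
    detail = coeffs[-1]                              # finest-scale detail coefficients
    # Robust background scale from the median absolute deviation
    sigma = np.median(np.abs(detail)) / 0.6745
    hits = np.nonzero(np.abs(detail) > k * sigma)[0]
    # Map detail-coefficient indices back to approximate sample times
    return hits * (len(accel) / max(len(detail), 1)) / fs
```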
Kim, Hannah; Ricketts, Todd A
2013-01-01
To investigate the test-retest reliability of real-ear aided response (REAR) measures in open and closed hearing aid fittings in children using appropriate probe-microphone calibration techniques (stored equalization for open fittings and concurrent equalization for closed fittings). Probe-microphone measurements were completed for two mini-behind-the-ear (BTE) hearing aids which were coupled to the ear using open and closed eartips via thin (0.9 mm) tubing. Before probe-microphone testing, the gain of each of the test hearing aids was programmed using an artificial ear simulator (IEC 711) and a Knowles Electronic Manikin for Acoustic Research to match the National Acoustic Laboratories-Non-Linear, version 1 targets for one of two separate hearing loss configurations using an Audioscan Verifit. No further adjustments were made, and the same amplifier gain was used within each hearing aid across both eartip configurations and all participants. Probe-microphone testing included real-ear occluded response (REOR) and REAR measures using the Verifit's standard speech signal (the carrot passage) presented at 65 dB sound pressure level (SPL). Two repeated probe-microphone measures were made for each participant with the probe-tube and hearing aid removed and repositioned between each trial in order to assess intrasubject measurement variability. These procedures were repeated using both open and closed domes. Thirty-two children, ages ranging from 4 to 14 yr. The test-retest standard deviations for open and closed measures did not exceed 4 dB at any frequency. There was also no significant difference between the open (stored equalization) and closed (concurrent equalization) methods. Reliability was particularly similar in the high frequencies and was also quite similar to that reported in previous research. There was no correlation between reliability and age, suggesting high reliability across all ages evaluated. The findings from this study suggest that reliable probe-microphone measurements are obtainable on children 4 yr and older for both traditional unvented and open-canal hearing aid fittings. These data suggest that clinicians should not avoid fitting open technology to children as young as 4 y because of concerns regarding the reliability of verification techniques. American Academy of Audiology.
Introduction to Communication Systems
2014-01-17
channel modeling in complex baseband using ray tracing, reinforced by a software lab which applies these ideas to simulate link time variations for a...analog acoustic signal is generated (often translated to an analog electrical signal using a microphone). Even when this music is recorded onto a...include line of sight (LOS) and reflected paths. Equation (2.35) immediately tells us how to model multipath channels, in which multiple scattered
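As a rough illustration of the multipath idea the snippet alludes to, the sketch below applies a generic complex-baseband channel made of a line-of-sight ray and one reflected ray, each with its own delay and gain. This is our own toy model, not the book's Equation (2.35); the sample rate, carrier frequency, delays, and gains are arbitrary.

```python
# Toy two-ray complex-baseband multipath channel (LOS + one reflection).
import numpy as np

def two_ray_channel(x, fs, fc, delays, gains):
    """Sum delayed, phase-rotated copies of the baseband signal x."""
    n = len(x)
    y = np.zeros(n, dtype=complex)
    for tau, a in zip(delays, gains):
        d = int(round(tau * fs))                    # delay in samples
        phase = np.exp(-1j * 2 * np.pi * fc * tau)  # carrier phase rotation
        y[d:] += a * phase * x[:n - d]
    return y

fs, fc = 1e6, 2.4e9                                 # assumed sample rate, carrier
x = np.exp(1j * 2 * np.pi * 50e3 * np.arange(1000) / fs)
y = two_ray_channel(x, fs, fc, delays=[0.0, 3e-6], gains=[1.0, 0.4])
```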
Castres, Fabrice O; Joseph, Phillip F
2007-08-01
This paper is an experimental investigation of an inverse technique for deducing the amplitudes of the modes radiated from a turbofan engine, including schemes for stabilizing the solution. The detection of broadband modes generated by a laboratory-scale fan inlet is performed using a near-field array of microphones arranged in a geodesic geometry. This array geometry is shown to allow a robust and accurate modal inversion. The sound power radiated from the fan inlet and the coherence function between different modal amplitudes are also presented. The knowledge of such modal content is useful in helping to characterize the source mechanisms of fan broadband noise generation, for determining the most appropriate mode distribution model for duct liner predictions, and for making sound power measurements of the radiated sound field.
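The abstract describes an inverse problem (recovering modal amplitudes from near-field microphone pressures) together with schemes for stabilizing the solution. A minimal sketch of one such scheme, Tikhonov-regularized least squares, is shown below; the matrix G, the regularization parameter, and the toy dimensions are placeholders, not the paper's geodesic-array geometry or its actual stabilization method.

```python
# Hedged sketch: regularized inversion of modal amplitudes from array pressures.
import numpy as np

def invert_modes(G, p, lam=1e-2):
    """Solve p ≈ G a for modal amplitudes a with Tikhonov stabilization."""
    GH = G.conj().T
    A = GH @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, GH @ p)

# Toy example: 64 microphones, 10 modes, 5% measurement noise.
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 10)) + 1j * rng.standard_normal((64, 10))
a_true = rng.standard_normal(10) + 1j * rng.standard_normal(10)
p = G @ a_true + 0.05 * rng.standard_normal(64)
print(np.abs(invert_modes(G, p) - a_true).max())
```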
Cryogenic Design of the Setup for MARE-1 in Milan
NASA Astrophysics Data System (ADS)
Schaeffer, D.; Arnaboldi, C.; Ceruti, G.; Ferri, E.; Kilbourne, C.; Kraft-Bermuth, S.; Margesin, B.; McCammon, D.; Monfardini, A.; Nucciotti, A.; Pessina, G.; Previtali, E.; Sisti, M.
2008-05-01
A large worldwide collaboration is growing around the project of Micro-calorimeter Arrays for a Rhenium Experiment (MARE) for a direct calorimetric measurement of the neutrino mass. To validate the use of cryogenic detectors by checking for the presence of unexpected systematic errors, two initial experiments are planned using the available techniques, composed of arrays of 300 detectors, to measure 10^10 events in a reasonable time of 3 years (step MARE-1) and reach a sensitivity on the neutrino mass of ~2 eV/c². Our experiment in Milan is based on compensated doped silicon implanted thermistor arrays made at NASA/GSFC and on AgReO4 crystals. We present here the design of the cryogenic system that integrates all the requirements for such an experiment (electronics for high impedances, low parasitic capacitances, low microphonic noise).
2000-11-27
After arriving at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone, Commander Brent Jett praises the efforts of the KSC workers to get ready for the launch. Behind Jett are Pilot Michael Bloomfield and Mission Specialists Joseph Tanner, Carlos Noriega and Marc Garneau, who is with the Canadian Space Agency. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST
2000-11-27
After their arrival at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone, Commander Brent Jett praises the efforts of the KSC workers to get ready for the launch. Behind Jett are Pilot Michael Bloomfield and Mission Specialists Joseph Tanner, Carlos Noriega and Marc Garneau, who is with the Canadian Space Agency. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST
STS-97 Mission Specialist Noriega talks to media after arrival for launch
NASA Technical Reports Server (NTRS)
2000-01-01
After their arrival at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Mission Specialist Carlos Noriega. Behind him stand Commander Brent Jett, Pilot Michael Bloomfield and Mission Specialists Joseph Tanner and Marc Garneau, who is with the Canadian Space Agency. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST.
STS-97 Mission Specialist Tanner talks to media after arrival for launch
NASA Technical Reports Server (NTRS)
2000-01-01
After their arrival at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Mission Specialist Joseph Tanner. Behind him stand Commander Brent Jett, Pilot Michael Bloomfield and Mission Specialists Marc Garneau, who is with the Canadian Space Agency, and Carlos Noriega. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST.
STS-97 Mission Specialist Garneau talks to media after arrival for launch
NASA Technical Reports Server (NTRS)
2000-01-01
After their arrival at the Shuttle Landing Facility, the STS-97 crew gather to address the media. At the microphone is Mission Specialist Marc Garneau, who is with the Canadian Space Agency. Behind him stand Commander Brent Jett, Pilot Michael Bloomfield and Mission Specialists Joseph Tanner and Carlos Noriega. Mission STS-97 is the sixth construction flight to the International Space Station. Its payload includes the P6 Integrated Truss Structure and a photovoltaic (PV) module, with giant solar arrays that will provide power to the Station. The mission includes two spacewalks to complete the solar array connections. STS-97 is scheduled to launch Nov. 30 at about 10:06 p.m. EST.
Infrasound from ground to space
NASA Astrophysics Data System (ADS)
Bowman, Daniel Charles
Acoustic detector networks are usually located on the Earth's surface. However, these networks suffer from shortcomings such as poor detection range and pervasive wind noise. An alternative is to deploy acoustic sensors on high altitude balloons. In theory, such platforms can resolve signals arriving from great distances, acquire others that never reach the surface at all, and avoid wind noise entirely. This dissertation focuses on scientific advances, instrumentation, and analytical techniques resulting from the development of such sensor arrays. Results from infrasound microphones deployed on balloon flights in the middle stratosphere are described, and acoustic sources such as the ocean microbarom and building ventilation systems are discussed. Electromagnetic noise originating from the balloon, flight system, and other payloads is shown to be a pervasive issue. An experiment investigating acoustic sensor calibration at low pressures is presented, and implications for high altitude recording are considered. Outstanding challenges and opportunities in sound measurement using sensors embedded in the free atmosphere are outlined. Acoustic signals from field scale explosions designed to emulate volcanic eruptions are described, and their generation mechanisms modeled. Wave forms recorded on sensors suspended from tethered helium balloons are compared with those detected on ground stations during the experiment. Finally, the Hilbert-Huang transform, a high time resolution spectral analysis method for nonstationary and nonlinear time series, is presented.
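The dissertation names the Hilbert-Huang transform as its high-resolution spectral tool. The sketch below shows only the Hilbert stage, estimating instantaneous amplitude and frequency of a single intrinsic mode function; the preceding empirical mode decomposition is omitted, and the toy chirp and sample rate are assumptions for illustration.

```python
# Hilbert stage of the Hilbert-Huang transform on one assumed IMF.
import numpy as np
from scipy.signal import hilbert

fs = 200.0                                      # assumed sample rate, Hz
t = np.arange(0, 10, 1 / fs)
imf = np.sin(2 * np.pi * (1.0 + 0.2 * t) * t)   # toy nonstationary signal

analytic = hilbert(imf)                         # analytic signal
amplitude = np.abs(analytic)                    # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
print(inst_freq[:5])
```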
a Study of Ultrasonic Wave Propagation Through Parallel Arrays of Immersed Tubes
NASA Astrophysics Data System (ADS)
Cocker, R. P.; Challis, R. E.
1996-06-01
Tubular array structures are a very common component in industrial heat-exchanging plant, and the non-destructive testing of these arrays is essential. Acoustic methods using microphones or ultrasound are attractive but require a thorough understanding of the acoustic properties of tube arrays. This paper details the development and testing of a small-scale physical model of a tube array to verify the predictions of a theoretical model for acoustic propagation through tube arrays developed by Heckl, Mulholland, and Huang [1-5], as a basis for the consideration of small-scale physical models in the development of non-destructive testing procedures for tube arrays. Their model predicts transmission spectra for plane waves incident on an array of tubes arranged in straight rows. Relative transmission is frequency dependent, with bands of high and low attenuation caused by resonances within individual tubes and between tubes in the array. As the number of rows in the array increases, the relative transmission spectrum becomes more complex, with increasingly well-defined bands of high and low attenuation. Diffraction of acoustic waves with wavelengths less than the tube spacing is predicted and appears as step reductions in the transmission spectrum at frequencies corresponding to integer multiples of the tube spacing. Experiments with the physical model confirm the principal features of the theoretical treatment.
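Reading the abstract's statement about diffraction steps as onsets occurring where the acoustic wavelength equals the tube spacing divided by an integer (i.e., f_n = n c / d), the onset frequencies can be tabulated as below. The sound speed and spacing are example values, not those of the physical model in the paper.

```python
# Hedged sketch of diffraction-onset frequencies f_n = n * c / d.
c = 1480.0   # m/s, sound speed in water (immersed tubes), assumed
d = 0.03     # m, assumed tube spacing
onsets = [n * c / d for n in range(1, 5)]
print([f"{f / 1e3:.1f} kHz" for f in onsets])
```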
NASA Technical Reports Server (NTRS)
Horne, William C.; Burnside, Nathan J.
2013-01-01
The AMELIA Cruise-Efficient Short Take-off and Landing (CESTOL) configuration concept was developed to meet future requirements of reduced field length, noise, and fuel burn by researchers at Cal Poly, San Luis Obispo and the Georgia Tech Research Institute under sponsorship by the NASA Fundamental Aeronautics Program (FAP), Subsonic Fixed Wing Project. The novel configuration includes a leading- and trailing-edge circulation control wing (CCW) and over-wing podded turbine propulsion simulation (TPS). Extensive aerodynamic measurements of forces, surface pressures, and wing surface skin friction were recently made over a wide range of test conditions in the Arnold Engineering Development Center (AEDC) National Full-Scale Aerodynamics Complex (NFAC) 40- by 80-Ft Wind Tunnel. Acoustic measurements of the model were also acquired for each configuration with 7 fixed microphones on a line under the left wing, and with a 48-element, 40-inch diameter phased microphone array under the right wing. This presentation will discuss acoustic characteristics of the CCW system for a variety of tunnel speeds (0 to 120 kts), model configurations (leading-edge (LE) and/or trailing-edge (TE) slot blowing), and orientations (incidence and yaw), based on acoustic measurements acquired concurrently with the aerodynamic measurements. The flow coefficient, Cmu = m*V_SLOT/(q*S_W), varied from 0 to 0.88 at 40 kts, and from 0 to 0.15 at 120 kts. Here m is the slot mass flow rate, V_SLOT is the slot exit velocity, q is the dynamic pressure, and S_W is the wing surface area. Directivities at selected 1/3 octave bands will be compared with comparable measurements of a 2-D wing at GTRI, as well as with microphone array near-field measurements of the right wing at maximum flow rate. The presentation will include discussion of acoustic sensor calibrations as well as characterization of the wind tunnel background noise environment.
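For reference, the blowing coefficient quoted above can be evaluated with a one-line helper; the numbers in the example call are placeholders, not AMELIA test-point values.

```python
# Momentum (blowing) coefficient for a circulation-control slot,
# Cmu = (m_dot * V_slot) / (q * S_w).
def c_mu(m_dot, v_slot, q, s_w):
    return (m_dot * v_slot) / (q * s_w)

print(c_mu(m_dot=1.2, v_slot=150.0, q=950.0, s_w=5.6))  # illustrative values
```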
NASA Astrophysics Data System (ADS)
Gallin, L.; Coulouvrat, F.; Farges, T.; Marchiano, R.; Defer, E.; Rison, W.; Schulz, W.; Nuret, M.
2013-12-01
The goal is to study the transformation of thunder (amplitude, spectrum) during its travel from the lightning channel towards a detector (microphone, microbarometer), considering propagation distances of less than 50 km and complex local meteorological properties. Within the European HyMeX project, the SOP1 campaign took place from September to November 2012 in the south of France. An acoustic station (center: 4.39° E, 44.08° N), composed of a microphone array placed inside a microbarometer array, was installed by CEA near the city of Uzès. It was located at the center of a Lightning Mapping Array (LMA) network accompanied by two slow antennas. This network was deployed in France for the first time by New Mexico Tech and the LERMA laboratory. Detections from the European lightning location system EUCLID complete this dataset. During the SOP1 period several storms passed over the station. Post-processing of the records points out days with interesting thunderstorms. In particular, during the evening of 26 October 2012 (around 8 pm), a thunderstorm passed directly over the acoustic station. Although relatively few lightning strokes were detected by EUCLID, the corresponding flashes are well characterized by the LMA network. The slow antennas provide good electric field measurements, and the acoustic records are of excellent quality. For selected flashes we present a comparative study of the different measurements (LMA, slow antenna, EUCLID, microphones, microbarometers), focusing on the amplitude and spectrum of the thunder waveforms and on propagation effects due to the meteorological conditions. To quantify the impact of these meteorological conditions on the propagating thunder (from the lightning sources to the acoustic array), a code named Flhoward is used [Dagrau et al., J. Acoust. Soc. Am., 130, 20-32, 2011; Coulouvrat, Wave Motion, 49, 50-63, 2012]. It is designed to simulate the nonlinear propagation of acoustic shock waves through a realistic atmosphere model (including temperature gradients, rigid ground, and wind flows). The meteorological conditions are extracted from data calculated by the Météo-France weather forecast model AROME-WMED for the chosen days. Some cases where numerical simulation helps to understand the observations are presented.
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Humphreys, William M.
2006-01-01
Current processing of acoustic array data is burdened with considerable uncertainty. This study reports an original methodology that serves to demystify array results, reduce misinterpretation, and accurately quantify position and strength of acoustic sources. Traditional array results represent noise sources that are convolved with array beamform response functions, which depend on array geometry, size (with respect to source position and distributions), and frequency. The Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) method removes beamforming characteristics from output presentations. A unique linear system of equations accounts for reciprocal influence at different locations over the array survey region. It makes no assumption beyond the traditional processing assumption of statistically independent noise sources. The full-rank equations are solved with a new robust iterative method. DAMAS is quantitatively validated using archival data from a variety of prior high-lift airframe component noise studies, including flap edge/cove, trailing edge, leading edge, slat, and calibration sources. Presentations are explicit and straightforward, as the noise radiated from a region of interest is determined by simply summing the mean-squared values over that region. DAMAS can fully replace existing array processing and presentation methodology in most applications. It appears to dramatically increase the value of arrays to the field of experimental acoustics.
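A minimal sketch in the spirit of the deconvolution described, solving the linear system for nonnegative source strengths with Gauss-Seidel-type sweeps, is given below. It is not the published DAMAS implementation; the point-spread matrix and source distribution are synthetic toys.

```python
# Projected Gauss-Seidel sweeps for A x = b with x >= 0 (toy deconvolution).
import numpy as np

def deconvolve(A, b, n_sweeps=200):
    x = np.zeros_like(b)
    for _ in range(n_sweeps):
        for i in range(len(b)):
            r = b[i] - A[i] @ x + A[i, i] * x[i]   # residual excluding x[i]
            x[i] = max(r / A[i, i], 0.0)           # enforce nonnegative power
    return x

rng = np.random.default_rng(1)
n = 30
A = np.eye(n) + 0.02 * rng.random((n, n))          # toy, diagonally dominant PSF
x_true = np.zeros(n); x_true[[5, 17]] = [1.0, 0.5]
b = A @ x_true                                     # synthetic "beamform map"
print(np.round(deconvolve(A, b)[[5, 17]], 3))
```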
Visualizing Sound Directivity via Smartphone Sensors
NASA Astrophysics Data System (ADS)
Hawley, Scott H.; McClain, Robert E.
2018-02-01
When Yang-Hann Kim received the Rossing Prize in Acoustics Education at the 2015 meeting of the Acoustical Society of America, he stressed the importance of offering visual depictions of sound fields when teaching acoustics. Often visualization methods require specialized equipment such as microphone arrays or scanning apparatus. We present a simple method for visualizing angular dependence in sound fields, made possible via the confluence of sensors available via a new smartphone app that the authors have developed.
Volumetric Acoustic Vector Intensity Probe
NASA Technical Reports Server (NTRS)
Klos, Jacob
2006-01-01
A new measurement tool capable of imaging the acoustic intensity vector throughout a large volume is discussed. This tool consists of an array of fifty microphones that form a spherical surface of radius 0.2 m. A simultaneous measurement of the pressure field across all the microphones provides time-domain near-field holograms. Near-field acoustical holography is used to convert the measured pressure into a volumetric vector intensity field as a function of frequency on a grid of points ranging from the center of the spherical surface to a radius of 0.4 m. The volumetric intensity is displayed on three-dimensional plots that are used to locate noise sources outside the volume. There is no restriction on the type of noise source that can be studied. The sphere is mobile and can be moved from location to location to hunt for unidentified noise sources. An experiment inside a Boeing 757 aircraft in flight successfully tested the ability of the array to locate low-noise-excited sources on the fuselage. Reference transducers located on suspected noise source locations can also be used to increase the ability of this device to separate and identify multiple noise sources at a given frequency by using the theory of partial field decomposition. The frequency range of operation is 0 to 1400 Hz. This device is ideal for the study of noise sources in commercial and military transportation vehicles in air, on land and underwater.
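The quantity the probe maps is the active (time-averaged) acoustic intensity vector. The snippet below evaluates I = 0.5 Re{p v*} at a single grid point from assumed complex pressure and particle-velocity values; the holography step that would produce p and v from the 50-microphone sphere is not reproduced.

```python
# Active acoustic intensity from complex pressure and particle velocity.
import numpy as np

def active_intensity(p, v):
    """p: complex pressure (Pa); v: complex particle-velocity vector (m/s)."""
    return 0.5 * np.real(p * np.conj(v))

# Toy values at one frequency and one grid point.
p = 0.2 * np.exp(1j * 0.3)
v = np.array([1e-4 * np.exp(1j * 0.1), 2e-5, 0.0], dtype=complex)
print(active_intensity(p, v))   # W/m^2 components
```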
Echolocating bats use future-target information for optimal foraging.
Fujioka, Emyo; Aihara, Ikkyu; Sumiya, Miwa; Aihara, Kazuyuki; Hiryu, Shizuko
2016-04-26
When seeing or listening to an object, we aim our attention toward it. While capturing prey, many animal species focus their visual or acoustic attention toward the prey. However, for multiple prey items, the direction and timing of attention for effective foraging remain unknown. In this study, we adopted both experimental and mathematical methodology with microphone-array measurements and mathematical modeling analysis to quantify the attention of echolocating bats that were repeatedly capturing airborne insects in the field. Here we show that bats select rational flight paths to consecutively capture multiple prey items. Microphone-array measurements showed that bats direct their sonar attention not only to the immediate prey but also to the next prey. In addition, we found that a bat's attention in terms of its flight also aims toward the next prey even when approaching the immediate prey. Numerical simulations revealed a possibility that bats shift their flight attention to control suitable flight paths for consecutive capture. When a bat only aims its flight attention toward its immediate prey, it rarely succeeds in capturing the next prey. These findings indicate that bats gain increased benefit by distributing their attention among multiple targets and planning the future flight path based on additional information of the next prey. These experimental and mathematical studies allowed us to observe the process of decision making by bats during their natural flight dynamics.
NASA Astrophysics Data System (ADS)
Fajkus, Marcel; Nedoma, Jan; Martinek, Radek; Vasinek, Vladimir
2017-10-01
In this article, we describe an innovative non-invasive method of Fetal Phonocardiography (fPCG) using fiber-optic sensors and an adaptive algorithm for the measurement of fetal heart rate (fHR). Conventional PCG is based on noninvasive scanning of acoustic signals by means of a microphone placed on the thorax. For fPCG, the microphone is placed on the maternal abdomen. Our solution is based on a patent-pending non-invasive scanning of acoustic signals by means of a fiber-optic interferometer. Fiber-optic sensors are resistant to technical artifacts such as electromagnetic interference (EMI), so they can be used in situations where it is impossible to use conventional EFM methods, e.g. during Magnetic Resonance Imaging (MRI) examination or in case of delivery in water. The adaptive evaluation system is based on the Recursive Least Squares (RLS) algorithm. Based on real measurements performed on five volunteers with their written consent, we created a simplified dynamic signal model of the distribution of heartbeat sounds (HS) through the human body. This model allows us to verify the RLS algorithm of the proposed adaptive system. The functionality of the proposed non-invasive adaptive system was verified using objective parameters such as Sensitivity (S+) and Signal to Noise Ratio (SNR).
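The abstract names a recursive least squares (RLS) adaptive filter as the core of the evaluation system. A compact, generic RLS sketch is shown below; the filter order, forgetting factor, and toy signals are our assumptions, not the parameters or signal model used by the authors.

```python
# Generic RLS adaptive filter: adapt weights w so that w·x(n) tracks d(n).
import numpy as np

def rls(x, d, order=8, lam=0.99, delta=100.0):
    w = np.zeros(order)
    P = delta * np.eye(order)          # inverse correlation matrix estimate
    y = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]       # most recent samples first
        k = P @ u / (lam + u @ P @ u)  # gain vector
        y[n] = w @ u                   # a priori output
        e = d[n] - y[n]
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return y, w

# Toy use: fit the filter so that the filtered noisy observation tracks a
# slowly varying target signal.
rng = np.random.default_rng(2)
t = np.arange(4000) / 1000.0
d = np.sin(2 * np.pi * 1.2 * t)                # target
x = d + 0.5 * rng.standard_normal(t.size)      # corrupted observation
y, _ = rls(x, d)
```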
Burnett, Greg C [Livermore, CA; Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA
2006-08-08
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
Burnett, Greg C.; Holzrichter, John F.; Ng, Lawrence C.
2004-03-23
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
Burnett, Greg C.; Holzrichter, John F.; Ng, Lawrence C.
2006-02-14
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
Burnett, Greg C.; Holzrichter, John F.; Ng, Lawrence C.
2006-04-25
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
Development of unconstrained heartbeat and respiration measurement system with pneumatic flow.
Kurihara, Yosuke; Watanabe, Kajiro
2012-12-01
The management of health through daily monitoring of heartbeat and respiration signals is of major importance for early diagnosis to prevent diseases of the respiratory and circulatory system. However, such daily health monitoring is possible only if the monitoring system is physically and psychologically noninvasive. In this paper, an unconstrained method of measuring heartbeat and respiration signals, by using a thermistor to measure the air flows from the air mattress to an air tube accompanying the subject's heartbeat and respiration, is proposed. The SN ratio with interference by opening and closing of a door as environmental noise was compared with that obtained by the conventional condenser microphone method. As a result, the SN ratios with the condenser microphone method were 26.6 ± 4.2 dB for heartbeat and 27.8 ± 3.0 dB for respiration, whereas with the proposed method they were 34.9 ± 3.1 dB and 42.1 ± 2.5 dB, respectively.
Polymers in solar energy utilization
NASA Technical Reports Server (NTRS)
Liang, R. H.; Coulter, D. R.; Dao, C.; Gupta, A.
1983-01-01
A laser photoacoustic technique (LPAT) has been verified for performing accelerated life testing of outdoor photooxidation of polymeric materials used in solar energy applications. Samples of the material under test are placed in a chamber with a sensitive microphone, then exposed to chopped laser radiation. The sample absorbs the light and converts it to heat by a nonradiative deexcitation process, thereby producing pressure fluctuations within the cell. The acoustic signal detected by the microphone is directly proportional to the amount of light absorbed by the specimen. Tests were performed with samples of ethylene/methylacrylate copolymer (EMA) reprecipitated from hot cyclohexane, compressed, and molded into thin (25-50 microns) films. The films were exposed outdoors and sampled by LPAT weekly. The linearity of the light absorbed with respect to the acoustic signal was verified. Correlations were established between the photoacoustic behavior of the materials aged outdoors and the same kinds of samples cooled and heated in a controlled environment reactor. The reactor tests were validated for predicting outdoor exposures up to 55 days.
Aircraft Wake Vortex Measurements at Denver International Airport
NASA Technical Reports Server (NTRS)
Dougherty, Robert P.; Wang, Frank Y.; Booth, Earl R.; Watts, Michael E.; Fenichel, Neil; D'Errico, Robert E.
2004-01-01
Airport capacity is constrained, in part, by spacing requirements associated with the wake vortex hazard. NASA's Wake Vortex Avoidance Project has a goal to establish the feasibility of reducing this spacing while maintaining safety. Passive acoustic phased array sensors, if shown to have operational potential, may aid in this effort by detecting and tracking the vortices. During August/September 2003, NASA and the USDOT sponsored a wake acoustics test at the Denver International Airport. The central instrument of the test was a large microphone phased array. This paper describes the test in general terms and gives an overview of the array hardware. It outlines one of the analysis techniques that is being applied to the data and gives sample results. The technique is able to clearly resolve the wake vortices of landing aircraft and measure their separation, height, and sinking rate. These observations permit an indirect estimate of the vortex circulation. The array also provides visualization of the vortex evolution, including the Crow instability.
Cochlear Implant Microphone Location Affects Speech Recognition in Diffuse Noise
Kolberg, Elizabeth R.; Sheffield, Sterling W.; Davis, Timothy J.; Sunderhaus, Linsey W.; Gifford, René H.
2015-01-01
Background Despite improvements in cochlear implants (CIs), CI recipients continue to experience significant communicative difficulty in background noise. Many potential solutions have been proposed to help increase signal-to-noise ratio in noisy environments, including signal processing and external accessories. To date, however, research on the effect of microphone location on speech recognition in noise has focused primarily on hearing aid users. Purpose The purpose of this study was to (1) measure physical output for the T-Mic as compared with the integrated behind-the-ear (BTE) processor mic for various source azimuths, and (2) investigate the effect of CI processor mic location for speech recognition in semi-diffuse noise with speech originating from various source azimuths as encountered in everyday communicative environments. Research Design A repeated-measures, within-participant design was used to compare performance across listening conditions. Study Sample A total of 11 adults with Advanced Bionics CIs were recruited for this study. Data Collection and Analysis Physical acoustic output was measured on a Knowles Electronics Manikin for Acoustic Research (KEMAR) for the T-Mic and BTE mic, with broadband noise presented at 0 and 90° (directed toward the implant processor). In addition to physical acoustic measurements, we also assessed recognition of sentences constructed by researchers at Texas Instruments, the Massachusetts Institute of Technology, and the Stanford Research Institute (TIMIT sentences) at 60 dBA for speech source azimuths of 0, 90, and 270°. Sentences were presented in a semi-diffuse restaurant noise originating from the R-SPACE 8-loudspeaker array. Signal-to-noise ratio was determined individually to achieve approximately 50% correct in the unilateral implanted listening condition with speech at 0°. Performance was compared across the T-Mic, 50/50, and the integrated BTE processor mic. Results The integrated BTE mic provided approximately 5 dB attenuation from 1500–4500 Hz for signals presented at 0° as compared with 90° (directed toward the processor). The T-Mic output was essentially equivalent for sources originating from 0 and 90°. Mic location also significantly affected sentence recognition as a function of source azimuth, with the T-Mic yielding the highest performance for speech originating from 0°. Conclusions These results have clinical implications for (1) future implant processor design with respect to mic location, (2) mic settings for implant recipients, and (3) execution of advanced speech testing in the clinic. PMID:25597460
Wavenumber-domain separation of rail contribution to pass-by noise
NASA Astrophysics Data System (ADS)
Zea, Elias; Manzari, Luca; Squicciarini, Giacomo; Feng, Leping; Thompson, David; Arteaga, Ines Lopez
2017-11-01
In order to counteract the problem of railway noise and its environmental impact, passing trains in Europe must be tested in accordance with a noise legislation that demands the quantification of the noise generated by the vehicle alone. However, for frequencies between about 500 Hz and 1600 Hz, it has been found that a significant part of the measured noise is generated by the rail, which behaves like a distributed source and radiates plane waves as a result of the contact with the train's wheels. Thus the need arises to separate the rail contribution to the pass-by noise in that particular frequency range. To this end, the present paper introduces a wavenumber-domain filtering technique, referred to as wave signature extraction, which requires a line microphone array parallel to the rail and two accelerometers on the rail, one in the vertical and one in the lateral direction. The novel contributions of this research are: (i) the introduction and application of wavenumber (or plane-wave) filters to pass-by data measured with a microphone array located in the near field of the rail, and (ii) the design of such filters without prior information on the structural properties of the rail. The latter is achieved by recording the array pressure, as well as the rail vibrations with the accelerometers, before and after the train pass-by. The performance of the proposed method is investigated with a set of pass-by measurements performed in Germany. The results seem promising when compared to reference data from TWINS; the largest discrepancies occur above 1600 Hz and are attributed to plane waves radiated by the rail that have so far not been accounted for in the design of the filters.
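A conceptual sketch of wavenumber-domain filtering of line-array data is given below: transform p(x, t) to the frequency-wavenumber domain with a 2-D FFT, retain components whose trace wavenumber is smaller than the acoustic wavenumber (waves travelling along the rail faster than sound in air), and invert. This is a simplification under our own assumptions, not the wave-signature-extraction filters designed in the paper.

```python
# Simplified wavenumber-domain (plane-wave) filter for a line microphone array.
import numpy as np

def wavenumber_filter(p, dx, fs, c_air=343.0):
    """p: (n_mics, n_samples) pressures from a line array parallel to the rail."""
    n_x, n_t = p.shape
    P = np.fft.fft2(p)                               # (k, f) spectrum
    k = 2 * np.pi * np.fft.fftfreq(n_x, d=dx)        # rad/m along the array
    f = np.fft.fftfreq(n_t, d=1 / fs)                # Hz
    K, F = np.meshgrid(k, f, indexing="ij")
    # Keep supersonic trace wavenumbers: |k| < |2*pi*f| / c_air
    mask = np.abs(K) < np.abs(2 * np.pi * F) / c_air
    return np.real(np.fft.ifft2(P * mask))

# Example call with synthetic data: 16 microphones, 0.1 m spacing, 8 kHz.
p = np.random.randn(16, 2048)
rail_part = wavenumber_filter(p, dx=0.1, fs=8000.0)
```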
Investigation of laser Doppler anemometry in developing a velocity-based measurement technique
NASA Astrophysics Data System (ADS)
Jung, Ki Won
2009-12-01
Acoustic properties of porous materials, such as the characteristic impedance and the complex propagation constant, have traditionally been characterized using pressure-based measurement techniques with microphones. Although the microphone techniques have evolved since their introduction, the most general form of the microphone technique employs two microphones in characterizing the acoustic field for one continuous medium. The shortcomings of determining the acoustic field based on only two microphones can be overcome by using numerous microphones. However, the use of a number of microphones requires a careful and intricate calibration procedure. This dissertation uses laser Doppler anemometry (LDA) to establish a new measurement technique which can resolve the issues that microphone techniques have: first, it is based on a single sensor, so calibration is unnecessary when only the overall ratio of the acoustic field is required for the characterization of a system. This includes the measurements of the characteristic impedance and the complex propagation constant of a system. Second, it can handle multiple positional measurements without calibrating the signal at each position. Third, it can measure three-dimensional components of velocity even in a system with a complex geometry. Fourth, it has a flexible adaptability that is not restricted to a particular type of apparatus, provided the apparatus is transparent. LDA is known to possess several disadvantages, such as the requirement of a transparent apparatus, high cost, and the necessity of seeding particles. The technique based on LDA combined with a curve-fitting algorithm is validated through measurements on three systems. First, the complex propagation constant of the air is measured in a rigidly terminated cylindrical pipe which has very low dissipation. Second, the radiation impedance of an open-ended pipe is measured. These two parameters can be characterized by the ratio of the acoustic field measured at multiple locations. Third, the power dissipated in a variable RLC load is measured. The three experiments validate the proposed LDA technique. The utility of the LDA method is then extended to the measurement of the complex propagation constant of the air inside a 100 ppi reticulated vitreous carbon (RVC) sample. Compared to measurements in the available studies, the measurement with the 100 ppi RVC sample supports the LDA technique in that it can achieve a low uncertainty in the determined quantity. This dissertation concludes by using the LDA technique for modal decomposition of the plane wave mode and the (1,1) mode that are driven simultaneously. This modal decomposition suggests that the LDA technique surpasses microphone-based techniques, because they are unable to determine the acoustic field based on an acoustic model with unconfined propagation constants for each modal component.
Noise propagation from a four-engine, propeller-driven airplane
NASA Technical Reports Server (NTRS)
Willshire, William L., Jr.
1987-01-01
A flight experiment was conducted to investigate the propagation of periodic low-frequency noise from a propeller-driven airplane. The test airplane was a large four-engine, propeller-driven airplane flown at altitudes from 15 to 500 m over the end of an 1800-m-long, 22-element microphone array. The acoustic data were reduced by a one-third octave-band analysis. The primary propagation quantities computed were lateral attenuation and ground effects, both of which become significant at shallow elevation angles. Scatter in the measured results largely obscured the physics of the low-frequency noise propagation. Variability of the noise source, up to 9.5 dB over a 2-sec interval, was the major contributor to the data scatter. The microphones mounted at ground level produced more consistent results with less scatter than those mounted 1.2 m above ground. The ground noise levels were found to be greater on the port side than on the starboard side.
Engine Installation Effects of Four Civil Transport Airplanes: Wallops Flight Facility Study
NASA Technical Reports Server (NTRS)
Fleming, Gregg G.; Senzig, David A.; McCurdy, David A.; Roof, Christopher J.; Rapoza, Amanda S.
2003-01-01
The National Aeronautics and Space Administration (NASA), Langley Research Center (LaRC), the Environmental Measurement and Modeling Division of the United States Department of Transportation's John A. Volpe National Transportation Systems Center (Volpe), and several other organizations (see Appendix A for a complete list of participating organizations and individuals) conducted a noise measurement study at NASA's Wallops Flight Facility (Wallops) near Chincoteague, Virginia during September 2000. This test was intended to determine engine installation effects on four civil transport airplanes: a Boeing 767-400, a McDonnell-Douglas DC9, a Dassault Falcon 2000, and a Beechcraft King Air. Wallops was chosen for this study because of the relatively low ambient noise of the site and the degree of control over airplane operating procedures enabled by operating over a runway closed to other uses during the test period. Measurements were conducted using a twenty-microphone U-shaped array oriented perpendicular to the flight path; microphones were mounted such that ground effects were minimized and low elevation angles were observed.
Evaluating the Capability of High-Altitude Infrasound Platforms to Cover Gaps in Existing Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Daniel
A variety of Earth surface and atmospheric sources generate low frequency sound waves that can travel great distances. Despite a rich history of ground-based sensor studies, very few experiments have investigated the prospects of free floating microphone arrays at high altitudes. However, recent initiatives have shown that such networks have very low background noise and may sample an acoustic wave field that is fundamentally different than that at the Earth's surface. The experiments have been limited to at most two stations at altitude, limiting their utility in acoustic event detection and localization. We describe the deployment of five drifting microphone stations at altitudes between 21 and 24 km above sea level. The stations detected one of two regional ground-based explosions as well as the ocean microbarom while traveling almost 500 km across the American Southwest. The explosion signal consisted of multiple arrivals; signal amplitudes did not correlate with sensor elevation or source range. A sparse network method that employed curved wave front corrections was able to determine the backazimuth from the free flying network to the acoustic source. Episodic broad band signals similar to those seen on previous flights in the same region were noted as well, but their source remains unclear. Background noise levels were commensurate with those on infrasound stations in the International Monitoring System (IMS) below 2 seconds, but sensor self noise appears to dominate at higher frequencies.
Bowman, Daniel C.; Albert, Sarah A.
2018-02-22
We present that a variety of Earth surface and atmospheric sources generate low frequency sound waves that can travel great distances. Despite a rich history of ground-based sensor studies, very few experiments have investigated the prospects of free floating microphone arrays at high altitudes. However, recent initiatives have shown that such networks have very low background noise and may sample an acoustic wave field that is fundamentally different than that at Earth’s surface. The experiments have been limited to at most two stations at altitude, making acoustic event detection and localization difficult. We describe the deployment of four drifting microphone stations at altitudes between 21 and 24 km above sea level. The stations detected one of two regional ground-based chemical explosions as well as the ocean microbarom while traveling almost 500 km across the American Southwest. The explosion signal consisted of multiple arrivals; signal amplitudes did not correlate with sensor elevation or source range. The waveforms and propagation patterns suggest interactions with gravity waves in the 35-45 km altitude range. A sparse network method that employed curved wave front corrections was able to determine the backazimuth from the free flying network to the acoustic source. Episodic signals similar to those seen on previous flights in the same region were noted, but their source remains unclear. Lastly, background noise levels were commensurate with those on infrasound stations in the International Monitoring System below 2 seconds.
NASA Astrophysics Data System (ADS)
Fernandes, Rigel P.; Ramos, António L. L.; Apolinário, José A.
2017-05-01
Shooter localization systems have been the subject of growing attention lately owing to their wide span of possible applications, e.g., civil protection, law enforcement, and support to soldiers in missions where snipers might pose a serious threat. These devices are based on the processing of electromagnetic or acoustic signatures associated with the firing of a gun. This work is concerned with the latter, where the shooter's position can be obtained based on the estimation of the direction-of-arrival (DoA) of the acoustic components of a gunshot signal (muzzle blast and shock wave). A major limitation of current commercially available acoustic sniper localization systems is the impossibility of finding the shooter's position when one of these acoustic signatures is not detected. This is very likely to occur in real-life situations, especially when the microphones are not in the field of view of the shock wave or when the presence of obstacles like buildings prevents a direct path to the sensors. This work addresses the problem of DoA estimation of the muzzle blast using a planar array of sensors deployed on a drone. Results supported by actual gunshot data from a realistic setup are very promising and pave the way for the development of enhanced sniper localization systems featuring two main advantages over stationary ones: (1) a wider surveillance area; and (2) increased likelihood of a direct-path detection of at least one of the gunshot signals, thereby adding robustness and reliability to the system.
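One common way to turn inter-microphone time differences on a small planar array into a muzzle-blast direction of arrival is a far-field least-squares fit, sketched below. This is a generic approach and an assumption on our part, not necessarily the estimator used by the authors; positions and delays are made up.

```python
# Far-field DoA from TDOAs on a small planar array (least-squares fit).
import numpy as np

def doa_from_tdoa(mic_pos, tdoa, c=343.0):
    """mic_pos: (M, 3) coordinates in metres, mic 0 is the reference.
    tdoa: (M-1,) delays of mics 1..M-1 relative to mic 0, in seconds.
    Returns a unit vector pointing from the array toward the source.
    Note: a planar array only resolves the in-plane components, so there is
    an up/down ambiguity normal to the array plane."""
    D = mic_pos[1:] - mic_pos[0]                    # baselines
    s, *_ = np.linalg.lstsq(D, c * np.asarray(tdoa), rcond=None)
    s /= np.linalg.norm(s)                          # propagation direction
    return -s                                       # back toward the source

# Toy 0.5 m square planar array and synthetic delays for a known direction.
mics = np.array([[0, 0, 0], [0.5, 0, 0], [0.5, 0.5, 0], [0, 0.5, 0]], float)
true_dir = np.array([0.6, 0.8, 0.0])                # unit vector toward source
delays = (mics[1:] - mics[0]) @ (-true_dir) / 343.0
print(doa_from_tdoa(mics, delays))                  # ≈ [0.6, 0.8, 0.0]
```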
Fault detection in rotating machines with beamforming: Spatial visualization of diagnosis features
NASA Astrophysics Data System (ADS)
Cardenas Cabada, E.; Leclere, Q.; Antoni, J.; Hamzaoui, N.
2017-12-01
Rotating machine diagnosis is conventionally based on vibration analysis. Sensors are usually placed on the machine to gather information about its components. The recorded signals are then processed through a fault detection algorithm allowing the identification of the failing part. This paper proposes an acoustic-based diagnosis method. A microphone array is used to record the acoustic field radiated by the machine. The main advantage over vibration-based diagnosis is that contact between the sensors and the machine is no longer required. Moreover, the application of acoustic imaging makes possible the identification of the sources of acoustic radiation on the machine surface. The displayed information is then spatially continuous, whereas accelerometers only provide it at discrete points. Beamforming provides the time-varying signals radiated by the machine as a function of space. Any fault detection tool can be applied to the beamforming output. Spectral kurtosis, which highlights the impulsiveness of a signal as a function of frequency, is used in this study. The combination of spectral kurtosis with acoustic imaging makes possible the mapping of impulsiveness as a function of space and frequency. The efficiency of this approach lies in the source separation in the spatial and frequency domains. These mappings make possible the localization of impulsive sources. The faulty components of the machine have an impulsive behavior and thus will be highlighted on the mappings. The study presents experimental validations of the method on rotating machines.
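Spectral kurtosis, the impulsiveness indicator named above, can be computed from a short-time Fourier transform as sketched below. The sketch operates on a single synthetic channel; in the paper the same feature is applied to the beamforming outputs at each focus point.

```python
# Spectral kurtosis from an STFT: SK(f) = E[|X|^4] / E[|X|^2]^2 - 2.
import numpy as np
from scipy.signal import stft

def spectral_kurtosis(x, fs, nperseg=256):
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    mag2 = np.abs(Z) ** 2
    sk = np.mean(mag2 ** 2, axis=1) / np.mean(mag2, axis=1) ** 2 - 2.0
    return f, sk

# Toy signal: broadband noise plus repetitive impacts (a "faulty" component).
fs = 20000
x = np.random.randn(fs)
x[::2000] += 20.0                      # periodic impulses
f, sk = spectral_kurtosis(x, fs)
print(f[np.argmax(sk)], sk.max())      # frequency band with highest impulsiveness
```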
Phased Acoustic Array Measurements of a 5.75 Percent Hybrid Wing Body Aircraft
NASA Technical Reports Server (NTRS)
Burnside, Nathan J.; Horne, William C.; Elmer, Kevin R.; Cheng, Rui; Brusniak, Leon
2016-01-01
Detailed acoustic measurements of the noise from the leading-edge Krueger flap of a 5.75 percent Hybrid Wing Body (HWB) aircraft model were recently acquired with a traversing phased microphone array in the AEDC NFAC (Arnold Engineering Development Complex, National Full Scale Aerodynamics Complex) 40- by 80-Foot Wind Tunnel at NASA Ames Research Center. The spatial resolution of the array was sufficient to distinguish between individual support brackets over the full-scale frequency range of 100 to 2875 Hertz. For conditions representative of landing and take-off configuration, the noise from the brackets dominated other sources near the leading edge. Inclusion of flight-like brackets for select conditions highlights the importance of including the correct number of leading-edge high-lift device brackets with sufficient scale and fidelity. These measurements will support the development of new predictive models.
A Study of New Pulse Auscultation System
Chen, Ying-Yun; Chang, Rong-Seng
2015-01-01
This study presents a new type of pulse auscultation system, which uses a condenser microphone to measure pulse sound waves on the wrist, captures the microphone signal for filtering, amplifies the useful signal and outputs it to an oscilloscope in analog form for waveform display and storage and delivers it to a computer to perform a Fast Fourier Transform (FFT) and convert the pulse sound waveform into a heartbeat frequency. Furthermore, it also uses an audio signal amplifier to deliver the pulse sound by speaker. The study observed the principles of Traditional Chinese Medicine’s pulsing techniques, where pulse signals at places called “cun”, “guan” and “chi” of the left hand were measured during lifting (100 g), searching (125 g) and pressing (150 g) actions. Because the system collects the vibration sound caused by the pulse, the sensor itself is not affected by the applied pressure, unlike current pulse piezoelectric sensing instruments, therefore, under any kind of pulsing pressure, it displays pulse changes and waveforms with the same accuracy. We provide an acquired pulse and waveform signal suitable for Chinese Medicine practitioners’ objective pulse diagnosis, thus providing a scientific basis for this Traditional Chinese Medicine practice. This study also presents a novel circuit design using an active filtering method. An operational amplifier with its differential features eliminates the interference from external signals, including the instant high-frequency noise. In addition, the system has the advantages of simple circuitry, cheap cost and high precision. PMID:25875192
A study of new pulse auscultation system.
Chen, Ying-Yun; Chang, Rong-Seng
2015-04-14
This study presents a new type of pulse auscultation system, which uses a condenser microphone to measure pulse sound waves on the wrist, captures the microphone signal for filtering, amplifies the useful signal and outputs it to an oscilloscope in analog form for waveform display and storage and delivers it to a computer to perform a Fast Fourier Transform (FFT) and convert the pulse sound waveform into a heartbeat frequency. Furthermore, it also uses an audio signal amplifier to deliver the pulse sound by speaker. The study observed the principles of Traditional Chinese Medicine's pulsing techniques, where pulse signals at places called "cun", "guan" and "chi" of the left hand were measured during lifting (100 g), searching (125 g) and pressing (150 g) actions. Because the system collects the vibration sound caused by the pulse, the sensor itself is not affected by the applied pressure, unlike current pulse piezoelectric sensing instruments, therefore, under any kind of pulsing pressure, it displays pulse changes and waveforms with the same accuracy. We provide an acquired pulse and waveform signal suitable for Chinese Medicine practitioners' objective pulse diagnosis, thus providing a scientific basis for this Traditional Chinese Medicine practice. This study also presents a novel circuit design using an active filtering method. An operational amplifier with its differential features eliminates the interference from external signals, including the instant high-frequency noise. In addition, the system has the advantages of simple circuitry, cheap cost and high precision.
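A minimal sketch of the kind of processing described, band-limiting the wrist pulse sound and reading the dominant FFT peak as a heart rate, is shown below. The filter band, window, and synthetic signal are assumptions, not the authors' exact pipeline.

```python
# Estimate heart rate from a pulse-sound recording via band-pass filter + FFT.
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_bpm(x, fs):
    b, a = butter(2, [0.7, 3.0], btype="bandpass", fs=fs)  # ~42-180 bpm band
    y = filtfilt(b, a, x)
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), d=1 / fs)
    return 60.0 * freqs[np.argmax(spec)]

# Example: a synthetic 1.2 Hz "pulse" buried in microphone noise.
fs = 1000
t = np.arange(0, 30, 1 / fs)
x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(t.size)
print(heart_rate_bpm(x, fs))   # ≈ 72 bpm
```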
Huber, Rainer; Bisitz, Thomas; Gerkmann, Timo; Kiessling, Jürgen; Meister, Hartmut; Kollmeier, Birger
2018-06-01
The perceived quality of nine different single-microphone noise reduction (SMNR) algorithms was evaluated and compared in subjective listening tests with normal-hearing and hearing-impaired (HI) listeners. Speech samples mixed with traffic noise or party noise were processed by the SMNR algorithms. Subjects rated the amount of speech distortion, intrusiveness of background noise, listening effort and overall quality, using a simplified MUSHRA (ITU-R, 2003) assessment method. 18 normal-hearing and 18 moderately HI subjects participated in the study. Significant differences between the rating behaviours of the two subject groups were observed: while normal-hearing subjects clearly differentiated between the SMNR algorithms, HI subjects rated all processed signals very similarly. Moreover, HI subjects rated speech distortions of the unprocessed, noisier signals as being more severe than the distortions of the processed signals, in contrast to normal-hearing subjects. It may be harder for HI listeners to distinguish between additive noise and speech distortions, and/or they might have a different understanding of the term "speech distortion" than normal-hearing listeners have. The findings confirm that the evaluation of SMNR schemes for hearing aids should always involve HI listeners.
Speaker diarization system on the 2007 NIST rich transcription meeting recognition evaluation
NASA Astrophysics Data System (ADS)
Sun, Hanwu; Nwe, Tin Lay; Koh, Eugene Chin Wei; Bin, Ma; Li, Haizhou
2007-09-01
This paper presents a speaker diarization system developed at the Institute for Infocomm Research (I2R) for the NIST Rich Transcription 2007 (RT-07) evaluation task. We describe in detail our primary approaches to speaker diarization for the Multiple Distant Microphones (MDM) condition in the conference room scenario. Our proposed system consists of six modules: (1) a normalized least-mean-square (NLMS) adaptive filter for speaker direction estimation via Time Difference of Arrival (TDOA); (2) initial speaker clustering via a two-stage TDOA histogram distribution quantization approach; (3) multiple-microphone speaker data alignment via GCC-PHAT Time Delay Estimation (TDE) among all the distant microphone channel signals; (4) a speaker clustering algorithm based on GMM modeling; (5) non-speech removal via a speech/non-speech verification mechanism; and (6) silence removal via a "Double-Layer Windowing" (DLW) method. We achieved an error rate of 31.02% on the 2006 Spring (RT-06s) MDM evaluation task and a competitive overall error rate of 15.32% on the NIST Rich Transcription 2007 (RT-07) MDM evaluation task.
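The abstract above does not give implementation details for the GCC-PHAT time-delay step (module 3); the following is a minimal numpy sketch of GCC-PHAT delay estimation between two distant-microphone channels, with all names and parameters chosen for illustration rather than taken from the paper.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay between two microphone channels with GCC-PHAT.
    A positive result means `sig` lags `ref`."""
    n = sig.shape[0] + ref.shape[0]          # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12                   # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                        # delay in seconds

# Example: tau = gcc_phat(channel_1, channel_2, fs=16000, max_tau=0.002)
```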
A comparison between swallowing sounds and vibrations in patients with dysphagia
Movahedi, Faezeh; Kurosu, Atsuko; Coyle, James L.; Perera, Subashan
2017-01-01
Cervical auscultation refers to the observation and analysis of sounds or vibrations captured during swallowing using either a stethoscope or acoustic/vibratory detectors. Microphones and accelerometers have recently become two common sensors used in modern cervical auscultation methods. There are open questions about whether swallowing signals recorded by these two sensors provide unique or complementary information about swallowing function, or whether they present interchangeable information. The aim of this study is to present a broad comparison of swallowing signals recorded by a microphone and a tri-axial accelerometer from 72 patients (mean age 63.94 ± 12.58 years, 42 male, 30 female) who underwent videofluoroscopic examination. The participants swallowed one or more boluses of thickened liquids of different consistencies, including thin liquids, nectar-thick liquids, and pudding. A comfortable self-selected volume from a cup, or a volume controlled by the examiner from a 5 ml spoon, was given to the participants. A comprehensive set of features was extracted in the time, information-theoretic, and frequency domains from each of the 881 swallows presented in this study. The swallowing sounds exhibited significantly higher frequency content and kurtosis values than the swallowing vibrations. In addition, the Lempel-Ziv complexity was lower for swallowing sounds than for swallowing vibrations. To conclude, the information provided by microphones and accelerometers about swallowing function is unique, and these two transducers are not interchangeable. Consequently, the selection of transducer is a vital step in future studies. PMID:28495001
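As a concrete illustration of the kinds of features compared above, the sketch below computes kurtosis, a spectral measure, and a simple Lempel-Ziv complexity for one swallow segment. The binarization rule and the exact feature set are assumptions made for illustration, not the study's pipeline.

```python
import numpy as np
from scipy.stats import kurtosis

def lempel_ziv_complexity(binary_seq):
    """Number of phrases in a sequential Lempel-Ziv (LZ76-style) parsing."""
    s = ''.join('1' if b else '0' for b in binary_seq)
    i, c, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the phrase until it no longer appears in the history seen so far
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

def swallow_features(signal, fs):
    """Illustrative time / frequency / information-theoretic features for one swallow."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spectral_centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    binarized = signal > np.median(signal)   # common binarization choice for LZ complexity
    return {
        "kurtosis": kurtosis(signal),
        "spectral_centroid_hz": spectral_centroid,
        "lempel_ziv": lempel_ziv_complexity(binarized),
    }
```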
Microphones and Educational Media.
ERIC Educational Resources Information Center
Page, Marilyn
This paper describes the types of microphones that are available for use in media production. Definitions of 16 words and phrases used to describe microphones are followed by detailed descriptions of the two kinds of microphones as classified by mode of operation, i.e., velocity (ribbon) microphones and pressure-operated microphones, which…
NASA Astrophysics Data System (ADS)
Walker, K. T.
2013-12-01
The detection of nuclear explosions depends on the signal-to-noise ratio of recorded signals. Cross-correlation-based array processing algorithms, such as that used by the International Data Center for nuclear monitoring, lock onto the dominant signal, masking weaker signals from potential sources of interest along different back azimuths. Microbaroms and microseisms are continuous sources of acoustic and seismic noise in the ~0.1 to 0.5 Hz range, radiated by areas of the ocean where two opposing wave sets with the same period coexist. This energy travels tens of thousands of kilometers and routinely dominates the recorded spectra. At any given time, this noise may render useless several arrays that are otherwise in a good position to detect an explosion. It would therefore be useful to know in real time where such noise is expected to cause problems in event detection and location. In this presentation, I show that there is potential to use the NOAA Wave Watch 3 (NWW3) modeling program to routinely output, in real time, a prediction of the global distribution of microbarom and microseism sources. I do this by presenting a detailed analysis of 12 microphone arrays around the North Pacific that recorded microbaroms during 2010. For this analysis, a time-progressive, frequency-domain beamforming approach is implemented. It is assumed that microbarom sources are illuminated by a high density of intersecting dominant microbarom back azimuths. Common pelagic sources move around the North Pacific during the boreal winter. Summertime North Pacific sources are only observed by western Pacific arrays, presumably a result of weaker microbarom radiation and westward stratospheric winds. A well-defined source is resolved ˜2000 km off the coast of California in January 2011 that moves closer to land over several days. The source locations are corrected for deflection by horizontal winds using acoustic ray trace modeling with range-dependent atmospheric specifications provided by publicly available NOAA/NASA models. The observed source locations do not correlate with anomalies in NWW3 model field data. However, application of the opposing-wave microbarom source model of Waxler and Gilbert (2006) to the NWW3 directional wave height spectra output at buoy locations within 1100 km of the western North America coastline predicts microbarom radiation in locations that correlate with observed microbarom locations. Therefore, the availability of microbarom source strength maps as a real-time product from NWW3 depends only on an additional, minor code revision. Success in such a revision has recently been reported by Drs. Fabrice Ardhuin and Justin Stopa.
Preliminary Study on Acoustic Detection of Faults Experienced by a High-Bypass Turbofan Engine
NASA Technical Reports Server (NTRS)
Boyle, Devin K.
2014-01-01
The Vehicle Integrated Propulsion Research (VIPR) effort conducted by NASA and several partners provided an unparalleled opportunity to test a relatively low TRL concept regarding the use of far-field acoustics to identify faults occurring in a high-bypass turbofan engine. During VIPR Phase II ground-based testing of an aircraft-installed engine, in which a multitude of research sensors and methods were evaluated, an array of acoustic microphones was used to determine the viability of such an array for detecting failures occurring in a commercially representative high-bypass turbofan engine. The failures introduced during VIPR testing included commanding the engine's low pressure compressor (LPC) exit and high pressure compressor (HPC) 14th stage bleed valves abruptly to their failsafe positions during steady state
Aeroacoustic Experiments in the NASA Langley Low-Turbulence Pressure Tunnel
NASA Technical Reports Server (NTRS)
Choudhari, Meelan M.; Lockard, David P.; Macaraeg, Michele G.; Singer, Bart A.; Streett, Craig L.; Neubert, Guy R.; Stoker, Robert W.; Underbrink, James R.; Berkman, Mert E.; Khorrami, Mehdi R.
2002-01-01
A phased microphone array was used in the NASA Langley Low-Turbulence Pressure Tunnel to obtain acoustic data radiating from high-lift wing configurations. The data included noise localization plots and acoustic spectra. The tests were performed at Reynolds numbers based on the cruise-wing chord ranging from 3.6 x 10^6 to 19.2 x 10^6. The effects of Reynolds number were small and monotonic for Reynolds numbers above 7.2 x 10^6.
Wavevector-Frequency Analysis with Applications to Acoustics
1994-01-01
Excerpt recovered from the scanned report (reference fragments and a definition of joint moments): "Turbulent Boundary Layer Pressure Measured by Microphone Arrays," Journal of the Acoustical Society of America, vol. 49, no. 3, March 1971, pp. 862-877; Applications of Green's Functions in Science and Engineering, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1971; Ffowcs-Williams et al., Modern Methods for ... The joint statistics of the random variables of a random process are called joint moments. The m,n-th joint moment of the random variables v and w is defined by E[v^m w^n] = ∫∫ α^m β^n f_vw(α, β) dα dβ, where ...
Preliminary evaluation of a novel bone-conduction device for single-sided deafness.
Popelka, Gerald R; Derebery, Jennifer; Blevins, Nikolas H; Murray, Michael; Moore, Brian C J; Sweetow, Robert W; Wu, Ben; Katsis, Mina
2010-04-01
A new intraoral bone-conduction device has advantages over existing bone-conduction devices for reducing the auditory deficits associated with single-sided deafness (SSD). Existing bone-conduction devices effectively mitigate auditory deficits from single-sided deafness but have suboptimal microphone locations, limited frequency range, and/or require invasive surgery. A new device has been designed to improve microphone placement (in the ear canal of the deaf ear), provide a wider frequency range, and eliminate surgery by delivering bone-conduction signals to the teeth via a removable oral appliance. Forces applied by the oral appliance were compared with forces typically experienced by the teeth from normal functions such as mastication or from other appliances. Tooth surface changes were measured on extracted teeth, and transducer temperature was measured under typical use conditions. Dynamic operating range, including gain, bandwidth, and maximum output limits, was determined from uncomfortable loudness levels and vibrotactile thresholds, and speech recognition scores were measured using normal-hearing subjects. Auditory performance in noise (Hearing in Noise Test) was measured in a limited sample of SSD subjects. Overall comfort, ease of insertion and removal, and visibility of the oral appliance in comparison with traditional hearing aids were measured using a rating scale. The oral appliance produces forces that are far below those experienced by the teeth from normal functions or conventional dental appliances. The bone-conduction signal level can be adjusted to prevent tactile perception yet provide sufficient gain and output at frequencies from 250 to 12,000 Hz. The device does not damage tooth surfaces or produce heat, can be inserted and removed easily, and is as comfortable to wear as traditional hearing aids. The new microphone location has advantages for reducing the auditory deficits caused by SSD, including the potential to provide spatial cues introduced by reflections from the pinna, compared with microphone locations for existing devices. A new approach for SSD has been proposed that optimizes microphone location and delivers sound by bone conduction through a removable oral appliance. Measures in the laboratory using normal-hearing subjects indicate that the device provides useful gain and output for SSD patients, is comfortable, does not seem to have detrimental effects on oral function or oral health, and has several advantages over existing devices. Specifically, microphone placement is optimized for reducing the auditory deficit caused by SSD, frequency bandwidth is much greater, and the system does not require surgical placement. Auditory performance in a small sample of SSD subjects indicated a substantial advantage compared with not wearing the device. Future studies will involve performance measures on SSD patients wearing the device for longer periods.
Location of space debris by infrasound
NASA Astrophysics Data System (ADS)
Asming, Vladimir; Vinogradov, Yuri
2013-04-01
After a spent stage separates from a rocket, it re-enters the dense atmosphere, where it burns and breaks into many pieces moving separately. Ballisticians can calculate an approximate trajectory of a falling stage and outline an expected area where the debris can fall (the target ellipse). Such ellipses are usually rather large (on the order of 60 x 100 km). For safety reasons, all local inhabitants should be evacuated from a target area during a rocket launch. One problem is that ballisticians cannot compute the trajectories and areas exactly; there have been many cases in which debris fell outside the predicted areas, and rescue teams must investigate such cases so that changes can be made to the rockets. The largest pieces can contain remains of toxic rocket fuel and therefore must be found and deactivated. That is why the problem of debris location is of significant importance for overland fall areas. It is more or less solved in Kazakhstan, where large fragments of 1st stages can be seen in the steppe, but it is very difficult to find fragments of 2nd stages in Altai, Tomsk region and the Komi republic (taiga, mountains, swamps). The rocket debris produces strong infrasonic shock waves during reentry. Since 2009, the Kola Branch of the Geophysical Survey of RAS has participated in a joint project with the Khrunichev Space Center concerning infrasonic debris location. We have developed mobile infrasound arrays consisting of 3 microphones, an analog-to-digital converter, GPS and a notebook computer. The aperture is about 200 m, and deployment time is less than 1 hour. Currently we have 4 such arrays; one of them is wireless and consists of 3 units, each comprising a microphone, GPS and a radio transmitter. We have made several field measurements with 3 or 4 such arrays placed around the target ellipses of falling rocket stages in Kazakhstan ("Soyuz" rocket 1st stage) and in the Altai and Tomsk regions ("Proton" rocket 2nd stages). It was found that a typical 2nd stage divides into hundreds of pieces, each of which generates a shock wave. Associating the signals registered by different arrays is a complicated problem. We developed an approach based on modeling of realistic fragment trajectories. We assume that until some time t0 the whole stage moves along the predicted theoretical trajectory. At time t0 (disintegration) the pieces receive different ballistic coefficients and random increments of velocity. We continue each trajectory (solving a 2nd-order differential equation) using the coordinates at t0 and the velocities with random increments as initial conditions, and with different ballistic coefficients. Thus we obtain a 'pipe' of trajectories, each of which could in principle occur in reality. For each trajectory of the pipe we compute theoretical times and azimuths of shock wave arrivals at the arrays. If they are in agreement with the measured arrivals, we consider that the trajectory has occurred in reality and that its endpoint is the landing place of a rocket fragment. The experiment of locating a "Soyuz" 1st stage in Kazakhstan has shown that the errors of such location are less than 2 km, which is acceptable for practical use of the method.
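To make the trajectory "pipe" idea concrete, here is a minimal Monte Carlo sketch: from the state at break-up time t0, each candidate fragment is given a random velocity increment and ballistic coefficient and its fall is integrated with a simple quadratic-drag model. The atmosphere model, integration scheme, and all parameter ranges are illustrative assumptions, not values from the study.

```python
import numpy as np

G = 9.81                       # m/s^2
RHO0, H = 1.225, 8500.0        # assumed exponential atmosphere: density scale

def fall_trajectory(r0, v0, beta, dt=0.05):
    """Integrate a point-mass fall with quadratic drag; beta ~ Cd*A/m (m^2/kg)."""
    r, v = np.array(r0, float), np.array(v0, float)
    while r[2] > 0.0:                                  # r[2] is altitude
        rho = RHO0 * np.exp(-r[2] / H)
        drag = -0.5 * rho * beta * np.linalg.norm(v) * v
        a = np.array([0.0, 0.0, -G]) + drag
        v += a * dt
        r += v * dt
    return r                                           # landing point (x, y, ~0)

def trajectory_pipe(r_t0, v_t0, n=500, dv_sigma=30.0, beta_range=(5e-4, 5e-3)):
    """Monte Carlo 'pipe' of candidate fragment trajectories after break-up at t0."""
    rng = np.random.default_rng(0)
    landings = []
    for _ in range(n):
        dv = rng.normal(0.0, dv_sigma, size=3)          # random velocity increment
        beta = rng.uniform(*beta_range)                 # random ballistic coefficient
        landings.append(fall_trajectory(r_t0, v_t0 + dv, beta))
    return np.array(landings)
```

In the method described above, each simulated trajectory would additionally yield predicted shock-wave arrival times and azimuths at the arrays, and only trajectories consistent with the measured arrivals are retained.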
Soft-talker: a sound level monitor for the hard-of-hearing using an improved tactile transducer.
Walker, J R; Fenn, G; Smith, B Z
1987-04-01
We describe a small wearable device which enables deaf people to monitor the volume of their voices; it consists of a microphone, amplifier, signal rectifier, smoothing filter and a level detector connected to a wrist-worn vibrator, and provides vibrotactile feedback of voice level.
Objective Measure of Nasal Air Emission Using Nasal Accelerometry
ERIC Educational Resources Information Center
Cler, Meredith J.; Lien, Yu-An S.; Braden, Maia N.; Mittleman, Talia; Downing, Kerri; Stepp, Cara E.
2016-01-01
Purpose: This article describes the development and initial validation of an objective measure of nasal air emission (NAE) using nasal accelerometry. Method: Nasal acceleration and nasal airflow signals were simultaneously recorded while an expert speech language pathologist modeled NAEs at a variety of severity levels. In addition, microphone and…
Balloon Borne Infrasound Platforms for Remote Monitoring of Natural Hazards
NASA Astrophysics Data System (ADS)
Lees, J. M.; Bowman, D. C.
2016-12-01
In the last three years several NASA-supported balloon launches were instrumented with infrasound sensors to monitor acoustic wavefields in the stratosphere. Such high-altitude platforms may detect geoacoustic phenomena at much greater ranges than equivalent ground stations, and perhaps record sound waves that rarely reach the Earth's surface. Since acoustic waves are a key diagnostic for several natural hazards (volcanic eruptions, severe storms, and tsunamis, for example), the increased range and spatial coverage of balloon-borne arrays promise greater quantification and perhaps early warning of such events. Before this can be accomplished, the performance of stratospheric arrays must be compared to that of arrays on the ground. Here, we show evidence for 0.2 Hz infrasound associated with oceanic oscillations recorded during the night-time hours of the flights, consistent with concurrent ground recordings on the east and west coasts of North America. We also report numerous narrow-band acoustic signals (5-30 Hz) that resemble recordings made in the 1960s, the last time microphones were lofted into the stratosphere. Theoretical and ground-based observational data from Rind (1977) indicate loss of acoustic energy in the thermosphere, where heating of the upper atmosphere is predicted to be on the order of 30-40 kelvin per day. We propose testing these ideas by using extensive ground arrays recently deployed in North America in conjunction with airborne platforms installed in the mid-stratosphere. New experiments scheduled for 2016 include a circumnavigation of Antarctica (collected in June) as well as two proposed flights in New Mexico in September. The flights are designed to capture both known acoustic sources and events of opportunity.
Anderson, Melinda C; Arehart, Kathryn H; Souza, Pamela E
2018-02-01
Current guidelines for adult hearing aid fittings recommend the use of a prescriptive fitting rationale with real-ear verification that considers the audiogram for the determination of frequency-specific gain and compression ratios for wide dynamic range compression. However, the guidelines lack recommendations for how other common signal-processing features (e.g., noise reduction, frequency lowering, directional microphones) should be considered during the provision of hearing aid fittings and fine-tunings for adult patients. The purpose of this survey was to identify how audiologists make clinical decisions regarding common signal-processing features for hearing aid provision in adults. An online survey was sent to audiologists across the United States. The 22 survey questions addressed four primary topics including demographics of the responding audiologists, factors affecting selection of hearing aid devices, the approaches used in the fitting of signal-processing features, and the strategies used in the fine-tuning of these features. A total of 251 audiologists who provide hearing aid fittings to adults completed the electronically distributed survey. The respondents worked in a variety of settings including private practice, physician offices, university clinics, and hospitals/medical centers. Data analysis was based on a qualitative analysis of the question responses. The survey results for each of the four topic areas (demographics, device selection, hearing aid fitting, and hearing aid fine-tuning) are summarized descriptively. Survey responses indicate that audiologists vary in the procedures they use in fitting and fine-tuning depending on the specific feature, such that the approaches used for the fitting of frequency-specific gain differ from those used for other types of features (i.e., compression time constants, frequency lowering parameters, noise reduction strength, directional microphones, feedback management). Audiologists commonly rely on prescriptive fitting formulas and probe microphone measures for the fitting of frequency-specific gain, and rely on manufacturers' default settings and recommendations for both the initial fitting and the fine-tuning of signal-processing features other than frequency-specific gain. The survey results are consistent with a lack of published protocols and guidelines for fitting and adjusting signal-processing features beyond frequency-specific gain. To streamline current practice, a transparent evidence-based tool that enables clinicians to prescribe the setting of other features from individual patient characteristics would be desirable.
Hearing aid malfunction detection system
NASA Technical Reports Server (NTRS)
Kessinger, R. L. (Inventor)
1977-01-01
A malfunction detection system for detecting malfunctions in electrical signal processing circuits is disclosed. Malfunctions of a hearing aid in the form of frequency distortion and/or inadequate amplification by the hearing aid amplifier, as well as weakening of the hearing aid power supply are detectable. A test signal is generated and a timed switching circuit periodically applies the test signal to the input of the hearing aid amplifier in place of the input signal from the microphone. The resulting amplifier output is compared with the input test signal used as a reference signal. The hearing aid battery voltage is also periodically compared to a reference voltage. Deviations from the references beyond preset limits cause a warning system to operate.
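The patent above describes analog circuitry; as a rough digital analogue of the comparison logic it outlines (periodically switch a reference test tone onto the amplifier input, compare the output against expected gain and distortion limits, and compare the battery voltage with a reference), here is a hypothetical Python sketch. The amplify callable, expected gain, and all thresholds are illustrative assumptions.

```python
import numpy as np

def self_test(amplify, battery_voltage, fs=16000, expected_gain_db=30.0,
              gain_tol_db=3.0, thd_limit=0.05, v_ref=1.25):
    """Periodic self-test sketch: inject a reference tone, check gain,
    distortion, and battery voltage against preset limits."""
    t = np.arange(int(0.1 * fs)) / fs
    test = 0.01 * np.sin(2 * np.pi * 1000 * t)          # 1 kHz, 100 ms test tone
    out = np.asarray(amplify(test))

    gain_db = 20 * np.log10(np.std(out) / np.std(test)) # compare with reference gain
    gain_ok = abs(gain_db - expected_gain_db) < gain_tol_db

    spec = np.abs(np.fft.rfft(out))                     # tone falls exactly on bin k
    k = int(round(1000 * len(t) / fs))
    distortion = np.sqrt(max(np.sum(spec ** 2) - spec[k] ** 2, 0.0)) / (spec[k] + 1e-12)
    distortion_ok = distortion < thd_limit

    battery_ok = battery_voltage > v_ref                # battery vs. reference voltage
    return gain_ok and distortion_ok and battery_ok

# Example: a healthy linear amplifier with ~30 dB gain and a fresh cell passes
print(self_test(lambda x: 31.6 * x, battery_voltage=1.4))   # True
```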
NASA Astrophysics Data System (ADS)
Kaiser, Zachary David Epping
Documenting the presence of rare bat species can be difficult. The current summer survey protocol for the federally endangered Indiana bat (Myotis sodalis) requires passive acoustic sampling with directional microphones (e.g., Anabats), but there are still questions about best practices for choosing survey sites and appropriate detector models. Indiana bats are capable of foraging in an array of cover types, including structurally-complex, interior forests. Further, data acquisition among different commercially available bat detectors is likely highly variable, due to the use of proprietary microphones with different frequency responses, sensitivities, and directionality. We paired omnidirectional Wildlife Acoustic SM2BAT+ (SM2) and directional Titley Scientific Anabat SD2 (Anabat) detectors at 71 random points near Indianapolis, Indiana from May-August 2012-2013 to compare data acquisition by phonic group (low, mid, Myotis) and to determine what factors affect probability of detection and site occupancy for Indiana bats when sampling with acoustics near an active maternity colony (0.20-8.39 km away). Weatherproofing consisted of 45°-angled PVC tubes for Anabat microphones and foam shielding for SM2 microphones; microphones were paired at 2 m and 5 m heights. Habitat and landscape covariates were measured in the field or via ArcGIS. We adjusted file parameters to make SM2 and Anabat data comparable. Files were identified using Bat Call ID software, with visual inspection of Indiana bat calls. The effects of detector type, phonic group, height, and their interactions on mean files recorded per site were assessed using generalized estimating equations and LSD pairwise comparisons. We reduced probability of detection (p) and site occupancy (ψ) model covariates with Pearson's correlation and PCA. We used Presence 6.1 software and Akaike's Information Criteria to assess models for p and ψ. Anabats and SM2s did not perform equally. Anabats recorded more low and midrange files, but fewer Myotis files per site than SM2s. When comparing the same model of detectors, deployment height did not impact data acquisition. Weatherproofing may limit the ability of Anabats to record Myotis, but Anabat microphones may have greater detection ranges for low and midrange bats. Indiana bat detections were low for both detector types, representing only 4.4% of identifiable bat files recorded by SM2s. We detected Indiana bats at 43.7% of sampled sites and on 31.4% of detector-nights; detectability increased as "forest closure" and mean nightly temperature increased, likely due to reduced clutter and increased bat activity, respectively. Proximity to colony trees and specific cover types generally did not affect occupancy, suggesting that Indiana bats use a variety of cover types in this landscape. Omnidirectional SMX-US microphones may be more appropriate for Indiana bat surveys than directional Anabat microphones. However, we conclude that 2 nights of passive acoustic sampling per site may be insufficient for reliably detecting this species when it is present. In turn, the use of acoustic monitoring as a means to document presence or probable absence should be reassessed.
Sonic Booms on Big Structures (SonicBOBS) Phase I Database; NASA Dryden Sensors
NASA Technical Reports Server (NTRS)
Haering, Edward A., Jr.; Arnac, Sarah Renee
2010-01-01
This DVD contains 13 channels of microphone data and up to 22 channels of pressure transducer data collected in September 2009 around several buildings located at Edwards Air Force Base. These data were recorded by NASA Dryden. Not included are data taken by NASA Langley and Gulfstream. Each day's data is in a separate folder, and each pass is in a file beginning with "SonicBOBS_" (for microphone data) or "SonicBOBSBB_" (for BADS and BASS data), followed by the month, day, and year as two digits each, followed by the hour, minute, and second after midnight GMT. The filename time given is the END time of the raw recording file. In the case of the microphone data, this time may be several minutes after the sonic boom, and is according to the PC's uncalibrated clock. The Matlab data files have the actual time as provided by a GPS-based IRIG-B signal recorded concurrently with the data. Microphone data is given from 5 seconds prior to 20 seconds after the sonic boom. BADS and BASS data are given for the full recording, 6 seconds for the BADS and 10 seconds for the BASS. As an example of the naming convention, file "SonicBOBS_091209154618.mat" is from September 12, 2009 at 15:46:18 GMT. Note that data taken on September 12, 2009 prior to 01:00:00 GMT were of the Space Shuttle Discovery (a sonic boom of opportunity), which was on September 11, 2009 in local Pacific Daylight Time.
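Given the naming convention stated above, the end-of-recording GMT time can be recovered from a file name as in the following sketch (function name and return format are illustrative only).

```python
from datetime import datetime, timezone

def parse_sonicbobs_filename(name):
    """Parse the end-of-recording GMT time from a SonicBOBS-style file name,
    e.g. 'SonicBOBS_091209154618.mat' -> ('SonicBOBS', 2009-09-12 15:46:18 UTC)."""
    stem = name.rsplit(".", 1)[0]
    prefix, stamp = stem.rsplit("_", 1)        # 'SonicBOBS' or 'SonicBOBSBB'
    end_time = datetime.strptime(stamp, "%m%d%y%H%M%S").replace(tzinfo=timezone.utc)
    return prefix, end_time

print(parse_sonicbobs_filename("SonicBOBS_091209154618.mat"))
```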
Conceptual Sound System Design for Clifford Odets' "GOLDEN BOY"
NASA Astrophysics Data System (ADS)
Yang, Yen Chun
There are two different aspects to the process of sound design, "Arts" and "Science". In my opinion, the sound design should engage both aspects strongly and in interaction with each other. I started the process of designing the sound for GOLDEN BOY by building the city soundscape of New York City in 1937. The scenic design for this piece is in the round, putting the audience all around the stage; this gave me a great opportunity to use surround and spatialization techniques to transform the space into a different sonic world. My spatialization design is composed of two subsystems -- one is the four (4) speakers in the center cluster diffusing towards the four (4) sections of the audience, and the other is the four (4) speakers on the four (4) corners of the theatre. The outside ring provides rich sound source localization and the inside ring provides more support for control of the spatialization details. In my design four (4) lavalier microphones are hung under the center iron cage from the four (4) corners of the stage. Each microphone is ten (10) feet above the stage. The signal for each microphone is sent to the two (2) center speakers in the cluster diagonally opposite the microphone. With the appropriate level adjustment of the microphones, the audience will not notice the amplification of the voices; however, through my spatialization system, the presence and location of the voices of all actors are clearly preserved for the entire audience. With such vocal reinforcement provided by the microphones, I no longer need to worry about the underscoring overwhelming the dialogue on stage. A successful sound system design should not only provide a functional system, but also take the responsibility of bringing the actors' voices to the audience and engaging the audience with the world that we create on stage. By designing a system which reinforces the actors' voices while at the same time providing control over the localization and movement of sound effects, I was able not only to make the text present and clear for the audiences, but also to support the storyline strongly through my composed music, environmental soundscapes, and underscoring.
Development of respiratory rhythms in perinatal chick embryos.
Chiba, Y; Khandoker, A H; Nobuta, M; Moriya, K; Akiyama, R; Tazawa, H
2002-04-01
In chick embryos, gas exchange takes place via both the chorioallantoic membrane (CAM) and the lungs from approximately 1 day prior to hatching. The present study was designed to elucidate the development of respiratory rhythms in the chick embryo during the whole pipping (perinatal) period with a condenser-microphone measuring system. The microphone was hermetically attached to the eggshell over the air cell on day 18 of incubation. It first detected a cardiogenic signal (i.e. the acoustocardiogram), and then beak clapping and breathing signals (the acoustorespirogram, ARG). The first signals of lung ventilation appeared intermittently and irregularly, approximately once per 5 s, among the clapping signals after the embryo penetrated the air cell with its beak (internal pipping, IP). The respiratory rhythm then developed, at first irregularly and subsequently at a more regular rate. The envelope pattern of breathing from the onset of IP through external pipping (EP) to hatching was constructed by a specially devised procedure, which eliminated external and internal noises. The envelope patterns indicated that the IP, EP and whole perinatal periods of 10 embryos were 14.1+/-6.4 (S.D.), 13.6+/-4.0 and 27.6+/-5.4 h, respectively. In addition, they also indicated the period of embryonic hatching activity (i.e. the climax), which was 48+/-19 min. The development of the respiratory rhythm was also shown by the instantaneous respiratory rate (IRR), defined as the inverse of the interval between two adjacent ARG waves.
High sensitivity capacitive MEMS microphone with spring supported diaphragm
NASA Astrophysics Data System (ADS)
Mohamad, Norizan; Iovenitti, Pio; Vinay, Thurai
2007-12-01
Capacitive (condenser) microphones work on the principle of variable capacitance: the voltage across the electrically charged diaphragm and back plate changes as the diaphragm moves in response to sound pressure. Considerable research has been carried out to increase the sensing performance of microphones while reducing their size to cater for various modern applications such as mobile communication and hearing aid devices. This paper reviews the development and current performance of several condenser MEMS microphone designs, and introduces a microphone with a spring-supported diaphragm to further improve condenser microphone performance. Numerical analysis using Coventor FEM software shows that this new microphone design has a higher mechanical sensitivity than the existing edge-clamped flat-diaphragm condenser MEMS microphone. The spring-supported diaphragm is shown to have a flat frequency response up to 7 kHz and to be more stable under variations of the diaphragm residual stress. The microphone is designed to be easily fabricated using existing silicon fabrication technology, and its stability against residual stress increases its reproducibility.
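To make the variable-capacitance principle concrete, the sketch below estimates the rest capacitance and small-signal output sensitivity of an idealized parallel-plate condenser microphone under a constant-charge approximation. All numbers are illustrative assumptions, not dimensions from the paper.

```python
import numpy as np

EPS0 = 8.854e-12                    # vacuum permittivity, F/m

def condenser_sensitivity(area_m2, gap_m, v_bias, compliance_m_per_pa):
    """Parallel-plate approximation: C0 = eps0*A/d and, at constant charge,
    dV/dp ~ V_bias * (dx/dp) / d, where dx/dp is the diaphragm compliance."""
    c0 = EPS0 * area_m2 / gap_m                 # rest capacitance, F
    dv_dp = v_bias * compliance_m_per_pa / gap_m  # electrical sensitivity, V/Pa
    return c0, dv_dp

# Illustrative numbers only: 0.5 mm square diaphragm, 3 um gap, 10 V bias, 5 nm/Pa
c0, s = condenser_sensitivity(area_m2=(0.5e-3) ** 2, gap_m=3e-6,
                              v_bias=10.0, compliance_m_per_pa=5e-9)
print(f"C0 = {c0 * 1e12:.2f} pF, sensitivity = {s * 1e3:.1f} mV/Pa")
```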
A Comparative Study of Simulated and Measured Main Landing Gear Noise for Large Civil Transports
NASA Technical Reports Server (NTRS)
Konig, Benedikt; Fares, Ehab; Ravetta, Patricio; Khorrami, Mehdi R.
2017-01-01
Computational results for the NASA 26%-scale model of a six-wheel main landing gear with and without a toboggan-shaped noise reduction fairing are presented. The model is a high-fidelity representation of a Boeing 777-200 aircraft main landing gear. A lattice Boltzmann method was used to simulate the unsteady flow around the model in isolation. The computations were conducted in free-air at a Mach number of 0.17, matching a recent acoustic test of the same gear model in the Virginia Tech Stability Wind Tunnel in its anechoic configuration. Results obtained on a set of grids with successively finer spatial resolution demonstrate the challenge in resolving/capturing the flow field for the smaller components of the gear and their associated interactions, and the resulting effects on the high-frequency segment of the farfield noise spectrum. Farfield noise spectra were computed based on an FWH integral approach, with simulated pressures on the model solid surfaces or flow-field data extracted on a set of permeable surfaces enclosing the model as input. Comparison of these spectra with microphone array measurements obtained in the tunnel indicated that, for the present complex gear model, the permeable surfaces provide a more accurate representation of farfield noise, suggesting that volumetric effects are not negligible. The present study also demonstrates that good agreement between simulated and measured farfield noise can be achieved if consistent post-processing is applied to both physical and synthetic pressure records at array microphone locations.
Auroral Infrasound Observed at I53US at Fairbanks, Alaska
NASA Astrophysics Data System (ADS)
Wilson, C. R.; Olson, J. V.
2003-12-01
In this presentation we will describe two different types of auroral infrasound recently observed at Fairbanks, Alaska in the pass band from 0.015 to 0.10 Hz. Infrasound signals associated with auroral activity (AIW) have been observed in Fairbanks over the past 30 years with infrasonic microphone arrays. The installation of the new CTBT/IMS infrasonic array, I53US, at Fairbanks has resulted in a greatly increased quality of the infrasonic data with which to study natural sources of infrasound. In the historical data at Fairbanks all the auroral infrasonic waves (AIW) detected were found to be the result of bow waves that are generated by supersonic motion of auroral arcs that contain strong electrojet currents. This infrasound is highly anisotropic, moving in the same direction as that of the auroral arc. AIW bow waves observed in 2003 at I53US will be described. Recently at I53US we have observed many events of very high trace velocity that are comprised of continuous, highly coherent wave trains. These waves occur in the morning hours at times of strong auroral activity. This new type of very high trace velocity AIW appears to be associated with pulsating auroral displays. Pulsating auroras occur predominantly after magnetic midnight (10:00 UT at Fairbanks). They are a usual part of the recovery phase of auroral substorms and are produced by energetic electrons precipitating into the atmosphere. Given proper dark, cloudless sky conditions during the AIW events, bright pulsating auroral forms were sometimes visible overhead.
Modeling of influencing parameters in active noise control on an enclosure wall
NASA Astrophysics Data System (ADS)
Tarabini, Marco; Roure, Alain
2008-04-01
This paper investigates, by means of a numerical model, the possibility of using an active noise barrier to virtually reduce the acoustic transparency of a partition wall inside an enclosure. The room is modeled with the image method as a rectangular enclosure with a stationary point source; the active barrier is made up of an array of loudspeakers and error microphones and is meant to minimize the squared sound pressure on a wall using decentralized control. Simulations investigate the effects of the enclosure characteristics and of the barrier geometric parameters on the sound pressure attenuation on the controlled partition, on the total enclosure potential energy and on the diagonal control stability. Performance is analyzed in a frequency range of 25-300 Hz at discrete 25 Hz steps. Influencing parameters and their effects on system performance are identified with a statistical inference procedure. Simulation results show that it is possible to reduce, on average, the sound pressure on the controlled partition. In the investigated configuration, the surface attenuation and the diagonal control stability are mainly driven by the distance between the loudspeakers and the error microphones and by the loudspeaker directivity; minor effects are due to the distance between the error microphones and the wall, the wall reflectivity and the active barrier grid meshing. Room dimensions and source position have negligible effects. Experimental results confirm the validity of the model and the efficiency of the barrier in reducing the wall acoustic transparency.
Background noise levels measured in the NASA Lewis 9- by 15-foot low-speed wind tunnel
NASA Technical Reports Server (NTRS)
Woodward, Richard P.; Dittmar, James H.; Hall, David G.; Kee-Bowling, Bonnie
1994-01-01
The acoustic capability of the NASA Lewis 9 by 15 Foot Low Speed Wind Tunnel has been significantly improved by reducing the background noise levels measured by in-flow microphones. This was accomplished by incorporating streamlined microphone holders having a profile developed by researchers at the NASA Ames Research Center. These new holders were fabricated for fixed mounting on the tunnel wall and for an axially traversing microphone probe which was mounted to the tunnel floor. Measured in-flow noise levels in the tunnel test section were reduced by about 10 dB with the new microphone holders compared with those measured with the older, less refined microphone holders. Wake interference patterns between fixed wall microphones were measured and resulted in preferred placement patterns for these microphones to minimize these effects. Acoustic data from a model turbofan operating in the tunnel test section showed that results for the fixed and translating microphones were equivalent for common azimuthal angles, suggesting that the translating microphone probe, with its significantly greater angular resolution, is preferred for sideline noise measurements. Fixed microphones can provide a local check on the traversing microphone data quality, and record acoustic performance at other azimuthal angles.
Infrasound from Wind Turbines Could Affect Humans
ERIC Educational Resources Information Center
Salt, Alec N.; Kaltenbach, James A.
2011-01-01
Wind turbines generate low-frequency sounds that affect the ear. The ear is superficially similar to a microphone, converting mechanical sound waves into electrical signals, but does this by complex physiologic processes. Serious misconceptions about low-frequency sound and the ear have resulted from a failure to consider in detail how the ear…
Educational Inductive Gravimeter
ERIC Educational Resources Information Center
Nunn, John
2014-01-01
A simple inductive gravimeter constructed from a rigid plastic pipe and insulated copper wire is described. When a magnet is dropped through the vertically mounted pipe it induces small alternating voltages. These small signals are fed to the microphone input of a typical computer and sampled at a typical rate of 44.1 kHz using a custom computer…
Acoustic Measurement Of Periodic Motion Of Levitated Object
NASA Technical Reports Server (NTRS)
Watkins, John L.; Barmatz, Martin B.
1992-01-01
Some internal vibrations, oscillations in position, and rotations of an acoustically levitated object can be measured by use of a microphone already installed in a typical levitation chamber for tuning the chamber to resonance and monitoring operation. The levitating acoustic signal is modulated by the lower-frequency motion of the object. The amplitude modulation is detected and analyzed spectrally to determine the amplitudes and frequencies of the motions.
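The brief above describes detecting the amplitude modulation and analyzing it spectrally; one way to do this in software is an envelope (analytic-signal) demodulation followed by an FFT, as in the sketch below. The approach and parameters are an assumption for illustration, not the original instrumentation.

```python
import numpy as np
from scipy.signal import hilbert

def motion_spectrum(mic_signal, fs):
    """Recover low-frequency object motion that amplitude-modulates the
    levitation tone: envelope via the analytic signal, then a spectrum."""
    envelope = np.abs(hilbert(mic_signal))     # AM demodulation
    envelope -= envelope.mean()                # remove the carrier level
    window = np.hanning(len(envelope))
    spectrum = np.abs(np.fft.rfft(envelope * window))
    freqs = np.fft.rfftfreq(len(envelope), 1.0 / fs)
    return freqs, spectrum                     # peaks indicate motion frequencies
```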
NASA Technical Reports Server (NTRS)
Ventrice, M. B.; Fang, J. C.; Purdy, K. R.
1975-01-01
A system using a hot-wire transducer as an analog of a liquid propellant droplet was employed to investigate the ingredients of the acoustic instability of liquid-propellant rocket engines. It was assumed that the combustion process was vaporization-limited and that the combustion chamber was acoustically similar to a closed-closed right-circular cylinder. Before studying the hot-wire closed-loop system (the analog system), a microphone closed-loop system, which used the response of a microphone as the source of a linear feedback exciting signal, was investigated to establish the characteristics of self-sustaining acoustic fields. Self-sustained acoustic fields were found to occur only at resonant frequencies of the chamber. In the hot-wire closed-loop system, the response of the hot-wire anemometer was used as the source of the feedback exciting signal. The self-sustained acoustic fields which developed in this system were always found to be harmonically distorted and to have as their fundamental frequency a resonant frequency for which there also existed a second resonant frequency approximately twice the fundamental.
NASA Astrophysics Data System (ADS)
Dumoulin, Romain
Despite the fact that noise-induced hearing loss remains the number one occupational disease in developed countries, individual noise exposure levels are still rarely known and infrequently tracked. Indeed, efforts to standardize noise exposure measurements face disadvantages such as costly instrumentation and difficulties associated with on-site implementation. Given their advanced technical capabilities and widespread daily usage, mobile phones could be used to measure noise levels and make noise monitoring more accessible. However, the use of mobile phones for measuring noise exposure is currently limited by the lack of formal procedures for their calibration and challenges regarding the measurement procedure. Our research investigated the calibration of mobile phone-based solutions for measuring noise exposure using a mobile phone's built-in microphones and wearable external microphones. The proposed calibration approach integrated corrections that took into account microphone placement error. The corrections were of two types: frequency-dependent, using a digital filter, and noise level-dependent, based on the difference between the C-weighted and A-weighted levels of the noise measured by the phone. The electro-acoustical limitations and measurement calibration procedure of the mobile phone were investigated. The study also sought to quantify the effect of noise exposure characteristics on the accuracy of calibrated mobile phone measurements. Measurements were carried out in reverberant and semi-anechoic chambers with several mobile phone units of the same model, two types of external devices (an earpiece and a headset with an in-line microphone) and an acoustical test fixture (ATF). The proposed calibration approach significantly improved the accuracy of the noise level measurements in diffuse and free fields, with better results in the diffuse field and with ATF positions causing little or no acoustic shadowing. Several sources of errors and uncertainties were identified, including errors associated with inter-unit variability, the presence of signal saturation, and the microphone placement relative to the source and the wearer. The results of the investigations and validation measurements led to recommendations regarding the measurement procedure, including the use of external microphones with lower sensitivity, and provided the basis for a standardized, unique factory-default calibration method intended for implementation in any mobile phone. A user-defined adjustment was proposed to minimize the errors associated with calibration and the acoustical field. Mobile phones implementing the proposed laboratory calibration and used with external microphones showed great potential as noise exposure instruments. Combined with their potential as training and prevention tools, the expansion of their use could significantly help reduce the risks of noise-induced hearing loss.
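To make the level-dependent correction idea concrete, the sketch below computes the LC - LA spectral-balance indicator from octave-band levels using the standard IEC 61672 A- and C-weighting formulas and then maps it to a placement correction. The correction curve itself is a hypothetical placeholder, not the mapping derived in the thesis.

```python
import numpy as np

def a_weight_db(f):
    """IEC 61672 A-weighting in dB at frequency f (Hz)."""
    f2 = np.asarray(f, float) ** 2
    ra = (12194.0**2 * f2**2) / ((f2 + 20.6**2) *
         np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2)) * (f2 + 12194.0**2))
    return 20 * np.log10(ra) + 2.00

def c_weight_db(f):
    """IEC 61672 C-weighting in dB at frequency f (Hz)."""
    f2 = np.asarray(f, float) ** 2
    rc = (12194.0**2 * f2) / ((f2 + 20.6**2) * (f2 + 12194.0**2))
    return 20 * np.log10(rc) + 0.06

def lc_minus_la(band_levels_db, band_freqs):
    """LC - LA indicator from octave-band levels (dB SPL)."""
    la = 10 * np.log10(np.sum(10 ** ((band_levels_db + a_weight_db(band_freqs)) / 10)))
    lc = 10 * np.log10(np.sum(10 ** ((band_levels_db + c_weight_db(band_freqs)) / 10)))
    return lc - la

def placement_correction_db(lc_la):
    """Hypothetical level-dependent correction keyed to LC - LA (values illustrative)."""
    return float(np.interp(lc_la, [0.0, 5.0, 10.0, 15.0], [0.0, 1.0, 2.5, 4.0]))
```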
Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.
2011-01-01
Objectives Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the directional microphone when the speech and masker were spatially separated, emphasizing the importance of measuring binaural benefits separately for each HRTF. Evaluation of binaural benefits indicated that binaural squelch and spatial release from masking were found for all HRTFs and binaural summation was found for all but one HRTF, although binaural summation was less robust than the other types of binaural benefits. Additionally, the results indicated that neither interaural time nor level cues dominated binaural benefits for the normal hearing participants. Conclusions This study provides a means to measure the degree to which cochlear implant microphones affect acoustic hearing with respect to speech perception in noise. It also provides measures that can be used to evaluate the independent contributions of interaural time and level cues. These measures provide tools that can aid researchers in understanding and improving binaural benefits in acoustic hearing individuals listening via cochlear implant microphones. PMID:21412155
Detection and tracking of drones using advanced acoustic cameras
NASA Astrophysics Data System (ADS)
Busset, Joël.; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas
2015-10-01
Recent events of drones flying over city centers, official buildings and nuclear installations have stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking drones can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor is completely passive and composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). They are a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may only occupy a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal coming back to the sensor. The distance of detection depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both laboratory environments and outdoor conditions. It was determined that drones can be tracked at up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person 80 to 100 meters away can be captured with acceptable speech intelligibility.
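The vendor's acoustic imaging algorithm is not published in the abstract above; a generic way to build such a power-versus-direction map is frequency-domain delay-and-sum beamforming over a grid of directions, sketched below. Array geometry, grid spacing, and the simple plane-wave model are assumptions for illustration.

```python
import numpy as np

def acoustic_image(frames, mic_xyz, fs, c=343.0):
    """Delay-and-sum map of incoming sound power versus (azimuth, elevation)
    for one block of multichannel data. frames: (n_mics, n_samples),
    mic_xyz: (n_mics, 3) microphone positions in meters."""
    n_mics, n = frames.shape
    spectra = np.fft.rfft(frames, axis=1)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    az_grid = np.radians(np.arange(0, 360, 5))
    el_grid = np.radians(np.arange(-10, 61, 5))
    power = np.zeros((len(el_grid), len(az_grid)))
    for i, el in enumerate(el_grid):
        for j, az in enumerate(az_grid):
            u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
            delays = mic_xyz @ u / c                       # plane-wave delay per mic
            steer = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
            power[i, j] = np.sum(np.abs(np.sum(spectra * steer, axis=0)) ** 2)
    return az_grid, el_grid, power                         # peak indicates source direction
```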
NASA Astrophysics Data System (ADS)
Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.
2007-09-01
Most work on the direction-of-arrival (DOA) estimation problem has focused on a two-dimensional (2D) scenario in which only the azimuth angle needs to be estimated. In many practical situations, however, we have to deal with a three-dimensional scenario, and the ability to estimate both azimuth and elevation angles with high accuracy and low complexity becomes important. We present the theoretical and practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, for field measurements. Processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.
NASA Astrophysics Data System (ADS)
Desloge, Joseph G.; Zimmer, Martin J.; Zurek, Patrick M.
2004-05-01
Adaptive multimicrophone systems are currently used for a variety of noise-cancellation applications (such as hearing aids) to preserve signals arriving from a particular (target) direction while canceling other (jammer) signals in the environment. Although the performance of these systems is known to degrade with increasing reverberation, there are few measurements of adaptive performance in everyday reverberant environments. In this study, adaptive performance was compared to that of a simple, nonadaptive cardioid microphone to determine a measure of adaptive benefit. Both systems used recordings (at an Fs of 22
Binaural segregation in multisource reverberant environments.
Roman, Nicoleta; Srinivasan, Soundararajan; Wang, DeLiang
2006-12-01
In a natural environment, speech signals are degraded by both reverberation and concurrent noise sources. While human listening is robust under these conditions using only two ears, current two-microphone algorithms perform poorly. The psychological process of figure-ground segregation suggests that the target signal is perceived as a foreground while the remaining stimuli are perceived as a background. Accordingly, the goal is to estimate an ideal time-frequency (T-F) binary mask, which selects the target if it is stronger than the interference in a local T-F unit. In this paper, a binaural segregation system that extracts the reverberant target signal from multisource reverberant mixtures by utilizing only the location information of the target source is proposed. The proposed system combines target cancellation through adaptive filtering with a binary decision rule to estimate the ideal T-F binary mask. The main observation in this work is that the target attenuation in a T-F unit resulting from adaptive filtering is correlated with the relative strength of target to mixture. A comprehensive evaluation shows that the proposed system yields large SNR gains. In addition, comparisons using SNR as well as automatic speech recognition measures show that this system outperforms standard two-microphone beamforming approaches and a recent binaural processor.
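For readers unfamiliar with the ideal T-F binary mask used as the computational goal above, here is a minimal sketch of how such a mask is defined and applied when the premixed target and interference are available (as in an oracle evaluation). STFT parameters and the local criterion are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(target, interference, fs, lc_db=0.0, nperseg=512):
    """Ideal T-F binary mask: 1 where the target exceeds the interference
    by the local criterion lc_db in a time-frequency unit, else 0."""
    _, _, T = stft(target, fs=fs, nperseg=nperseg)
    _, _, I = stft(interference, fs=fs, nperseg=nperseg)
    local_snr_db = 10 * np.log10((np.abs(T) ** 2 + 1e-12) / (np.abs(I) ** 2 + 1e-12))
    return (local_snr_db > lc_db).astype(float)

def apply_mask(mixture, mask, fs, nperseg=512):
    """Resynthesize a rough target estimate by masking the mixture STFT."""
    _, _, M = stft(mixture, fs=fs, nperseg=nperseg)
    _, estimate = istft(M * mask, fs=fs, nperseg=nperseg)
    return estimate
```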
NASA Astrophysics Data System (ADS)
Park, Nam In; Kim, Seon Man; Kim, Hong Kook; Kim, Ji Woon; Kim, Myeong Bo; Yun, Su Won
In this paper, we propose a video-zoom driven audio-zoom algorithm that provides audio zooming effects in accordance with the degree of video zoom. The proposed algorithm is designed around a super-directive beamformer operating with a 4-channel microphone system, in conjunction with a soft masking process that considers the phase differences between microphones. The audio-zoom processed signal is then obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. Finally, a real-time audio-zoom system is implemented on an ARM Cortex-A8 with a clock speed of 600 MHz after several levels of optimization, including algorithmic, C-code, and memory optimizations, are performed. To evaluate the complexity of the proposed real-time audio-zoom system, test data 21.3 seconds long, sampled at 48 kHz, were used. The experiments show that the processing time for the proposed audio-zoom system occupies 14.6% or less of the ARM clock cycles. Experimental results obtained in a semi-anechoic chamber also show that the signal from the front direction can be amplified by approximately 10 dB relative to the other directions.
New Perspectives on Assessing Amplification Effects
Souza, Pamela E.; Tremblay, Kelly L.
2006-01-01
Clinicians have long been aware of the range of performance variability with hearing aids. Despite improvements in technology, there remain many instances of well-selected and appropriately fitted hearing aids whereby the user reports minimal improvement in speech understanding. This review presents a multistage framework for understanding how a hearing aid affects performance. Six stages are considered: (1) acoustic content of the signal, (2) modification of the signal by the hearing aid, (3) interaction between sound at the output of the hearing aid and the listener's ear, (4) integrity of the auditory system, (5) coding of available acoustic cues by the listener's auditory system, and (6) correct identification of the speech sound. Within this framework, this review describes methodology and research on 2 new assessment techniques: acoustic analysis of speech measured at the output of the hearing aid and auditory evoked potentials recorded while the listener wears hearing aids. Acoustic analysis topics include the relationship between conventional probe microphone tests and probe microphone measurements using speech, appropriate procedures for such tests, and assessment of signal-processing effects on speech acoustics and recognition. Auditory evoked potential topics include an overview of physiologic measures of speech processing and the effect of hearing loss and hearing aids on cortical auditory evoked potential measurements in response to speech. Finally, the clinical utility of these procedures is discussed. PMID:16959734
Couchoux, Charline; Aubert, Maxime; Garant, Dany; Réale, Denis
2015-05-06
Technological advances can greatly benefit the scientific community by making new areas of research accessible. The study of animal vocal communication, in particular, can gain new insights and knowledge from technological improvements in recording equipment. Our comprehension of the acoustic signals emitted by animals would be greatly improved if we could continuously track the daily natural emissions of individuals in the wild, especially in the context of integrating individual variation into evolutionary ecology research questions. We show here how this can be accomplished using an operational tiny audio recorder that can easily be fitted as an on-board acoustic data-logger on small free-ranging animals. The high-quality 24 h acoustic recording logged on the spy microphone device allowed us to very efficiently collect daylong chipmunk vocalisations, giving us much more detailed data than the classical use of a directional microphone over an entire field season. The recordings also allowed us to monitor individual activity patterns and record incredibly long resting heart rates, and to identify self-scratching events and even whining from pre-emerging pups in their maternal burrow. PMID:25944509
Adaptive Noise Suppression Using Digital Signal Processing
NASA Technical Reports Server (NTRS)
Kozel, David; Nelson, Richard
1996-01-01
A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is voiced or unvoiced; during unvoiced frames an estimate of the noise is obtained, and a running average of the noise is used to approximate its expected value. Applications include the emergency egress vehicle and the crawler transporter.
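A minimal single-channel sketch of SNR-dependent spectral subtraction in the spirit of the description above: the noise spectrum is averaged over frames judged unvoiced, the subtraction factor grows as the frame SNR drops, and very low-amplitude residuals are squelched. The thresholds and the energy-based voiced/unvoiced test are assumptions, not the authors' exact rules.

```python
import numpy as np

def spectral_subtract(frames, noise_mag, vad_threshold_db=2.0, floor=0.05):
    """frames: 2-D array (n_frames, frame_len) of windowed time-domain frames;
    noise_mag: initial noise magnitude spectrum (length frame_len//2 + 1)."""
    out = np.zeros_like(frames)
    for i, frame in enumerate(frames):
        spec = np.fft.rfft(frame)
        mag, phase = np.abs(spec), np.angle(spec)

        frame_snr = 10 * np.log10(np.sum(mag**2) / (np.sum(noise_mag**2) + 1e-12))
        if frame_snr < vad_threshold_db:
            # Treat as unvoiced: update the running noise estimate.
            noise_mag = 0.9 * noise_mag + 0.1 * mag

        # Subtract more aggressively when the SNR is poor.
        alpha = np.clip(4.0 - 0.15 * frame_snr, 1.0, 6.0)
        clean = np.maximum(mag - alpha * noise_mag, floor * noise_mag)

        # Squelch low-amplitude residuals.
        clean[clean < floor * np.max(clean)] = 0.0

        out[i] = np.fft.irfft(clean * np.exp(1j * phase), n=frame.shape[0])
    return out, noise_mag
```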
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Alexandrov, B.
2014-12-01
The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, or the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones records a mixture of the sounds, and the goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. As in the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify the sources, barometric pressure and water-supply pumping effects, and estimate their impacts. We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
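A rough sketch of the NMFk idea described above, using scikit-learn's NMF and KMeans rather than the authors' own implementation: NMF is run many times for each candidate number of sources r, the extracted sources are pooled and clustered, and the cluster tightness (silhouette score) is used to pick r. Parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def nmfk(X, r_candidates=(2, 3, 4, 5), n_runs=20, seed=0):
    """X: non-negative array (n_observation_points, n_times) of mixed signals."""
    rng = np.random.RandomState(seed)
    results = {}
    for r in r_candidates:
        sources = []
        for _ in range(n_runs):
            model = NMF(n_components=r, init="random",
                        random_state=rng.randint(1 << 30), max_iter=1000)
            model.fit(X)
            # Normalize each extracted source so clustering compares shapes only.
            H = model.components_
            sources.append(H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12))
        S = np.vstack(sources)                      # (n_runs * r, n_times)
        labels = KMeans(n_clusters=r, n_init=10,
                        random_state=seed).fit_predict(S)
        results[r] = silhouette_score(S, labels)    # high = reproducible sources
    best_r = max(results, key=results.get)
    return best_r, results
```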
Creative reflections-the strategic use of reflections in multitrack music production
NASA Astrophysics Data System (ADS)
Case, Alexander
2005-09-01
There is a long tradition of deliberately capturing and even synthesizing early reflections to enhance the music intended for loudspeaker playback. The desire to improve or at least alter the quality, audibility, intelligibility, stereo width, and/or uniqueness of the audio signal guides the recording engineer's use of the recording space, influences their microphone selection and placement, and inspires countless signal-processing approaches. This paper reviews contemporary multitrack production techniques that specifically take advantage of reflected sound energy for musical benefit.
NASA Astrophysics Data System (ADS)
Harne, Ryan L.; Lynd, Danielle T.
2016-08-01
Fixed in spatial distribution, arrays of planar, electromechanical acoustic transducers cannot adapt their wave energy focusing abilities unless each transducer is externally controlled, creating challenges for the implementation and portability of such beamforming systems. Recently, planar, origami-based structural tessellations have been found to facilitate great versatility in system function and properties through kinematic folding. In this research we bridge the physics of acoustics and origami-based design to discover that simple topological reconfigurations of a Miura-ori-based acoustic array yield many orders of magnitude of reversible change in wave energy focusing: a potential for acoustic field morphing easily obtained through deployable, tessellated architectures. Our experimental and theoretical studies directly translate the roles of folding the tessellated array to the adaptations in spectral and spatial wave propagation sensitivities for far-field energy transmission. It is shown that kinematic folding rules and flat-foldable tessellated arrays collectively provide novel solutions to the long-standing challenges of conventional, electronically-steered acoustic beamformers. While our examples consider sound radiation from the foldable array in air, linear acoustic reciprocity dictates that the findings may inspire new innovations for acoustic receivers, e.g. adaptive sound absorbers and microphone arrays, as well as concepts that include water-borne waves.
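An illustrative sketch, not the authors' model, of why repositioning array elements through folding changes far-field focusing: the far-field pressure is approximated as a superposition of equally driven monopole sources, so moving the source positions directly changes the interference pattern. Geometry, frequency, and element counts are assumptions.

```python
import numpy as np

def farfield_pressure(source_xyz, freq, angles_deg, r_obs=10.0, c=343.0):
    """source_xyz: (n, 3) element positions [m]; returns |p| versus azimuth angle."""
    k = 2 * np.pi * freq / c
    angles = np.radians(angles_deg)
    # Observation points on a far-field arc in the x-y plane.
    obs = r_obs * np.stack([np.cos(angles), np.sin(angles),
                            np.zeros_like(angles)], axis=1)
    # Distance from every observation point to every source.
    d = np.linalg.norm(obs[:, None, :] - source_xyz[None, :, :], axis=2)
    p = np.sum(np.exp(-1j * k * d) / d, axis=1)   # equal, in-phase drive
    return np.abs(p)

# Example: a flat line of 8 elements versus a "folded" zig-zag of the same elements.
x = np.linspace(-0.1, 0.1, 8)
flat = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
folded = flat.copy()
folded[::2, 1] = 0.05   # alternate elements displaced, as folding would do
print(farfield_pressure(flat, 4000, [0, 30, 60, 90]).round(3))
print(farfield_pressure(folded, 4000, [0, 30, 60, 90]).round(3))
```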
DAMAS Processing for a Phased Array Study in the NASA Langley Jet Noise Laboratory
NASA Technical Reports Server (NTRS)
Brooks, Thomas F.; Humphreys, William M.; Plassman, Gerald E.
2010-01-01
A jet noise measurement study was conducted using a phased microphone array system for a range of jet nozzle configurations and flow conditions. The test effort included convergent and convergent/divergent single-flow nozzles, as well as conventional and chevron dual-flow core and fan configurations. Cold jets were tested with and without wind tunnel co-flow, whereas hot jets were tested only with co-flow. The intent of the measurement effort was to allow evaluation of new phased array technologies for their ability to separate and quantify distributions of jet noise sources. In the present paper, the array post-processing method focused upon is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) for the quantitative determination of spatial distributions of noise sources. Jet noise is highly complex, with stationary and convecting noise sources, convecting flows that are the sources themselves, and shock-related and screech noise for supersonic flow. The analysis presented in this paper addresses some processing details with DAMAS for the array positioned at 90 deg (normal) to the jet. The paper demonstrates the applicability of DAMAS and how it indicates when strong coherence is present. Also, a new approach to calibrating the array focus and position is introduced and demonstrated.
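A compact sketch of the DAMAS deconvolution step as commonly described in the literature: the conventional beamform map Y is modeled as the point-spread-function matrix A acting on the true source-strength vector X, and X is recovered by alternating forward and backward Gauss-Seidel sweeps with a non-negativity clamp. A and Y are assumed here to be built with the same steering vectors, so A has a unit diagonal; the iteration count is an assumption.

```python
import numpy as np

def damas(A, Y, n_iterations=100):
    """A: (n_grid, n_grid) point-spread-function matrix; Y: (n_grid,) beamform map."""
    X = np.zeros_like(Y)
    n = Y.shape[0]
    for _ in range(n_iterations):
        for i in range(n):                 # forward sweep
            r = Y[i] - A[i, :i] @ X[:i] - A[i, i + 1:] @ X[i + 1:]
            X[i] = max(r / A[i, i], 0.0)
        for i in range(n - 1, -1, -1):     # backward sweep
            r = Y[i] - A[i, :i] @ X[:i] - A[i, i + 1:] @ X[i + 1:]
            X[i] = max(r / A[i, i], 0.0)
    return X
```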
Analysis of noise from reusable solid rocket motor firings
NASA Astrophysics Data System (ADS)
Jerome, Trevor W.; Gee, Kent L.; Neilsen, Tracianne B.
2012-10-01
As part of investigations into the design of next-generation launch vehicles, near- and far-field data were collected during horizontal static firings of reusable solid rocket motors. Spatial variations of overall and one-third-octave-band pressure levels along sideline and polar-arc arrays are analyzed, along with spectra at individual microphone locations. The probability density functions show positively skewed pressure waveforms, and extreme skewness in the first-order estimate of the time derivative was found as a result of the presence of significant acoustic shocks.
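A small sketch of the two statistics mentioned above: the skewness of the pressure waveform itself and of a first-order time-derivative estimate (a simple forward difference), computed here with scipy.stats.skew. The forward-difference estimator is an assumption.

```python
import numpy as np
from scipy.stats import skew

def waveform_skewness(p, fs):
    """p: pressure time series [Pa]; fs: sample rate [Hz]."""
    dpdt = np.diff(p) * fs          # first-order estimate of the time derivative
    return skew(p), skew(dpdt)
```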
How hummingbirds hum: Acoustic holography of hummingbirds during maneuvering flight
NASA Astrophysics Data System (ADS)
Hightower, Ben; Wijnings, Patrick; Ingersoll, Rivers; Chin, Diana; Scholte, Rick; Lentink, David
2017-11-01
Hummingbirds make a characteristic humming sound when they flap their wings. The physics and the biological significance of hummingbird aeroacoustics is still poorly understood. We used acoustic holography and high-speed cameras to determine the acoustic field of six hummingbirds while they either hovered stationary in front of a flower or maneuvered to track flower motion. We used a robotic flower that oscillated either laterally or longitudinally with a linear combination of 20 different frequencies between 0.2 and 20 Hz, a range that encompasses natural flower vibration frequencies in wind. We used high-speed marker tracking to dissect the transfer function between the moving flower, the head, and body of the bird. We also positioned four acoustic arrays equipped with 2176 microphones total above, below, and in front of the hummingbird. Acoustic data from the microphones were back-propagated to planes adjacent to the hummingbird to create the first real-time holograms of the pressure field a hummingbird generates in vivo. Integration of all this data offers insight into how hummingbirds modulate the acoustic field during hovering and maneuvering flight.
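A simplified, generic sketch of planar acoustic field propagation in the spirit of the holography step described above, not the authors' processing chain: the complex pressure measured on a plane at one frequency is moved to a parallel plane a distance dz away using the angular-spectrum propagator. Evanescent components are discarded here for numerical stability, and the sign of the exponent depends on the chosen time convention.

```python
import numpy as np

def propagate_plane(p_meas, dx, dz, freq, c=343.0):
    """p_meas: complex pressure on an (ny, nx) grid with spacing dx [m]."""
    ny, nx = p_meas.shape
    k = 2 * np.pi * freq / c
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz_sq = k**2 - KX**2 - KY**2

    propagating = kz_sq > 0
    kz = np.sqrt(np.where(propagating, kz_sq, 0.0))

    P = np.fft.fft2(p_meas)
    # Shift the field by dz along the propagation axis; evanescent terms dropped.
    P_shifted = np.where(propagating, P * np.exp(1j * kz * dz), 0.0)
    return np.fft.ifft2(P_shifted)
```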
Coherent active methods for applications in room acoustics.
Guicking, D; Karcher, K; Rollwage, M
1985-10-01
An adjustment of reverberation time in rooms is often desired, even for low frequencies where passive absorbers fail. Among the active (electroacoustic) systems, incoherent ones permit lengthening of reverberation time only, whereas coherent active methods allow sound absorption as well. A coherent-active wall lining consists of loudspeakers with microphones in front and adjustable control electronics. The microphones pick up the incident sound and drive the speakers in such a way that the reflection coefficient takes on prescribed values. An experimental device for the one-dimensional case allows reflection coefficients between almost zero and about 1.5 to be realized below 1000 Hz. The extension to three dimensions presents problems, especially due to near-field effects. Experiments with a 3 × 3 loudspeaker array and computer simulations showed that the amplitude reflection coefficient can be adjusted between 10% and 200% for sinusoidal waves at normal and oblique incidence. Future developments have to make the system work with broadband excitation and in more diffuse sound fields. It is also planned to combine the active reverberation control with active diffusion control.
Schul, J; Matt, F; von Helversen, O
2000-01-01
The hearing range of the tettigoniid Phaneroptera falcata for the echolocation calls of freely flying mouse-eared bats (Myotis myotis) was determined in the field. The hearing of the insect was monitored using hook electrode recordings from an auditory interneuron, which is as sensitive as the hearing organ for frequencies above 16 kHz. The flight path of the bat relative to the insect's position was tracked by recording the echolocation calls with two microphone arrays, and calculating the bat's position from the arrival time differences of the calls at each microphone. The hearing distances ranged from 13 to 30 m. The large variability appeared both between different insects and between different bat approaches to an individual insect. The escape time of the bushcricket, calculated from the detection distance of the insect and the instantaneous flight speed of the bat, ranged from 1.5 to more than 4 s. The hearing ranges of bushcrickets suggest that the insect hears the approaching bat long before the bat can detect an echo from the flying insect. PMID:12233766
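A generic sketch of position estimation from arrival-time differences, in the spirit of the tracking described above but not the authors' exact method: given microphone coordinates and measured time differences of arrival relative to a reference microphone, the source position is found by nonlinear least squares. The starting guess is an assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_source(mic_xyz, tdoa, c=343.0):
    """mic_xyz: (n, 3) microphone positions [m]; tdoa: (n-1,) arrival-time
    differences [s] of microphones 1..n-1 relative to microphone 0."""
    def residuals(p):
        d = np.linalg.norm(mic_xyz - p, axis=1)
        return (d[1:] - d[0]) - c * np.asarray(tdoa)

    guess = mic_xyz.mean(axis=0) + np.array([0.0, 0.0, 1.0])
    return least_squares(residuals, guess).x
```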
Spriet, Ann; Van Deun, Lieselot; Eftaxiadis, Kyriaky; Laneau, Johan; Moonen, Marc; van Dijk, Bas; van Wieringen, Astrid; Wouters, Jan
2007-02-01
This paper evaluates the benefit of the two-microphone adaptive beamformer BEAM in the Nucleus Freedom cochlear implant (CI) system for speech understanding in background noise by CI users. A double-blind evaluation of the two-microphone adaptive beamformer BEAM and a hardware directional microphone was carried out with five adult Nucleus CI users. The test procedure consisted of a pre- and post-test in the lab and a 2-wk trial period at home. In the pre- and post-test, the speech reception threshold (SRT) with sentences and the percentage correct phoneme scores for CVC words were measured in quiet and background noise at different signal-to-noise ratios. Performance was assessed for two different noise configurations (with a single noise source and with three noise sources) and two different noise materials (stationary speech-weighted noise and multitalker babble). During the 2-wk trial period at home, the CI users evaluated the noise reduction performance in different listening conditions by means of the SSQ questionnaire. In addition to the perceptual evaluation, the noise reduction performance of the beamformer was measured physically as a function of the direction of the noise source. Significant improvements of both the SRT in noise (average improvement of 5-16 dB) and the percentage correct phoneme scores (average improvement of 10-41%) were observed with BEAM compared to the standard hardware directional microphone. In addition, the SSQ questionnaire and subjective evaluation in controlled and real-life scenarios suggested a possible preference for the beamformer in noisy environments. The evaluation demonstrates that the adaptive noise reduction algorithm BEAM in the Nucleus Freedom CI-system may significantly increase the speech perception by cochlear implantees in noisy listening conditions. This is the first monolateral (adaptive) noise reduction strategy actually implemented in a mainstream commercial CI.
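A generic two-microphone adaptive noise canceller using normalized LMS, shown only to illustrate the kind of adaptive processing evaluated above; it is not the BEAM algorithm implemented in the Nucleus Freedom system. The "primary" channel is assumed to carry speech plus noise and the "reference" channel to be noise-dominated; tap count and step size are assumptions.

```python
import numpy as np

def nlms_canceller(primary, reference, n_taps=64, mu=0.1, eps=1e-6):
    """Return the enhanced signal (primary minus the adaptively estimated noise)."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]        # most recent samples first
        noise_est = w @ x
        e = primary[n] - noise_est               # error doubles as the output
        w += mu * e * x / (x @ x + eps)          # normalized LMS update
        out[n] = e
    return out
```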
Theoretical and experimental study of a fiber optic microphone
NASA Technical Reports Server (NTRS)
Hu, Andong; Cuomo, Frank W.; Zuckerwar, Allan J.
1992-01-01
Modifications to condenser microphone theory yield new expressions for the membrane deflections at its center, which provide the basic theory for the fiber optic microphone. The theoretical analysis for the membrane amplitude and the phase response of the fiber optic microphone is given in detail in terms of its basic geometrical quantities. A relevant extension to the original concepts of the optical microphone includes the addition of a backplate with holes similar in design to present condenser microphone technology. This approach generates improved damping characteristics and extended frequency response that were not previously considered. The construction and testing of the improved optical fiber microphone provide experimental data that are in good agreement with the theoretical analysis.
Towards a sub 15-dBA optical micromachined microphone
Kim, Donghwan; Hall, Neal A.
2014-01-01
Micromachined microphones with grating-based optical-interferometric readout have been demonstrated previously. These microphones are similar in construction to bottom-inlet capacitive microelectromechanical-system (MEMS) microphones, with the exception that optoelectronic emitters and detectors are placed inside the microphone's front or back cavity. A potential advantage of optical microphones in designing for low noise level is the use of highly perforated microphone backplates to enable low damping and low thermal-mechanical noise levels. This work presents an experimental study of a microphone diaphragm and backplate designed for optical readout and low thermal-mechanical noise. The backplate is 1 mm × 1 mm, is fabricated in a 2-μm-thick epitaxial silicon layer of a silicon-on-insulator wafer, and contains a diffraction grating with 4-μm pitch etched at the center. The presented system has a measured thermal-mechanical noise level equal to 22.6 dBA. Through measurement of the electrostatic frequency response and measured noise spectra, a device model for the microphone system is verified. The model is in turn used to identify design paths towards MEMS microphones with sub 15-dBA noise floors. PMID:24815250
Truck acoustic data analyzer system
Haynes, Howard D.; Akerman, Alfred; Ayers, Curtis W.
2006-07-04
A passive vehicle acoustic data analyzer system having at least one microphone disposed in the acoustic field of a moving vehicle and a computer in electronic communication with the microphone(s). The computer detects and measures the frequency shift in the acoustic signature emitted by the vehicle as it approaches and passes the microphone(s). The acoustic signature of a truck driving by a microphone can provide enough information to estimate the truck speed in miles per hour (mph), engine speed in revolutions per minute (RPM), turbocharger speed in RPM, and vehicle weight.
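A small sketch of the frequency-shift idea described above: for a vehicle passing a stationary microphone, a tonal component (for example an engine firing harmonic) is heard at a higher frequency on approach and a lower frequency on recession, and the speed follows from the two measured frequencies. The example values are made up for illustration.

```python
def speed_from_doppler(f_approach, f_recede, c=343.0):
    """Return vehicle speed [m/s] from the approach/recede frequencies [Hz]."""
    return c * (f_approach - f_recede) / (f_approach + f_recede)

# Example: a harmonic measured at 105 Hz approaching and 95 Hz receding
# corresponds to roughly 17 m/s (about 38 mph).
print(speed_from_doppler(105.0, 95.0))
```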
NASA Technical Reports Server (NTRS)
Johnston, G. D.; Coleman, A. D.; Portwood, J. N.; Saunders, J. M.; Porter, A. J.
1985-01-01
Load-cell and acoustic responses indicate bonding condition nondestructively. The signal recorded by the load cell is a direct and instantaneous measure of the local stiffness of the material at the point of impact, a measurement separate and distinctly different from that sensed by the microphone. Spectrum analysis of a pulse obtained from a debonded point shows only frequencies below 425 Hz, because the insulation alone does not have the stiffness to support energy at higher frequencies.
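A minimal sketch of the spectral criterion stated above: if essentially all of the pulse energy lies below 425 Hz, the tap point behaves as if debonded. The 5% energy threshold is an assumption for illustration.

```python
import numpy as np

def looks_debonded(pulse, fs, cutoff_hz=425.0, energy_fraction=0.05):
    """Return True if the fraction of pulse energy above cutoff_hz is small."""
    spec = np.abs(np.fft.rfft(pulse)) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    high = spec[freqs >= cutoff_hz].sum()
    return high / spec.sum() < energy_fraction
```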
47 CFR 15.717 - TVBDs that rely on spectrum sensing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... over a 100 kHz bandwidth; (C) Low power auxiliary, including wireless microphone, signals: -107 dBm, averaged over a 200 kHz bandwidth. (ii) The detection thresholds are referenced to an omnidirectional receive antenna with a gain of 0 dBi. If a receive antenna with a minimum directional gain of less than 0...
47 CFR 15.717 - TVBDs that rely on spectrum sensing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... over a 100 kHz bandwidth; (C) Low power auxiliary, including wireless microphone, signals: -107 dBm, averaged over a 200 kHz bandwidth. (ii) The detection thresholds are referenced to an omnidirectional receive antenna with a gain of 0 dBi. If a receive antenna with a minimum directional gain of less than 0...
47 CFR 15.717 - TVBDs that rely on spectrum sensing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... over a 100 kHz bandwidth; (C) Low power auxiliary, including wireless microphone, signals: -107 dBm, averaged over a 200 kHz bandwidth. (ii) The detection thresholds are referenced to an omnidirectional receive antenna with a gain of 0 dBi. If a receive antenna with a minimum directional gain of less than 0...
47 CFR 15.717 - TVBDs that rely on spectrum sensing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... over a 100 kHz bandwidth; (C) Low power auxiliary, including wireless microphone, signals: -107 dBm, averaged over a 200 kHz bandwidth. (ii) The detection thresholds are referenced to an omnidirectional receive antenna with a gain of 0 dBi. If a receive antenna with a minimum directional gain of less than 0...
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2011-01-01
A previous investigation on the presence of direct and indirect combustion noise for a full-scale turbofan engine using a far-field microphone at 130 deg is extended by also examining signals obtained at two additional downstream directions using far-field microphones at 110 deg and 160 deg. A generalized cross-correlation function technique is used to study the change in propagation time to the far field of the combined direct and indirect combustion noise signal as a sequence of low-pass filters is applied. The filtering procedure used produces no phase distortion. As the low-pass filter frequency is decreased, the travel time increases because the relative amount of direct combustion noise is reduced. The indirect combustion noise signal travels more slowly because in the combustor entropy fluctuations move with the flow velocity, which is slow compared to the local speed of sound. The indirect combustion noise signal travels at acoustic velocities after reaching the turbine and being converted into an acoustic signal. The direct combustion noise is always propagating at acoustic velocities. The results show that the estimated indirect combustion noise time delay values (post-combustion residence times) measured at each angle are fairly consistent with one another for a relevant range of operating conditions and demonstrate source separation of a mixture of direct and indirect combustion noise. The results may lead to a better understanding of the acoustics in the combustor and may help develop and validate improved reduced-order physics-based methods for predicting turbofan engine core noise.
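A sketch of the processing idea described above: the signal pair is low-pass filtered with a zero-phase (forward-backward) filter so no phase distortion is introduced, and the propagation delay is read from the peak of the cross-correlation. The choice of an engine-internal reference paired with a far-field microphone, the filter order, and the plain (unweighted) correlation are assumptions, not the specific generalized cross-correlation weighting used by the author.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def delay_after_lowpass(x_reference, x_farfield, fs, cutoff_hz):
    """Estimate the delay [s] of the far-field signal relative to the reference."""
    b, a = butter(4, cutoff_hz / (fs / 2))
    x1 = filtfilt(b, a, x_reference)      # filtfilt gives zero phase distortion
    x2 = filtfilt(b, a, x_farfield)
    cc = correlate(x2, x1, mode="full")
    lags = np.arange(-len(x1) + 1, len(x2))
    return lags[np.argmax(cc)] / fs
```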
On the ability of consumer electronics microphones for environmental noise monitoring.
Van Renterghem, Timothy; Thomas, Pieter; Dominguez, Frederico; Dauwe, Samuel; Touhafi, Abdellah; Dhoedt, Bart; Botteldooren, Dick
2011-03-01
The massive production of microphones for consumer electronics, and the shift from dedicated processing hardware to PC-based systems, opens the way to build affordable, extensive noise measurement networks. Applications include noise-limit and urban-soundscape monitoring and the validation of calculated noise maps. Microphones are the critical components of such a network. Therefore, in a first step, some basic characteristics of 8 microphones, distributed over a wide range of price classes, were measured in a standardized way in an anechoic chamber. In a next step, a thorough evaluation was made of the ability of these microphones to be used for environmental noise monitoring. This was done during a continuous, half-year-long outdoor experiment characterized by a wide variety of meteorological conditions. While some microphones failed during the course of this test, it was shown that it is possible to identify cheap microphones that correlate highly with the reference microphone over the full test period. When the deviations are expressed as total A-weighted (road traffic) noise levels, values of less than 1 dBA are obtained, in excess of the deviation among the reference microphones themselves.
Hands-free device control using sound picked up in the ear canal
NASA Astrophysics Data System (ADS)
Chhatpar, Siddharth R.; Ngia, Lester; Vlach, Chris; Lin, Dong; Birkhimer, Craig; Juneja, Amit; Pruthi, Tarun; Hoffman, Orin; Lewis, Tristan
2008-04-01
Hands-free control of unmanned ground vehicles is essential for soldiers, bomb disposal squads, and first responders. Having their hands free for other equipment and tasks allows them to be safer and more mobile. Currently, the most successful hands-free control devices are speech-command based. However, these devices use external microphones, and in field environments, e.g., war zones and fire sites, their performance suffers because of loud ambient noise: typically above 90 dBA. This paper describes the development of technology using the ear as an output source that can provide excellent command recognition accuracy even in noisy environments. Instead of picking up speech radiating from the mouth, this technology detects speech transmitted internally through the ear canal. Discreet tongue movements also create air pressure changes within the ear canal, and can be used for stealth control. A patented earpiece was developed with a microphone pointed into the ear canal that captures these signals generated by tongue movements and speech. The signals are transmitted from the earpiece to an Ultra-Mobile Personal Computer (UMPC) through a wired connection. The UMPC processes the signals and utilizes them for device control. The processing can include command recognition, ambient noise cancellation, acoustic echo cancellation, and speech equalization. Successful control of an iRobot PackBot has been demonstrated with both speech (13 discrete commands) and tongue (5 discrete commands) signals. In preliminary tests, command recognition accuracy was 95% with speech control and 85% with tongue control.
Acoustic behavior of echolocating bats in complex environments
NASA Astrophysics Data System (ADS)
Moss, Cynthia; Ghose, Kaushik; Jensen, Marianne; Surlykke, Annemarie
2004-05-01
The echolocating bat controls the direction of its sonar beam, just as visually dominant animals control the movement of their eyes to foveate targets of interest. The sonar beam aim of the echolocating bat can therefore serve as an index of the animal's attention to objects in the environment. Until recently, spatial attention has not been studied in the context of echolocation, perhaps due to the difficulty in obtaining an objective measure. Here, we describe measurements of the bat's sonar beam aim, serving as an index of acoustic gaze and attention to objects, in tasks that require localization of obstacles and insect prey. Measurements of the bat's sonar beam aim are taken from microphone array recordings of vocal signals produced by a free-flying bat under experimentally controlled conditions. In some situations, the animal relies on spatial memory over reflected sounds, perhaps because its perceptual system cannot easily organize cascades of echoes from obstacles and prey. This highlights the complexity of the bat's orientation behavior, which can alternate between active sensing and spatial memory systems. The bat's use of spatial memory for orientation also will be addressed in this talk. [Work supported by NSF-IBN-0111973 and the Danish Research Council.]
Acoustic change detection algorithm using an FM radio
NASA Astrophysics Data System (ADS)
Goldman, Geoffrey H.; Wolfe, Owen
2012-06-01
The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times, then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing position or a large box being moved, using acoustic sources of opportunity. The algorithm is based on cross-correlating the acoustic signals measured by two microphones. The performance of the algorithm was demonstrated using data generated with a hand-held FM radio as the sound source and two microphones. The algorithm could detect a door being opened in a hallway.
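An illustrative sketch of the change-detection idea described above; the exact processing in the paper may differ. The cross-correlation between the two microphone signals is computed for a baseline interval and again later, and a drop in the similarity of the two cross-correlation functions flags a physical change along the propagation paths. All four recordings are assumed to have the same length, and the detection threshold is an assumption.

```python
import numpy as np
from scipy.signal import correlate

def change_metric(baseline_pair, test_pair):
    """Each argument is a tuple (mic1, mic2) of equal-length recordings."""
    def normalized_xcorr(pair):
        cc = correlate(pair[0], pair[1], mode="full")
        return cc / (np.linalg.norm(cc) + 1e-12)

    c0 = normalized_xcorr(baseline_pair)
    c1 = normalized_xcorr(test_pair)
    similarity = float(np.dot(c0, c1))
    return 1.0 - similarity    # near 0: unchanged scene, near 1: changed scene
```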
En route noise levels from propfan test assessment airplane
NASA Technical Reports Server (NTRS)
Garber, Donald P.; Willshire, William L., Jr.
1994-01-01
The en route noise test was designed to characterize propagation of propfan noise from cruise altitudes to the ground. In-flight measurements of propfan source levels and directional patterns were made by a chase plane flying in formation with the propfan test assessment (PTA) airplane. Ground noise measurements were taken during repeated flights over a distributed microphone array. The microphone array on the ground was used to provide ensemble-averaged estimates of mean flyover noise levels, establish confidence limits for those means, and measure propagation-induced noise variability. Even for identical nominal cruise conditions, peak sound levels for individual overflights varied substantially about the average, particularly when overflights were performed on different days. Large day-to-day variations in peak level measurements appeared to be caused by large day-to-day differences in propagation conditions and tended to obscure small variations arising from operating conditions. A parametric evaluation of the sensitivity of this prediction method to weather measurement and source level uncertainties was also performed. In general, predictions showed good agreement with measurements. However, the method was unable to predict short-term variability of ensemble-averaged data within individual overflights. Although variations in absorption appear to be the dominant factor in variations of peak sound levels recorded on the ground, accurate predictions of those levels require that a complete description of operational conditions be taken into account. The comprehensive and integrated methods presented in this paper have adequately predicted ground-measured sound levels. On average, peak sound levels were predicted within 3 dB for each of the three different cruise conditions.
Preliminary results from the White Sands Missile Range sonic boom propagation experiment
NASA Technical Reports Server (NTRS)
Willshire, William L., Jr.; Devilbiss, David W.
1992-01-01
Sonic boom bow shock amplitude and rise time statistics from a recent sonic boom propagation experiment are presented. Distributions of bow shock overpressure and rise time measured under different atmospheric turbulence conditions for the same test aircraft are quite different. The peak overpressure distributions are skewed positively, indicating a tendency for positive deviations from the mean to be larger than negative deviations. Standard deviations of overpressure distributions measured under moderate turbulence were 40 percent larger than those measured under low turbulence. As turbulence increased, the difference between the median and the mean increased, indicating increased positive overpressure deviations. The effect of turbulence was more readily seen in the rise time distributions. Under moderate turbulence conditions, the rise time distribution means were larger by a factor of 4 and the standard deviations were larger by a factor of 3 from the low turbulence values. These distribution changes resulted in a transition from a peaked appearance of the rise time distribution for the morning to a flattened appearance for the afternoon rise time distributions. The sonic boom propagation experiment consisted of flying three types of aircraft supersonically over a ground-based microphone array with concurrent measurements of turbulence and other meteorological data. The test aircraft were a T-38, an F-15, and an F-111, and they were flown at speeds of Mach 1.2 to 1.3, 30,000 feet above a 16 element, linear microphone array with an inter-element spacing of 200 ft. In two weeks of testing, 57 supersonic passes of the test aircraft were flown from early morning to late afternoon.
NASA Astrophysics Data System (ADS)
Anderson, J.; Johnson, J. B.; Arechiga, R. O.; Edens, H. E.; Thomas, R. J.
2011-12-01
We use radio-frequency (VHF) pulse locations mapped with the New Mexico Tech Lightning Mapping Array (LMA) to study the distribution of thunder sources in lightning channels. A least squares inversion is used to fit channel acoustic energy radiation with broadband (0.01 to 500 Hz) acoustic recordings using microphones deployed locally (< 10 km from the lightning). We model the thunder (acoustic) source as a superposition of line segments connecting the LMA VHF pulses. An optimum branching algorithm is used to reconstruct conductive channels delineated by VHF sources, which we discretize as a superposition of finely spaced (0.25 m) acoustic point sources. We consider total radiated thunder as a weighted superposition of acoustic waves from individual channels, each with a constant current along its length that is presumed to be proportional to acoustic energy density radiated per unit length. Merged channels are considered as a linear sum of current-carrying branches and radiate proportionally greater acoustic energy. Synthetic energy time series for a given microphone location are calculated for each independent channel. We then use a non-negative least squares inversion to solve for channel energy densities to match the energy time series determined from broadband acoustic recordings across a 4-station microphone network. Events analyzed by this method have so far included 300-1000 VHF sources, and correlations as high as 0.5 between synthetic and recorded thunder energy were obtained, despite the presence of wind noise and 10-30 m uncertainty in VHF source locations.
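A bare-bones sketch of the inversion step described above: each column of G holds the synthetic thunder-energy time series at the microphones for one reconstructed channel, d is the stacked measured energy time series, and a non-negative least squares solve returns one energy density per channel. Variable names and the stacking convention are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def channel_energies(G, d):
    """G: (n_samples_total, n_channels) synthetic energies;
    d: (n_samples_total,) measured energies stacked over the 4-station network."""
    x, residual_norm = nnls(G, d)
    return x, residual_norm
```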
NASA Technical Reports Server (NTRS)
Pickett, G. F.; Wells, R. A.; Love, R. A.
1977-01-01
A computer user's manual describing the operation and essential features of the Microphone Location Program is presented. The program determines microphone locations that ensure accurate and stable results from the equation system used to calculate modal structures. As part of the computational procedure, the condition number of the equation-system matrix is used as a first-order measure of the system's stability.
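A minimal illustration of the stability measure mentioned above: for a candidate set of microphone locations, build the modal-decomposition matrix and evaluate its condition number, with smaller values indicating a better-conditioned equation system. The circumferential-mode matrix used here is a hypothetical placeholder, not the program's actual formulation.

```python
import numpy as np

def condition_number(mic_angles_rad, mode_orders):
    """Rows: microphones; columns: circumferential modes exp(i*m*theta)."""
    A = np.exp(1j * np.outer(mic_angles_rad, mode_orders))
    return np.linalg.cond(A)

# Example: 8 evenly spaced microphones resolving modes m = -3..3 (condition number 1).
print(condition_number(np.linspace(0, 2 * np.pi, 8, endpoint=False),
                       np.arange(-3, 4)))
```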
Ruscetta, Melissa N; Palmer, Catherine V; Durrant, John D; Grayhack, Judith; Ryan, Carey
2007-10-01
The chief complaint of individuals with hearing impairment is difficulty hearing in noise, with directional microphones emerging as the most capable remediation. Our purpose was to determine the impact of directional microphones on localization disability and concurrent handicap. Fifty-seven individuals participated unaided and then in groups of 19, using omnidirectional microphones, directional microphones, or toggle-switch-equipped amplification. The outcome measure was a localization disabilities and handicaps questionnaire. Comparisons between the unaided group and the aided groups, and between the directional-microphone group and the other two aided groups, revealed no significant differences. None of the microphone schemes either increased or decreased self-perceived localization disability or handicap. Objective measures of localization ability are warranted, and if significant differences are noted, clinicians should caution patients when moving in their environments. If no significant objective differences exist, then in light of the subjective findings of this investigation, concern over decreases in quality of life and safety with directional microphones need not be considered.
Probe Microphone Measurements: 20 Years of Progress
Mueller, H. Gustav
2001-01-01
Probe-microphone testing was conducted in the laboratory as early as the 1940s (e.g., the classic work of Wiener and Ross, reported in 1946); however, it was not until the late 1970s that a “dispenser friendly” system was available for testing hearing aids in the real ear. In this case, the term “dispenser friendly” is used somewhat loosely. The 1970s equipment that I'm referring to was first described in a paper that was presented by Earl Harford, Ph.D. in September of 1979 at the International Ear Clinics' Symposium in Minneapolis. At this meeting, Earl reported on his clinical experiences of testing hearing aids in the real ear using a miniature (by 1979 standards) Knowles microphone. The microphone was coupled to an interfacing impedance-matching system (developed by David Preves, Ph.D., who at the time worked at Starkey Laboratories), which could be used with existing hearing aid analyzer systems (see Harford, 1980 for a review of this early work). Unlike today's probe tube microphone systems, this early method of clinical real-ear measurement involved putting the entire microphone (about 4 mm by 5 mm by 2 mm) in the ear canal down by the eardrum of the patient. If you think cerumen is a problem with probe-mic measurements today, you should have seen the condition of this microphone after a day's work! While this early instrumentation was a bit cumbersome, we quickly learned the advantages that probe-microphone measures provided in the fitting of hearing aids. We frequently ran into calibration and equalization problems, not to mention a yelp or two from the patient, but the resulting information was worth the trouble. Help soon arrived. In the early 1980s, the first computerized probe-tube microphone system, the Rastronics CCI-10 (developed in Denmark by Steen Rasmussen), entered the U.S. market (Nielsen and Rasmussen, 1984). This system had a silicone tube attached to the microphone (the transmission of sound through this tube was part of the calibration process), which eliminated the need to place the microphone itself in the ear canal. By early 1985, three or four different manufacturers had introduced this new type of computerized probe-microphone equipment, and this hearing aid verification procedure became part of the standard protocol for many audiology clinics. At this time, the POGO (Prescription Of Gain and Output) and Libby 1/3 prescriptive fitting methods were at the peak of their popularity, and a revised NAL (National Acoustic Laboratories) procedure was just being introduced. All three of these methods were based on functional gain, but insertion gain easily could be substituted, and therefore, manufacturers included calculation of these prescriptive targets as part of the probe-microphone equipment software. Audiologists, frustrated with the tedious and unreliable functional gain procedure they had been using, soon developed a fascination with matching real-ear results to prescriptive targets on a computer monitor. In some ways, not a lot has changed since those early days of probe-microphone measurements. Most people who use this equipment simply run a gain curve for a couple of inputs and see if it's close to the prescriptive target—something that could be accomplished using the equipment from 1985. Contrary to the predictions of many, probe-mic measures have not become the “standard hearing aid verification procedure” (Mueller and Strouse, 1995). There also has been little or no increase in the use of this equipment in recent years.
In 1998, I reported on a survey that was conducted by The Hearing Journal regarding the use of probe-microphone measures (Mueller, 1998). We first looked at what percent of people dispensing hearing aids own (or have immediate access to) probe-microphone equipment. Our results showed that 23% of hearing instrument specialists and 75% of audiologists have this equipment. Among audiologists, ownership varied among work settings: 91% for hospitals/clinics, 73% for audiologists working for physicians, and 69% for audiologists in private practice. But more importantly, and a bit puzzling, was the finding that nearly one half of the people who fit hearing aids and have access to this equipment seldom or never use it. I doubt that the use rate of probe-microphone equipment has changed much in the last three years, and if anything, I suspect it has gone down. Why do I say that? As programmable hearing aids have become the standard fitting in many clinics, it is tempting to become enamoured with the simulated gain curves on the fitting screen, somehow believing that this is what really is happening in the real ear. Additionally, some dispensers have been told that you can't do reliable probe-mic testing with modern hearing aids—this of course is not true, and we'll address this issue in the Frequently Asked Questions portion of this paper. The infrequent use of probe-mic testing among dispensers is discouraging, and let's hope that probe-mic equipment does not suffer the fate of the rowing machine stored in your garage. A lot has changed over the years with the equipment itself, and there are also expanded clinical applications and procedures. We have new manufacturers, procedures, acronyms, and noises. We have test procedures that allow us to accurately predict the output of a hearing aid in an infant's ear. We now have digital hearing aids, which provide us the opportunity to conduct real-ear measures of the effects of digital noise reduction, speech enhancement, adaptive feedback, expansion, and all the other features. Directional microphone hearing aids have grown in popularity, and what better way to assess their real-ear directivity than with probe-mic measures? The array of assistive listening devices has expanded, and so has the role of real-ear assessment of these products. And finally, with today's PC-based systems, we can program our hearing aids and simultaneously observe the resulting real-ear effects on the same fitting screen, or even conduct an automated target fitting using ear canal monitoring of the output. There have been a lot of changes, and we'll talk about all of them in this issue of Trends. PMID:25425897
Practical considerations for a second-order directional hearing aid microphone system
NASA Astrophysics Data System (ADS)
Thompson, Stephen C.
2003-04-01
First-order directional microphone systems for hearing aids have been available for several years. Such a system uses two microphones and has a theoretical maximum free-field directivity index (DI) of 6.0 dB. A second-order microphone system using three microphones could provide a theoretical increase in free-field DI to 9.5 dB. These theoretical maximum DI values assume that the microphones have exactly matched sensitivities at all frequencies of interest. In practice, the individual microphones in the hearing aid always have slightly different sensitivities. For the small microphone separation necessary to fit in a hearing aid, these sensitivity matching errors degrade the directivity from the theoretical values, especially at low frequencies. This paper shows that, for first-order systems, the directivity degradation due to sensitivity errors is relatively small. However, for second-order systems with practical microphone sensitivity matching specifications, the directivity degradation below 1 kHz is not tolerable. A hybrid-order directional system is proposed that uses first-order processing at low frequencies and second-order directional processing at higher frequencies. This hybrid system is suggested as an alternative that could provide an improved directivity index in the frequency regions that are important to speech intelligibility.
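A numerical sketch of the mismatch effect discussed above: the free-field directivity index of a simple two-microphone differential (cardioid-type) beamformer is computed versus frequency for a given sensitivity mismatch. The cardioid geometry, the internal delay equal to the acoustic transit time, and the 0.5 dB mismatch are illustrative assumptions; the paper's 6.0 dB first-order maximum refers to the optimal (hypercardioid) pattern rather than this cardioid example.

```python
import numpy as np

def first_order_di(freq, mismatch_db=0.5, spacing=0.01, c=343.0):
    """Directivity index [dB] of a cardioid-style two-microphone pair."""
    k = 2 * np.pi * freq / c
    g = 10 ** (mismatch_db / 20.0)               # rear-path gain error
    theta = np.linspace(0.0, np.pi, 721)

    # Front microphone minus delayed rear microphone; delay chosen for a cardioid.
    def response(th):
        return 1.0 - g * np.exp(-1j * k * spacing * (1.0 + np.cos(th)))

    h = response(theta)
    mean_power = 0.5 * np.trapz(np.abs(h) ** 2 * np.sin(theta), theta)
    return 10 * np.log10(np.abs(response(0.0)) ** 2 / mean_power)

# DI collapses toward 0 dB at low frequencies once the mismatch term dominates.
for f in (250, 500, 1000, 2000, 4000):
    print(f, round(first_order_di(f, mismatch_db=0.5), 1))
```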
Wagner, Randall P.; Guthrie, William F.
2015-01-01
The devices calibrated most frequently by the acoustical measurement services at the National Institute of Standards and Technology (NIST) over the 50-year period from 1963 to 2012 were one-inch condenser microphones of three specific standard types: LS1Pn, LS1Po, and WS1P. Due to its long history of providing calibrations of such microphones to customers, NIST is in a unique position to analyze data concerning the long-term stability of these devices. This long history has enabled NIST to acquire and aggregate a substantial amount of repeat calibration data for a large number of microphones that belong to various other standards and calibration laboratories. In addition to determining microphone sensitivities at the time of calibration, it is important to have confidence that the microphones do not typically undergo significant drift, as compared to the calibration uncertainty, during the periods between calibrations. For each of the three microphone types, an average drift rate and an approximate 95 % confidence interval were computed by two different statistical methods, and the results from the two methods were found to differ insignificantly in each case. These results apply to typical microphones of these types that are used in a suitable environment and handled with care. The 95 % confidence interval for the average drift rate was −0.004 dB/year to 0.003 dB/year for Type LS1Pn microphones, −0.016 dB/year to 0.008 dB/year for Type LS1Po microphones, and −0.004 dB/year to 0.018 dB/year for Type WS1P microphones. For each of these microphone types, the average drift rate is not significantly different from zero. This result is consistent with the performance expected of condenser microphones designed for use as transfer standards. In addition, the values that bound the confidence intervals are well within the limits specified for long-term stability in international standards. Even though these results show very good long-term stability historically for these microphone types, it is expected that periodic calibrations will always be done to track the calibration history of individual microphones and check for anomalies indicative of shifts in sensitivity. PMID:26958445
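A simple sketch of one way to estimate a per-microphone drift rate from repeat calibration data: an ordinary least squares line is fit to sensitivity (dB) versus calibration date, and an approximate 95 % confidence interval for the slope is taken from its standard error. This is generic regression for illustration, not necessarily either of the two statistical methods compared by the authors, and the example calibration values are made up.

```python
import numpy as np
from scipy import stats

def drift_rate(years, sensitivities_db):
    """years: calibration dates (decimal years); sensitivities_db: same length."""
    fit = stats.linregress(years, sensitivities_db)
    # 1.96 is a large-sample approximation; a t quantile is better for few points.
    half_width = 1.96 * fit.stderr
    return fit.slope, (fit.slope - half_width, fit.slope + half_width)

# Example with made-up repeat calibrations of one microphone:
slope, ci = drift_rate([1995.2, 1999.6, 2004.1, 2008.8, 2012.3],
                       [-26.012, -26.010, -26.013, -26.009, -26.011])
print(round(slope, 4), [round(v, 4) for v in ci])
```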