Sample records for sound source location

  1. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments, listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change; in the everyday world, however, listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about the auditory cues used for sound source location and about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based on acoustics alone. It is a multisystem process.
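
The hypothesized combination of head-centric acoustic cues with head-position information can be sketched in a few lines (a toy model, not the authors' analysis; the angle arithmetic and names are illustrative):

```python
def world_azimuth(head_relative_azimuth_deg, head_orientation_deg):
    """Combine a head-centric acoustic cue with the current head orientation
    to obtain a world-centric source azimuth (degrees, wrapped to [0, 360))."""
    return (head_relative_azimuth_deg + head_orientation_deg) % 360.0

# A source fixed at 200 deg in the room keeps the same world-centric estimate
# as the head turns, even though the head-centric cue itself changes:
for head in (0.0, 45.0, 90.0):
    cue = (200.0 - head) % 360.0   # what the auditory cues alone indicate
    assert world_azimuth(cue, head) == 200.0
```

If the head-position term is wrong or missing, a stationary source appears to rotate whenever the listener does, which is the illusion the experiments probe.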

  2. Noise Source Identification in a Reverberant Field Using Spherical Beamforming

    NASA Astrophysics Data System (ADS)

    Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang

    Identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of sound coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to an exterior sound field, reflections are added to the interior sound field. Therefore, source locations estimated by the conventional methods may contain unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.
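
The free-field assumption criticized above is the basis of conventional delay-and-sum beamforming. A minimal sketch of that conventional method (a generic two-microphone delay-and-sum, not the paper's spherical beamformer; geometry and names are illustrative):

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_direction, fs, c=343.0):
    """Free-field delay-and-sum beamformer: advance each channel by the
    plane-wave travel time toward `look_direction` (a unit vector) and
    average. Reverberant reflections violate exactly this free-field,
    single-plane-wave assumption."""
    out = np.zeros_like(signals[0], dtype=float)
    for sig, pos in zip(signals, mic_positions):
        lag = int(round(fs * np.dot(pos, look_direction) / c))
        out += np.roll(sig, -lag)
    return out / len(signals)

# Two mics 10 cm apart; a 500-Hz plane wave propagating along +x
fs = 16000
t = np.arange(1024) / fs
src = np.sin(2 * np.pi * 500.0 * t)
mics = [np.array([0.0, 0.0]), np.array([0.1, 0.0])]
prop = np.array([1.0, 0.0])
sigs = [np.roll(src, int(round(fs * np.dot(m, prop) / 343.0))) for m in mics]

steered = delay_and_sum(sigs, mics, prop, fs)     # aimed at the true direction
off_axis = delay_and_sum(sigs, mics, -prop, fs)   # aimed the wrong way
# Output power peaks when the steering direction matches the arrival direction
```

In a reverberant room each reflection adds another plane wave from a different direction, so this steered-power peak no longer points reliably at the source.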

  3. Ejectable underwater sound source recovery assembly

    NASA Technical Reports Server (NTRS)

    Irick, S. C. (Inventor)

    1974-01-01

    An underwater sound source is described that may be ejectably mounted on any mobile device that travels over water, to facilitate the location and recovery of the device when submerged. A length of flexible line maintains a connection between the mobile device and the sound source. During recovery, the sound source is located first. The assembly may be particularly useful in the recovery of spent rocket motors that bury themselves in the ocean floor upon impact.

  4. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
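
The antiphase-AM manipulation can be reproduced in a few lines (a sketch with assumed parameter values, not the authors' stimulus code):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, fm = 44100, 0.5, 4.0                 # sample rate, duration, AM rate (assumed)
t = np.arange(int(fs * dur)) / fs

env_a = 0.5 * (1 + np.sin(2 * np.pi * fm * t))           # envelope at source A
env_b = 0.5 * (1 + np.sin(2 * np.pi * fm * t + np.pi))   # 180 deg out of phase

src_a = rng.standard_normal(t.size) * env_a   # independently generated noise carriers
src_b = rng.standard_normal(t.size) * env_b

# Antiphase envelopes are complementary (they sum to a constant), so each
# source is the more intense one in alternating temporal regions -- the
# regions hypothesized to carry reliable interaural cues for that source.
frac_a_dominant = np.mean(env_a > env_b)
```

With in-phase envelopes (`env_b = env_a`) neither source ever dominates, which is the contrast the hypothesis predicts should matter.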

  5. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.

  6. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

    The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.
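
The lead-lag stimulus paradigm is simple to generate (a sketch; the 5-ms example delay and burst length are illustrative, not the study's exact stimuli):

```python
import numpy as np

def lead_lag_pair(delay_ms, fs=48000, burst_len=32):
    """Lead-lag stimulus: two copies of one brief broadband burst, the lag
    copy delayed by `delay_ms` to simulate a single reflection."""
    rng = np.random.default_rng(1)
    burst = rng.standard_normal(burst_len)
    n_delay = int(round(fs * delay_ms / 1000.0))
    n = burst_len + n_delay
    lead, lag = np.zeros(n), np.zeros(n)
    lead[:burst_len] = burst
    lag[n_delay:n_delay + burst_len] = burst
    return lead, lag

# Delay regimes reported in the abstract:
#   |delay| < ~0.4 ms   -> summing localization (phantom between the sources)
#   ~0.4 ms to ~10 ms   -> localization dominance (leading source wins)
#   > ~10 ms            -> echo threshold (lag heard as a separate event)
lead, lag = lead_lag_pair(5.0)    # 5 ms: the localization-dominance regime
```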

  7. Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology

    NASA Astrophysics Data System (ADS)

    Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya

    A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After the locations and signals of the virtual sources are estimated, the spatial sound at the selected listening point is constructed by convolving the corresponding acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as the virtual source representation is confirmed.

  8. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
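
Placing a phantom source between two loudspeakers of the 3/2 layout is conventionally done with an amplitude pan law; a minimal constant-power sketch (a generic textbook technique, not the dissertation's control system):

```python
import numpy as np

def constant_power_pan(angle, left_angle, right_angle):
    """Constant-power (sine/cosine) gains for a phantom source between an
    adjacent loudspeaker pair; angles in degrees."""
    frac = (angle - left_angle) / (right_angle - left_angle)  # 0..1 across the pair
    theta = frac * np.pi / 2
    return np.cos(theta), np.sin(theta)

gl, gr = constant_power_pan(0.0, -30.0, 30.0)   # centre of the ITU front pair
# Equal gains at the centre, and gl**2 + gr**2 == 1 at every pan position,
# so perceived loudness stays constant as the source moves.
```

The system described above layers distance and room-proximity control on top of such azimuth panning; this sketch covers only the azimuth part.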

  9. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
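
The logic of disambiguating the two coordinate frames can be sketched with a toy tuning curve: for a purely egocentric unit, world-coordinate tuning shifts with head direction, and re-aligning by that shift recovers the identical curve (illustrative numbers, not the ferret recordings):

```python
import numpy as np

angles = np.arange(360)   # world azimuth, degrees
# Head-centric receptive field: a Gaussian bump straight ahead (0 deg),
# wrapped onto the circle; the 30-deg width is an arbitrary choice
head_centric_rf = np.exp(-0.5 * (((angles + 180) % 360 - 180) / 30.0) ** 2)

def world_tuning_egocentric(head_direction_deg):
    """World-coordinate tuning of a purely egocentric unit: the head-centric
    receptive field simply shifts with head direction."""
    return np.roll(head_centric_rf, head_direction_deg)

r0 = world_tuning_egocentric(0)
r90 = world_tuning_egocentric(90)
# The curves disagree in world coordinates, but re-aligning by the head
# rotation recovers an identical curve -- the egocentric signature. An
# allocentric unit's world tuning would instead match without re-alignment.
```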

  10. Selective attention to sound location or pitch studied with event-related brain potentials and magnetic fields.

    PubMed

    Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo

    2008-06-01

    Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.
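
The Nd is simply a difference of trial-averaged ERPs; a minimal sketch with synthetic data (the window and amplitudes are illustrative, not the study's values):

```python
import numpy as np

def negative_difference(erp_attended, erp_unattended):
    """Nd: average ERP to sounds when attended minus average ERP to the same
    sounds when unattended (trials x time -> one value per time sample)."""
    return erp_attended.mean(axis=0) - erp_unattended.mean(axis=0)

rng = np.random.default_rng(2)
t = np.arange(300)                          # time samples (say, ms post-onset)
window = (t > 100) & (t < 250)              # assumed attention-effect window
unattended = rng.standard_normal((40, 300)) * 0.5
attended = rng.standard_normal((40, 300)) * 0.5 - 2.0 * window
nd = negative_difference(attended, unattended)
# nd is clearly negative inside the window and near zero outside it
```

The scalp-distribution comparison in the study then asks whether this difference wave has the same topography for location-based and pitch-based attention.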

  11. Evolutionary trends in directional hearing

    PubMed Central

    Carr, Catherine E.; Christensen-Dalsgaard, Jakob

    2016-01-01

    Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears and do not need to compute source location in the brain. Thus, their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850
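
For the interaural cues that birds are said to compute from, a common textbook approximation is the Woodworth spherical-head model of the interaural time difference (purely illustrative here; the default radius is a human-sized value, not a bird's):

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference for a distant source (azimuth 0 = straight ahead):
    ITD = (a/c) * (theta + sin(theta))."""
    az = np.radians(azimuth_deg)
    return (head_radius / c) * (az + np.sin(az))

print(round(woodworth_itd(90.0) * 1e6))   # about 656 microseconds fully lateral
```

A smaller head shrinks the available ITD range, which is one reason the acoustically coupled ears of lizards are an effective alternative to neural computation.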

  12. Cross-correlation, triangulation, and curved-wavefront focusing of coral reef sound using a bi-linear hydrophone array.

    PubMed

    Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L

    2015-01-01

    A seven-element, bi-linear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
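
The pair-wise cross-correlation step can be sketched as follows: the lag of the correlation peak between two channels gives the time difference of arrival (a synthetic transient, not reef data):

```python
import numpy as np

def tdoa_samples(x, y):
    """Lag (in samples) of the peak of the full cross-correlation; positive
    means channel y receives the sound later than channel x."""
    xc = np.correlate(y, x, mode="full")
    return int(np.argmax(xc)) - (len(x) - 1)

rng = np.random.default_rng(3)
snap = rng.standard_normal(200)                  # a snapping-shrimp-like transient
ch1 = np.concatenate([snap, np.zeros(100)])
ch2 = np.concatenate([np.zeros(60), snap, np.zeros(40)])  # arrives 60 samples later
```

Triangulating several such pair-wise lags gives the rough location that the focusing stage then refines.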

  13. The effect of spatial distribution on the annoyance caused by simultaneous sounds

    NASA Astrophysics Data System (ADS)

    Vos, Joos; Bronkhorst, Adelbert W.; Fedtke, Thomas

    2004-05-01

    A considerable part of the population is exposed to simultaneous and/or successive environmental sounds from different sources. In many cases, these sources also differ with respect to their locations. In a laboratory study, it was investigated whether the annoyance caused by multiple sounds is affected by the spatial distribution of the sources. There were four independent variables: (1) sound category (stationary or moving), (2) sound type (stationary: lawn-mower, leaf-blower, and chain saw; moving: road traffic, railway, and motorbike), (3) spatial location (left, right, and combinations), and (4) A-weighted sound exposure level (ASEL of single sources equal to 50, 60, or 70 dB). In addition to the individual sounds in isolation, various combinations of two or three different sources within each sound category and sound level were presented for rating. The annoyance was mainly determined by sound level and sound source type. In most cases there were neither significant main effects of spatial distribution nor significant interaction effects between spatial distribution and the other variables. It was concluded that for rating the spatially distributed sounds investigated, the noise dose can simply be determined by a summation of the levels for the left and right channels. [Work supported by CEU.]
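
The concluding summation rule is the standard energetic (power) combination of decibel levels; a minimal sketch:

```python
import math

def combined_level(levels_db):
    """Energetic (power) summation of levels in dB -- the simple noise-dose
    rule the study found adequate for the spatially distributed sources."""
    return 10.0 * math.log10(sum(10.0 ** (lv / 10.0) for lv in levels_db))

print(round(combined_level([70.0, 70.0]), 1))   # 73.0: two equal sources add 3 dB
print(round(combined_level([70.0, 60.0]), 1))   # 70.4: the quieter source adds little
```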

  14. Sound Source Localization Using Non-Conformal Surface Sound Field Transformation Based on Spherical Harmonic Wave Decomposition

    PubMed Central

    Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie

    2017-01-01

    Spherical microphone arrays have been paid increasing attention for their ability to locate a sound source with arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. The reconstruction surface and holography surface are conformal surfaces in the conventional sound field transformation based on generalized Fourier transform. When the sound source is on the cylindrical surface, it is difficult to locate by using spherical surface conformal transform. The non-conformal sound field transformation by making a transfer matrix based on spherical harmonic wave decomposition is proposed in this paper, which can achieve the transformation of a spherical surface into a cylindrical surface by using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, the experiment of sound source localization by using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by using the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065

  15. Estimation of multiple sound sources with data and model uncertainties using the EM and evidential EM algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme

    2016-01-01

    This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm known as the evidential EM (E2M) algorithm. Finally, both simulations and real experiments illustrate the advantage of using the EM algorithm in the case without uncertainty and the E2M algorithm in the case of uncertain measurements.
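
The E-step/M-step structure can be illustrated on a toy 1-D Gaussian mixture, where "locations" are means and "strengths" are weights (an analogy only; the paper's acoustic likelihood and the evidential E2M extension are not modelled here):

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])         # crude initialisation
    w = np.array([0.5, 0.5])
    sigma = np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: posterior responsibility of each latent source for each point
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma * w
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate strengths (weights), locations (means), spreads
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return mu, w

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-3.0, 0.5, 300), rng.normal(2.0, 0.5, 700)])
mu, w = em_two_gaussians(x)   # recovers means near -3 and 2, weights near 0.3/0.7
```

In the paper's setting the latent variables are the per-source signal contributions to each microphone pressure rather than cluster memberships, but the alternation is the same.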

  16. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)

    PubMed Central

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-01-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets with respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. The largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574

  17. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera).

    PubMed

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-06-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets with respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. The largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear.

  18. Quantitative measurement of pass-by noise radiated by vehicles running at high speeds

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin

    2011-03-01

    It has been a challenge to accurately locate and quantify pass-by noise radiated by running vehicles. A system composed of a microphone array is developed in our current work to address this problem. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise, and wind noise of a vehicle running at different speeds are successfully identified by this method.
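
Handling the Doppler effect starts from the moving-source relation between emitted and observed frequency; a minimal sketch (the speed and frequency are illustrative, and this is the textbook relation, not the paper's full de-Dopplerization):

```python
import numpy as np

def doppler_observed(f_source, v, angle_deg, c=343.0):
    """Frequency observed from a source moving at speed v (m/s); `angle_deg`
    is the angle between the velocity vector and the source-to-receiver line
    (0 = approaching head-on). Standard moving-source Doppler relation:
    f_obs = f_src / (1 - (v/c) * cos(angle))."""
    return f_source / (1.0 - (v / c) * np.cos(np.radians(angle_deg)))

# A 1 kHz source on a vehicle at 100 km/h (about 27.8 m/s):
approaching = doppler_observed(1000.0, 27.8, 0.0)    # raised pitch
receding = doppler_observed(1000.0, 27.8, 180.0)     # lowered pitch
```

Time-domain de-Dopplerization essentially inverts this time-varying shift by resampling each microphone signal along the known vehicle trajectory.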

  19. Localizing nearby sound sources in a classroom: Binaural room impulse responses

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara G.; Kopco, Norbert; Martin, Tara J.

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.

  20. Localizing nearby sound sources in a classroom: binaural room impulse responses.

    PubMed

    Shinn-Cunningham, Barbara G; Kopco, Norbert; Martin, Tara J

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
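
The reported ILD reduction can be illustrated with toy BRIRs: adding a diffuse tail of equal energy at the two ears shrinks the broadband ILD (illustrative numbers, not the classroom measurements):

```python
import numpy as np

def broadband_ild_db(brir_left, brir_right):
    """Broadband interaural level difference (left re right, in dB) from the
    total energy of each BRIR channel."""
    return 10.0 * np.log10(np.sum(brir_left ** 2) / np.sum(brir_right ** 2))

# Toy BRIRs: a direct sound favouring the left ear, with and without a
# diffuse reverberant tail carrying equal energy at the two ears
rng = np.random.default_rng(5)
anechoic_l, anechoic_r = np.array([1.0]), np.array([0.5])
reverb_l = np.concatenate([[1.0], rng.standard_normal(2000) * 0.02])
reverb_r = np.concatenate([[0.5], rng.standard_normal(2000) * 0.02])

# The direct sound alone gives ~6 dB of ILD; the equal-energy tail pulls the
# broadband ILD toward 0 dB, while the direct-sound ITD (the offset between
# the first peaks of the two channels) is untouched.
```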

  1. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization that has wide applicability with regard to source type, acoustic environment, and time waveform.

  2. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances, and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate the neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and the distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are the neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505
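
The core rendering step of the VAS method is convolution of a monaural source with the binaural impulse responses measured for one location and environment; a minimal sketch with toy responses (a pure delay and gain standing in for measured responses):

```python
import numpy as np

def render_vas(source, hrir_left, hrir_right):
    """Virtual auditory space rendering: convolve a monaural source with the
    left/right impulse responses measured for one azimuth/distance/room to
    obtain the binaural signals for that virtual position. (Monaural
    stimulation amounts to zeroing one channel before presentation.)"""
    return np.convolve(source, hrir_left), np.convolve(source, hrir_right)

rng = np.random.default_rng(6)
src = rng.standard_normal(1000)
# Toy impulse responses (illustrative values): the far ear receives the
# sound 20 samples later and attenuated
h_near = np.zeros(64); h_near[0] = 1.0
h_far = np.zeros(64);  h_far[20] = 0.6
left, right = render_vas(src, h_near, h_far)
```

Swapping in responses measured at a different azimuth, distance, or room changes the virtual location without moving any physical loudspeaker, which is what makes the cue manipulations described above efficient.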

  3. Design of laser monitoring and sound localization system

    NASA Astrophysics Data System (ADS)

    Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang

    2013-08-01

    In this paper, a novel design for a laser monitoring and sound localization system is proposed. It uses a laser to monitor and locate the position of indoor conversation. At present, most laser monitors in China, whether used in the laboratory or in instruments, use a photodiode or phototransistor as the detector. At the laser receivers of those devices, the light beams are adjusted so that only part of the photodiode or phototransistor window receives the beam. The reflection deviates from its original path because of the vibration of the monitored window, which shifts the imaging spot on the photodiode or phototransistor. However, this method is limited: it admits considerable stray light into the receiver, and it yields only a single photocurrent output. Therefore a new method based on a quadrant detector is proposed. It uses the relation of the optical integrals among the quadrants to locate the position of the imaging spot. This method can eliminate background disturbance and acquire two-dimensional spot-vibration data specifically. The principle of the whole system is as follows. Collimated laser beams are reflected from a window that vibrates in response to the sound source, so the reflected beams are modulated by the vibration source. These optical signals are collected by quadrant detectors and then processed by photoelectric converters and the corresponding circuits; the speech signals are eventually reconstructed. In addition, sound source localization is implemented by detecting three different reflected light beams simultaneously. An indoor mathematical model based on the principle of Time Difference Of Arrival (TDOA) is established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system is able to monitor an indoor sound source beyond 15 meters with high-quality speech reconstruction and to locate the sound source position accurately.
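
A minimal numerical sketch of the TDOA step described above. The geometry (three sensors on a 1 m right angle, a 3 m × 3 m search area) and the brute-force grid search are illustrative assumptions, not details from the paper:

```python
import math

# Hypothetical sensor positions (metres); the paper's actual geometry
# (three reflected laser spots) is not specified in the abstract.
SENSORS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
C = 343.0  # speed of sound in air, m/s

def tdoas_for(src):
    """TDOAs of a source at `src`, measured relative to sensor 0."""
    d = [math.dist(src, s) for s in SENSORS]
    return [(di - d[0]) / C for di in d[1:]]

def locate(measured, extent=3.0, step=0.01):
    """Brute-force grid search over a square area, minimising the
    squared TDOA residual; returns the best (x, y) in metres."""
    best, best_err = (0.0, 0.0), float("inf")
    steps = int(extent / step) + 1
    for ix in range(steps):
        for iy in range(steps):
            cand = (ix * step, iy * step)
            err = sum((m - t) ** 2
                      for m, t in zip(measured, tdoas_for(cand)))
            if err < best_err:
                best, best_err = cand, err
    return best
```

Feeding the model's own TDOAs back in recovers the source to within one grid step; a real system would replace the grid search with a closed-form hyperbolic solver or iterative least-squares refinement.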

  4. A Tool for Low Noise Procedures Design and Community Noise Impact Assessment: The Rotorcraft Noise Model (RNM)

    NASA Technical Reports Server (NTRS)

    Conner, David A.; Page, Juliet A.

    2002-01-01

    To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model, and the output results. 
Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.

  5. Comparison of sound reproduction using higher order loudspeakers and equivalent line arrays in free-field conditions.

    PubMed

    Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D

    2014-07-01

    Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to that of a circular array of higher order sources can be produced by an array of sources, each of which is a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
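
The phase-mode decomposition the abstract refers to can be written out explicitly; the following is the standard two-dimensional cylindrical-harmonic form (notation assumed here, not taken from the paper). An Nth-order source radiates

```latex
p(r,\phi,k) \;=\; \sum_{n=-N}^{N} A_n \, H^{(2)}_{n}(kr) \, e^{\mathrm{i} n \phi},
```

giving the 2N + 1 orthogonal patterns mentioned above. For each mode n, the phase of \(H^{(2)}_{n}(kr)\, e^{\mathrm{i} n \phi}\) advances with both r and φ, producing the spiral wave front that makes the mode appear to radiate from a shifted equivalent source position.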

  6. Complete data listings for CSEM soundings on Kilauea Volcano, Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kauahikaua, J.; Jackson, D.B.; Zablocki, C.J.

    1983-01-01

    This document contains complete data from a controlled-source electromagnetic (CSEM) sounding/mapping project at Kilauea volcano, Hawaii. The data were obtained at 46 locations about a fixed-location, horizontal, polygonal loop source in the summit area of the volcano. The data consist of magnetic field amplitudes and phases at excitation frequencies between 0.04 and 8 Hz. The vector components were measured in a cylindrical coordinate system centered on the loop source.

  7. On the Possible Detection of Lightning Storms by Elephants

    PubMed Central

    Kelley, Michael C.; Garstang, Michael

    2013-01-01

    Simple Summary: We use data similar to that taken by the International Monitoring System for the detection of nuclear explosions to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest to management and conservation. Abstract: Theoretical calculations suggest that sounds produced by thunderstorms and detected by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions at distances ≥100 km are at sound pressure levels equal to or greater than 6 × 10−3 Pa. Such sound pressure levels are well within the range of elephant hearing. Frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but could also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406

  8. Locating arbitrarily time-dependent sound sources in three dimensional space in real time.

    PubMed

    Wu, Sean F; Zhu, Na

    2010-08-01

    This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling of acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera is fabricated. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moves in space, even to a position behind the measurement microphones. Practical limitations of this method are discussed.
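
A sketch of the triangulation component under a far-field assumption, for the four-microphone layout the abstract describes (a reference microphone at the origin and one on each of three mutually orthogonal axes). The spacing value and the omission of the de-noising stage are simplifications, not details from the paper:

```python
import math

A = 0.5     # assumed spacing of the three axis microphones (m)
C = 343.0   # speed of sound, m/s

def direction_from_tdoas(t_x, t_y, t_z):
    """Far-field direction of arrival from the three TDOAs (each axis
    microphone's arrival time minus the origin microphone's).
    Returns a unit vector pointing from the array toward the source."""
    # A wavefront from direction u reaches the mic at (A, 0, 0) earlier
    # by A*u_x/C, so t_x = -A*u_x/C and u_x = -C*t_x/A (same for y, z).
    u = [-C * t / A for t in (t_x, t_y, t_z)]
    norm = math.sqrt(sum(v * v for v in u)) or 1.0
    return [v / norm for v in u]
```

With synthetic TDOAs for a source direction of (1/3, 2/3, 2/3), the function returns that unit vector; recovering range as well requires near-field wavefront curvature, which the paper's full hybrid method exploits.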

  9. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  10. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  11. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  12. Leak detection utilizing analog binaural (VLSI) techniques

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor)

    1995-01-01

    A detection method and system utilizing silicon models of the traveling wave structure of the human cochlea to spatially and temporally locate a specific sound source in the presence of high noise pandemonium. The detection system combines two-dimensional stereausis representations, which are output by at least three VLSI binaural hearing chips, to generate a three-dimensional stereausis representation including both binaural and spectral information which is then used to locate the sound source.

  13. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
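
A toy delay-and-sum beamformer illustrates how a microphone array measures sound field directionality. The 8-microphone circle, 16 kHz sample rate, and integer-sample delays below are simplifying assumptions for illustration (the actual SoundCompass uses 52 MEMS microphones with FPGA processing):

```python
import math

FS = 16_000   # sample rate (Hz); assumed for illustration
C = 343.0     # speed of sound (m/s)
# Assumed uniform circular array: 8 microphones, 5 cm radius
MICS = [(0.05 * math.cos(2 * math.pi * k / 8),
         0.05 * math.sin(2 * math.pi * k / 8)) for k in range(8)]

def _delays(azimuth_deg):
    """Per-microphone plane-wave delay, in whole samples, for a
    far-field source at the given azimuth."""
    az = math.radians(azimuth_deg)
    return [round(FS * (mx * math.cos(az) + my * math.sin(az)) / C)
            for mx, my in MICS]

def simulate(base, azimuth_deg):
    """Synthesise what each microphone records for a source signal
    `base` arriving from azimuth_deg (integer-sample delays only)."""
    n = len(base)
    return [[base[i - d] if 0 <= i - d < n else 0.0 for i in range(n)]
            for d in _delays(azimuth_deg)]

def steered_power(signals, azimuth_deg):
    """Delay-and-sum output power with the array steered at azimuth_deg."""
    n = len(signals[0])
    out = [0.0] * n
    for sig, d in zip(signals, _delays(azimuth_deg)):
        for i in range(n):
            if 0 <= i + d < n:
                out[i] += sig[i + d]
    return sum(v * v for v in out) / n

def localize(signals, step=5):
    """Scan the steering angle and return the azimuth of maximum power."""
    return max(range(0, 360, step), key=lambda a: steered_power(signals, a))
```

Scanning the steering angle and plotting `steered_power` against azimuth yields, in miniature, the kind of sound field directionality map the SoundCompass reports.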

  14. 3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment

    NASA Astrophysics Data System (ADS)

    Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil

    In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflection for adding reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are also included for increasing elevation perception capability. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources such as white noise, sound effects, speech, and music samples. It is shown from the tests that the degrees of perceived elevation by the proposed method are around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.

  15. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, recorded at close distance with lavalier microphones, with spatial correction applied by a loudspeaker rendering system. To track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.

  16. Short-Latency, Goal-Directed Movements of the Pinnae to Sounds That Produce Auditory Spatial Illusions

    PubMed Central

    McClaine, Elizabeth M.; Yin, Tom C. T.

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848

  17. Short-latency, goal-directed movements of the pinnae to sounds that produce auditory spatial illusions.

    PubMed

    Tollin, Daniel J; McClaine, Elizabeth M; Yin, Tom C T

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a "phantom" sound located between the sources. Consistent with localization dominance, for delays from 400 μs to approximately 10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (approximately 30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved.

  18. Localization of sound sources in a room with one microphone

    NASA Astrophysics Data System (ADS)

    Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre

    2017-08-01

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the differences between the signals received at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in a room that are occupied by a source. What is especially interesting about our solution is that it localizes the sound sources not only in the horizontal plane, but in full 3D coordinates inside the room.

  19. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  20. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
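
The azimuthal ITD variation the authors model can be illustrated with the classic Woodworth spherical-head approximation; this is a standard textbook formula assumed here in place of the authors' exact calculation, and the 8.75 cm head radius is likewise an assumed typical value:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Far-field ITD (seconds) for a spherical head, using Woodworth's
    approximation: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

def itd_slope(azimuth_deg, delta=0.5):
    """Finite-difference change in ITD per degree of azimuth; it is
    steepest at the midline and flattens toward the side."""
    return (woodworth_itd(azimuth_deg + delta)
            - woodworth_itd(azimuth_deg - delta)) / (2 * delta)
```

Because the slope of ITD with azimuth shrinks toward the side (it is proportional to 1 + cos θ in this model), a uniform just-noticeable difference in ITD maps onto progressively coarser angular acuity away from the midline, matching the pattern described above.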

  1. Advanced Systems for Monitoring Underwater Sounds

    NASA Technical Reports Server (NTRS)

    Lane, Michael; Van Meter, Steven; Gilmore, Richard Grant; Sommer, Keith

    2007-01-01

    The term "Passive Acoustic Monitoring System" (PAMS) describes a developmental sensing-and-data-acquisition system for recording underwater sounds. The sounds (more precisely, digitized and preprocessed versions from acoustic transducers) are subsequently analyzed by a combination of data processing and interpretation to identify and/or, in some cases, to locate the sources of those sounds. PAMS was originally designed to locate sources such as fish of species that one knows or seeks to identify. The PAMS unit could also be used to locate other sources, for example, marine life, human divers, and/or vessels. The underlying principles of passive acoustic sensing and analyzing acoustic-signal data in conjunction with temperature and salinity data are not new and not unique to PAMS. Part of the uniqueness of the PAMS design is that it is the first deep-sea instrumentation design to provide a capability for studying soniferous marine animals (especially fish) over the wide depth range described below. The uniqueness of PAMS also lies partly in a synergistic combination of advanced sensing, packaging, and data-processing design features with features adapted from proven marine instrumentation systems. This combination affords a versatility that enables adaptation to a variety of undersea missions using a variety of sensors. The interpretation of acoustic data can include visual inspection of power-spectrum plots for identification of spectral signatures of known biological species or artificial sources. Alternatively or in addition, data analysis could include determination of relative times of arrival of signals at different acoustic sensors arrayed at known locations. From these times of arrival, locations of acoustic sources (and errors in those locations) can be estimated. 
Estimates of relative locations of sources and sensors can be refined through analysis of the attenuation of sound in the intervening water in combination with water-temperature and salinity data acquired by instrumentation systems other than PAMS. A PAMS is packaged as a battery-powered unit, mated with external sensors, that can operate in the ocean at any depth from 2 m to 1 km. A PAMS includes a pressure housing, a deep-sea battery, a hydrophone (which is one of the mating external sensors), and an external monitor and keyboard box. In addition to acoustic transducers, external sensors can include temperature probes and, potentially, underwater cameras. The pressure housing contains a computer that includes a hard drive, DC-to-DC power converters, a post-amplifier board, a sound card, and a universal serial bus (USB) 4-port hub.

  2. Developing a system for blind acoustic source localization and separation

    NASA Astrophysics Data System (ADS)

    Kulkarni, Raghavendra

    This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activity inside the brain auditory structure that is correlated with tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation is concerned with applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated with tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.

  3. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.

  4. Effect of Blast Injury on Auditory Localization in Military Service Members.

    PubMed

    Kubli, Lina R; Brungart, Douglas; Northern, Jerry

    Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. 
However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.

  5. Dimensional feature weighting utilizing multiple kernel learning for single-channel talker location discrimination using the acoustic transfer function.

    PubMed

    Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo

    2013-02-01

    This paper presents a method for discriminating the location of a sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. This paper then proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining a base kernel for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by a support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.
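    The pipeline described in this abstract (cepstral features of the acoustic transfer function, per-dimension weighting, position classification) can be illustrated with a much-simplified sketch. The code below is not the paper's multiple-kernel-learning method: it uses a plain real cepstrum and a weighted nearest-template classifier as a stand-in for the SVM, and all names and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    N_CEPS = 12  # number of low-order cepstral coefficients kept (assumed)

    def real_cepstrum(signal, n_ceps=N_CEPS):
        """Low-order real cepstrum: inverse FFT of the log magnitude spectrum.
        The low-quefrency coefficients capture the spectral envelope, which
        carries the acoustic-transfer-function information."""
        spectrum = np.abs(np.fft.rfft(signal)) + 1e-12
        return np.fft.irfft(np.log(spectrum))[:n_ceps]

    def classify_position(ceps, templates, weights):
        """Weighted nearest-template classifier: a crude stand-in for the
        paper's SVM, with `weights` playing the role of the per-dimension
        feature weighting learned by multiple kernel learning."""
        dists = {pos: np.sum(weights * (ceps - t) ** 2)
                 for pos, t in templates.items()}
        return min(dists, key=dists.get)
    ```

    Two simulated "positions" (white noise passed through different one-tap filters, mimicking two transfer functions) separate cleanly in the first cepstral coefficient; in the paper the weights are learned jointly with the classifier rather than fixed.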

  6. Forced sound transmission through a finite-sized single leaf panel subject to a point source excitation.

    PubMed

    Wang, Chong

    2018-03-01

    In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite-sized panel. The focus is the forced sound transmission performance that predominates in the frequency range below the coincidence frequency. Given a point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that, in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel, as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite-sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal-incidence sound transmission loss expression for plane incident waves can be used if the distance d between the source and panel and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse-field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.
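    The regimes quoted above can be turned into a rough calculator. The mass-law expression and the -5 dB field-incidence correction below are textbook approximations, not the paper's derived formulas; only the d/S thresholds are taken from the abstract, and the intermediate regime is handled crudely.

    ```python
    import math

    RHO_C = 415.0  # characteristic impedance of air rho0*c, Pa*s/m (approx.)

    def mass_law_tl_normal(f, m):
        """Normal-incidence mass-law transmission loss in dB, for frequency
        f (Hz) and panel surface density m (kg/m^2)."""
        return 20 * math.log10(math.pi * f * m / RHO_C)

    def forced_tl_point_source(f, m, d, S):
        """Pick an approximating expression from the source/panel geometry,
        following the regimes quoted in the abstract (ratio r = d/S)."""
        r = d / S
        tl_n = mass_law_tl_normal(f, m)
        if r > 0.5:
            # source far from the panel: normal-incidence plane-wave result
            return tl_n
        # nearer sources: diffuse-field estimate via the common field-
        # incidence correction (an assumed stand-in, not the paper's result)
        return tl_n - 5.0
    ```

    For a 10 kg/m^2 panel at 1 kHz this gives roughly 37.6 dB in the far regime, about 5 dB less for near sources.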

  7. The Coast Artillery Journal. Volume 65, Number 4, October 1926

    DTIC Science & Technology

    1926-10-01

    sound. a. Sound location of airplanes by binaural observation in all antiaircraft regiments. b. Sound ranging on report of enemy guns, together with... Direction finding by binaural observation. [Subparagraphs 30 a and 30 c (1).] This applies to continuous sounds such as propeller noises. b. Point... impacts. 32. The so-called binaural sense is our means of sensing the direction of a sound source. When we hear a sound we judge the approximate

  8. Object localization using a biosonar beam: how opening your mouth improves localization.

    PubMed

    Arditi, G; Weiss, A J; Yovel, Y

    2015-08-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.

  10. Three dimensional volcano-acoustic source localization at Karymsky Volcano, Kamchatka, Russia

    NASA Astrophysics Data System (ADS)

    Rowell, Colin

We test two methods of 3-D acoustic source localization on volcanic explosions and small-scale jetting events at Karymsky Volcano, Kamchatka, Russia. Recent infrasound studies have provided evidence that volcanic jets produce low-frequency aerodynamic sound (jet noise) similar to that from man-made jet engines. Man-made jets are known to produce sound through turbulence along the jet axis, but discrimination of sources along the axis of a volcanic jet requires a network of sufficient topographic relief to attain resolution in the vertical dimension. At Karymsky Volcano, the topography of an eroded edifice adjacent to the active cone provided a platform for the atypical deployment of five infrasound sensors with intra-network relief of ~600 m in July 2012. A novel 3-D inverse localization method, srcLoc, is tested and compared against a more common grid-search semblance technique. Simulations using synthetic signals indicate that srcLoc is capable of determining vertical source locations for this network configuration to within ±150 m or better. However, srcLoc locations for explosions and jetting at Karymsky Volcano show a persistent overestimation of source elevation and underestimation of sound speed by an average of ~330 m and 25 m/s, respectively. The semblance method is able to produce more realistic source locations by fixing the sound speed to expected values of 335-340 m/s. The consistency of location errors for both explosions and jetting activity over a wide range of wind and temperature conditions points to the influence of topography. Explosion waveforms exhibit amplitude relationships and waveform distortion strikingly similar to those theorized by modeling studies of wave diffraction around the crater rim. We suggest delay of signals and apparent elevated source locations are due to altered raypaths and/or crater diffraction effects. 
Our results suggest the influence of topography in the vent region must be accounted for when attempting 3-D volcano acoustic source localization. Though the data presented here are insufficient to resolve noise sources for these jets, which are much smaller in scale than those of previous volcanic jet noise studies, similar techniques may be successfully applied to large volcanic jets in the future.
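    The grid-search semblance technique mentioned above can be sketched in a few lines: for each candidate source position, each trace is shifted by its predicted travel time and the coherence of the aligned traces is scored; the candidate with the highest semblance wins. This is a minimal illustration with assumed sound speed, sample rate, and array geometry, not the srcLoc implementation or the authors' processing.

    ```python
    import numpy as np

    C = 340.0   # assumed sound speed, m/s
    FS = 100.0  # assumed sample rate, Hz

    def semblance(aligned):
        """Semblance of time-aligned traces: energy of the stack divided by
        N times the total trace energy; equals 1 for identical traces."""
        num = np.sum(np.sum(aligned, axis=0) ** 2)
        den = aligned.shape[0] * np.sum(aligned ** 2)
        return num / den

    def locate(traces, stations, candidates):
        """Grid search: shift each trace by its predicted travel time to a
        candidate source and keep the candidate with maximum semblance."""
        best, best_s = None, -1.0
        for src in candidates:
            delays = [np.linalg.norm(st - src) / C for st in stations]
            shifts = np.round((np.array(delays) - min(delays)) * FS).astype(int)
            n = traces.shape[1] - max(shifts)
            aligned = np.stack([tr[s:s + n] for tr, s in zip(traces, shifts)])
            s_val = semblance(aligned)
            if s_val > best_s:
                best, best_s = src, s_val
        return best, best_s
    ```

    With synthetic pulses delayed by the true travel times, the correct candidate aligns the pulses and scores near 1, while a distant candidate leaves them misaligned.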

  11. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    PubMed Central

    Geangu, Elena; Quadrelli, Ermanno; Lewis, James W.; Macchi Cassia, Viola; Turati, Chiara

    2015-01-01

    Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds. PMID:25732377

  12. Acoustic device and method for measuring gas densities

    NASA Technical Reports Server (NTRS)

    Shakkottai, Parthasarathy (Inventor); Kwack, Eug Y. (Inventor); Back, Lloyd (Inventor)

    1992-01-01

    Density measurements can be made in a gas contained in a flow-through enclosure by measuring the sound pressure level at a receiver or microphone located near a dipole sound source which is driven at constant velocity amplitude at low frequencies. Analytical results, which are provided in terms of geometrical parameters, wave numbers, and sound source type for systems of this invention, agree well with published data. The relatively simple designs feature a transmitter transducer at the closed end of a small tube and a receiver transducer on the circumference of the small tube located a small distance away from the transmitter. The transmitter should be a dipole operated at low frequency, with the kL value preferably less than about 0.3.
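    If, as the abstract's operating principle implies, the sound pressure near a constant-velocity dipole scales linearly with gas density, then a density estimate follows directly from the measured SPL shift relative to a calibration gas. The sketch below rests entirely on that assumed proportionality; the reference density and the 20 dB-per-decade-of-pressure conversion are standard, but nothing here reproduces the patent's analytical results.

    ```python
    RHO_REF = 1.204  # assumed reference density (air at 20 C), kg/m^3

    def density_from_spl(spl_db, spl_ref_db):
        """Infer gas density from the SPL shift measured near a constant-
        velocity dipole, assuming pressure is proportional to density
        (so a 6.02 dB rise corresponds to a doubling of density)."""
        return RHO_REF * 10 ** ((spl_db - spl_ref_db) / 20)
    ```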

  13. Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences.

    PubMed

    Nilsson, Mats E; Schenkman, Bo N

    2016-02-01

    Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system's ability to suppress distracting sound reflections, whereas echolocation would benefit from "unsuppressing" these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted age-matched (mean age = 54 y), and 42 sighted young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group's superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also increased ability to unsuppress lagging clicks. This may be related to the blind person's experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people's experience may offset age-related decline in ITD sensitivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
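    The two binaural cues measured in this study can be computed from a two-channel recording in a straightforward way: the ILD as an energy ratio in decibels, the ITD from the lag of the cross-correlation peak. The sample rate and sign conventions below are assumptions for illustration, not the study's stimulus parameters.

    ```python
    import numpy as np

    FS = 48000  # assumed sample rate, Hz

    def ild_db(left, right):
        """Inter-aural level difference: energy ratio in dB
        (positive values mean the left ear receives more energy)."""
        return 10 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))

    def itd_seconds(left, right):
        """Inter-aural time difference from the cross-correlation peak
        (positive values mean the sound reaches the left ear first)."""
        corr = np.correlate(left, right, mode="full")
        lag = np.argmax(corr) - (len(right) - 1)  # delay of left re: right
        return -lag / FS
    ```

    For a click arriving 10 samples earlier and 6 dB louder at the left ear, these return an ITD of about 208 microseconds and an ILD of about 6 dB.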

  14. Independence of Echo-Threshold and Echo-Delay in the Barn Owl

    PubMed Central

    Nelson, Brian S.; Takahashi, Terry T.

    2008-01-01

    Despite their prevalence in nature, echoes are not perceived as events separate from the sounds arriving directly from an active source, until the echo's delay is long. We measured the head-saccades of barn owls and the responses of neurons in their auditory space-maps while presenting a long duration noise-burst and a simulated echo. Under this paradigm, there were two possible stimulus segments that could potentially signal the location of the echo. One was at the onset of the echo; the other, after the offset of the direct (leading) sound, when only the echo was present. By lengthening the echo's duration, independently of its delay, spikes and saccades were evoked by the source of the echo even at delays that normally evoked saccades to only the direct source. An echo's location thus appears to be signaled by the neural response evoked after the offset of the direct sound. PMID:18974886

  15. Database of Inlet and Exhaust Noise Shielding for Wedge-Shaped Airframe

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Clark, Lorenzo R.

    2001-01-01

    An experiment to measure the noise shielding of the blended wing body design concept was developed using a simplified wedge-shaped airframe. The experimental study was conducted in the Langley Anechoic Noise Research Facility. A wideband, omnidirectional sound source in a simulated engine nacelle was held at locations representative of a range of engine locations above the wing. The sound field around the model was measured with the airframe and source in place and with the source alone, using an array of microphones on a rotating hoop that is also translated along an axis parallel to the airframe axis. The insertion loss was determined from the difference between the two resulting contours. Although no attempt was made to simulate the noise characteristics of a particular engine, the broadband noise source radiated sound over a range of scaled frequencies encompassing 1 and 2 times the blade passage frequency representative of a large, high-bypass-ratio turbofan engine. The measured data show that significant shielding of the inlet-radiated noise is obtained in the area beneath and upstream of the model. The data show the sensitivity of insertion loss to engine location.

  16. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants

    PubMed Central

    Zheng, Yi; Godar, Shelly P.; Litovsky, Ruth Y.

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs) sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users. PMID:26288142

  18. Masking release by combined spatial and masker-fluctuation effects in the open sound field.

    PubMed

    Middlebrooks, John C

    2017-12-01

    In a complex auditory scene, signals of interest can be distinguished from masking sounds by differences in source location [spatial release from masking (SRM)] and by differences between masker-alone and masker-plus-signal envelopes. This study investigated interactions between those factors in release of masking of 700-Hz tones in an open sound field. Signal and masker sources were colocated in front of the listener, or the signal source was shifted 90° to the side. In Experiment 1, the masker contained a 25-Hz-wide on-signal band plus flanking bands having envelopes that were either mutually uncorrelated or were comodulated. Comodulation masking release (CMR) was largely independent of signal location at a higher masker sound level, but at a lower level CMR was reduced for the lateral signal location. In Experiment 2, a brief signal was positioned at the envelope maximum (peak) or minimum (dip) of a 50-Hz-wide on-signal masker. Masking was released in dip more than in peak conditions only for the 90° signal. Overall, open-field SRM was greater in magnitude than binaural masking release reported in comparable closed-field studies, and envelope-related release was somewhat weaker. Mutual enhancement of masking release by spatial and envelope-related effects tended to increase with increasing masker level.

  19. Experimental validation study of an analytical model of discrete frequency sound propagation in closed-test-section wind tunnels

    NASA Technical Reports Server (NTRS)

    Mosher, Marianne

    1990-01-01

    The principal objective is to assess the adequacy of linear acoustic theory with an impedance wall boundary condition to model the detailed sound field of an acoustic source in a duct. Measurements and calculations are compared of a simple acoustic source in a rectangular concrete duct lined with foam on the walls and anechoic end terminations. Measurement of acoustic pressure for twelve wave numbers provides variation in frequency and absorption characteristics of the duct walls. Close to the source, where the interference of wall reflections is minimal, correlation is very good. Away from the source, correlation degrades, especially for the lower frequencies. Sensitivity studies show little effect on the predicted results for changes in impedance boundary condition values, source location, measurement location, temperature, and source model for variations spanning the expected measurement error.

  20. Imaging of heart acoustic based on the sub-space methods using a microphone array.

    PubMed

    Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo

    2017-07-01

    Heart disease is one of the leading causes of death around the world. The phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of the heart, typically without any spatiotemporal information about the involved acoustic sources. The aim of this study is to analyze the PCG using a microphone array, by which the internal sound sources of the heart can be localized as well. This paper proposes a modality by which the locations of the active sources in the heart can be investigated during a cardiac cycle. A microphone array with six microphones is employed as the recording setup, placed on the human chest. The Group Delay MUSIC algorithm, a sub-space-based localization method, is then used to estimate the locations of the heart sources in different phases of the PCG. We achieved a 0.14 cm mean error for the sources of a first heart sound (S1) simulator and a 0.21 cm mean error for the sources of a second heart sound (S2) simulator with the Group Delay MUSIC algorithm. The acoustical diagrams created for human subjects show distinct patterns in various phases of the cardiac cycle, such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves match those obtained via 4-dimensional (4D) echocardiography applied to a real human case. Imaging of the heart's acoustic map presents a new way to characterize the acoustic properties of the cardiovascular system and disorders of its valves and thereby, in the future, could be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.
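    The sub-space idea behind MUSIC can be shown in a minimal narrowband sketch (plain MUSIC rather than the Group Delay variant used in the paper): the covariance matrix of the array signals is split into signal and noise subspaces, and steering vectors nearly orthogonal to the noise subspace produce peaks in the pseudo-spectrum. The linear array geometry, analysis frequency, and far-field source model below are illustrative assumptions, not the study's chest-array setup.

    ```python
    import numpy as np

    C = 343.0     # assumed speed of sound, m/s
    FREQ = 100.0  # assumed narrowband analysis frequency, Hz

    def music_spectrum(R, mic_pos, angles, n_sources):
        """Narrowband MUSIC pseudo-spectrum over candidate arrival angles
        for a far-field source and a linear microphone array."""
        # eigh returns eigenvalues in ascending order, so the smallest
        # ones span the noise subspace
        _, vecs = np.linalg.eigh(R)
        noise_sub = vecs[:, : R.shape[0] - n_sources]
        k = 2 * np.pi * FREQ / C
        spectrum = []
        for theta in angles:
            steering = np.exp(-1j * k * mic_pos * np.sin(theta))
            # steering vectors orthogonal to the noise subspace give peaks
            denom = np.linalg.norm(noise_sub.conj().T @ steering) ** 2
            spectrum.append(1.0 / (denom + 1e-12))
        return np.array(spectrum)
    ```

    Building a covariance matrix from a single simulated source direction and scanning a grid of angles recovers that direction at the pseudo-spectrum peak.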

  1. Basilar membrane vibration is not involved in the reverse propagation of otoacoustic emissions

    PubMed Central

    He, W.; Ren, T.

    2013-01-01

    To understand how the inner ear-generated sound, i.e., otoacoustic emission, exits the cochlea, we created a sound source electrically in the second turn and measured basilar membrane vibrations at two longitudinal locations in the first turn in living gerbil cochleae using a laser interferometer. For a given longitudinal location, electrically evoked basilar membrane vibrations showed the same tuning and phase lag as those induced by sounds. For a given frequency, the phase measured at a basal location led that at a more apical location, indicating that either an electrical or an acoustical stimulus evoked a forward travelling wave. Under postmortem conditions, the electrically evoked emissions showed no significant change while the basilar membrane vibration nearly disappeared. The current data indicate that basilar membrane vibration was not involved in the backward propagation of otoacoustic emissions and that sounds exit the cochlea probably through alternative media, such as cochlear fluids. PMID:23695199

  2. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers: Comparative study.

    PubMed

    Cambi, Jacopo; Livi, Ludovica; Livi, Walter

    2017-05-01

    Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions.

  3. Compression of auditory space during forward self-motion.

    PubMed

    Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti

    2012-01-01

    Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. 
The underlying mechanism might involve anticipatory spatial shifts in the auditory receptive field locations driven by afferent signals from vestibular system.

  4. Experimental assessment of theory for refraction of sound by a shear layer

    NASA Technical Reports Server (NTRS)

    Schlinker, R. H.; Amiet, R. K.

    1978-01-01

    The refraction angle and amplitude changes associated with sound transmission through a circular, open-jet shear layer were studied in a 0.91 m diameter open jet acoustic research tunnel. Free stream Mach number was varied from 0.1 to 0.4. Good agreement between refraction angle correction theory and experiment was obtained over the test Mach number, frequency and angle measurement range for all on-axis acoustic source locations. For off-axis source positions, good agreement was obtained at a source-to-shear layer separation distance greater than the jet radius. Measurable differences between theory and experiment occurred at a source-to-shear layer separation distance less than one jet radius. A shear layer turbulence scattering experiment was conducted at 90 deg to the open jet axis for the same free stream Mach numbers and axial source locations used in the refraction study. Significant discrete tone spectrum broadening and tone amplitude changes were observed at open jet Mach numbers above 0.2 and at acoustic source frequencies greater than 5 kHz. More severe turbulence scattering was observed for downstream source locations.

  5. Rotorcraft Noise Model

    NASA Technical Reports Server (NTRS)

    Lucas, Michael J.; Marcolini, Michael A.

    1997-01-01

    The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program being developed for NASA-Langley Research Center, which calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set of sound hemispheres provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format, and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.

  6. The influence of underwater data transmission sounds on the displacement behaviour of captive harbour seals (Phoca vitulina).

    PubMed

    Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V

    2006-02-01

    To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 microPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 microPa) of the sounds were calculated, using a guideline sound propagation model for shallow water. 
The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.
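    The last step of the study, converting a source level into a discomfort-zone radius, can be sketched numerically. This is a minimal illustration that substitutes a one-line k*log10(r) spreading law for the paper's guideline shallow-water propagation model; only the 107 dB re 1 microPa threshold and the 130-180 dB source levels come from the abstract.

```python
def discomfort_radius(source_level_db, threshold_db=107.0, k=15.0):
    """Range (m) at which received SPL drops to the discomfort threshold,
    assuming transmission loss TL = k * log10(r). k = 15 is a common
    'practical spreading' compromise between spherical (k = 20) and
    cylindrical (k = 10) spreading; the paper used a guideline
    shallow-water propagation model rather than this one-line law."""
    # SL - k*log10(r) = threshold  =>  r = 10 ** ((SL - threshold) / k)
    return 10.0 ** ((source_level_db - threshold_db) / k)

for sl in (130, 150, 180):
    print(f"source level {sl} dB re 1 uPa -> radius ~ {discomfort_radius(sl):,.0f} m")
```

Under this simplified law a 130 dB source yields a discomfort radius of a few tens of metres, while a 180 dB source extends it by three orders of magnitude, which is why the abstract stresses adapting the source level to each area.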

  7. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    PubMed

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They can detect displacement amplitudes on the order of the size of an atom and can locate acoustic stimuli to within about 13° thanks to their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.

  8. Determination of Jet Noise Radiation Source Locations using a Dual Sideline Cross-Correlation/Spectrum Technique

    NASA Technical Reports Server (NTRS)

    Allen, C. S.; Jaeger, S. M.

    1999-01-01

    The goal of our efforts is to extrapolate nearfield jet noise measurements to the geometric far field where the jet noise sources appear to radiate from a single point. To accomplish this, information about the location of noise sources in the jet plume, the radiation patterns of the noise sources and the sound pressure level distribution of the radiated field must be obtained. Since source locations and radiation patterns cannot be found with simple single microphone measurements, a more complicated method must be used.
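    The correlation step at the heart of such multi-microphone methods can be illustrated with a toy time-delay estimate between two sensors. This is a generic cross-correlation sketch, not the dual-sideline technique itself; the signal length, sample rate, and brute-force lag search are all illustrative.

```python
import random

def tdoa_crosscorr(sig_a, sig_b, fs):
    """Estimate the delay (s) of sig_b relative to sig_a from the peak of
    their full cross-correlation (brute force; fine for short records)."""
    n = len(sig_a)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        v = sum(sig_a[i] * sig_b[i + lag]
                for i in range(n) if 0 <= i + lag < n)
        if v > best_val:
            best_lag, best_val = lag, v
    return best_lag / fs

# Synthetic check: white noise delayed by 10 samples at 50 kHz.
random.seed(1)
fs, delay = 50_000, 10
x = [random.gauss(0.0, 1.0) for _ in range(256)]
y = [0.0] * delay + x[:-delay]          # x delayed by 10 samples
print(tdoa_crosscorr(x, y, fs) * 1e6, "microseconds")
```

With the delay recovered at two sideline arrays, triangulation back along the propagation paths gives an apparent source position in the plume.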

  9. Measurement of Correlation Between Flow Density, Velocity, and Density*velocity(sup 2) with Far Field Noise in High Speed Jets

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.

    2002-01-01

    To locate noise sources in high-speed jets, the sound pressure fluctuations p', measured at far field locations, were correlated with each of radial velocity v, density rho, and rho v(exp 2) fluctuations measured from various points in jet plumes. The experiments follow the cause-and-effect method of sound source identification, where the <rho' p'> correlation is related to the first, and the <(rho v(exp 2))' p'> correlation to the second, source term of Lighthill's equation. Three fully expanded, unheated plumes of Mach number 0.95, 1.4 and 1.8 were studied for this purpose. The velocity and density fluctuations were measured simultaneously using a recently developed, non-intrusive, point measurement technique based on molecular Rayleigh scattering. It was observed that along the jet centerline the density fluctuation spectra S(sub rho) have different shapes than the radial velocity spectra S(sub v), while data obtained from the peripheral shear layer show similarity between the two spectra. Density fluctuations in the jet showed significantly higher correlation with far field noise than either rho v(exp 2) or v fluctuations. It is found that a single point correlation from the peak sound emitting region at the end of the potential core can account for nearly 10% of all noise at 30 deg to the jet axis. The correlation representing the effectiveness of a longitudinal quadrupole in generating noise at 90 deg to the jet axis is found to be zero within experimental uncertainty. In contrast, rho v(exp 2) fluctuations were better correlated with sound pressure fluctuations at the 30 deg location. The strongest source of sound is found to lie at the centerline and beyond the end of the potential core.
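    For a single probe point and a single far-field microphone, the cause-and-effect measurement reduces to a normalized correlation coefficient between the flow quantity and the pressure record. A minimal pure-Python sketch on synthetic data (the 0.3 coupling coefficient and record length are made up; retarded-time alignment of the two records is assumed to have been done already):

```python
import math
import random

def norm_correlation(q, p):
    """Normalized correlation <q' p'> / (sigma_q * sigma_p) between a flow
    quantity sampled at one point and the far-field pressure record."""
    n = len(q)
    mq, mp = sum(q) / n, sum(p) / n
    cov = sum((a - mq) * (b - mp) for a, b in zip(q, p)) / n
    sq = math.sqrt(sum((a - mq) ** 2 for a in q) / n)
    sp = math.sqrt(sum((b - mp) ** 2 for b in p) / n)
    return cov / (sq * sp)

# Synthetic stand-in: far-field pressure carries a weak imprint of the
# density fluctuations plus uncorrelated noise (coefficients illustrative).
random.seed(0)
rho = [random.gauss(0.0, 1.0) for _ in range(2000)]
noise = [random.gauss(0.0, 1.0) for _ in range(2000)]
p = [0.3 * r + nz for r, nz in zip(rho, noise)]
print(round(norm_correlation(rho, p), 3))  # weak but clearly nonzero
```

A weak single-point coefficient of this kind is exactly why the abstract frames "nearly 10% of all noise" from one point as a notably strong result.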

  10. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
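    The core separation idea can be sketched for a static two-source, two-microphone toy case: solve the linear system relating equivalent sources of known location to the mixed measured pressures, then reconstruct one source's field from its strengths alone. This deliberately omits the time-domain interpolation and iterative solving of ITDESM; the static 1/(4*pi*r) monopole model and all coordinates are illustrative.

```python
import math

def greens(src, mic):
    """Static free-field monopole amplitude ~ 1/(4*pi*r); phase and time
    dependence are dropped to keep the sketch linear-algebra only."""
    return 1.0 / (4.0 * math.pi * math.dist(src, mic))

# Two "equivalent sources" at known locations but unknown strengths,
# observed by two microphones (coordinates in metres, illustrative).
sources = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]
mics = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
true_q = [2.0, -1.0]

# Forward model: measured pressure at each mic is the sum over sources.
p = [sum(greens(s, m) * q for s, q in zip(sources, true_q)) for m in mics]

# Inverse step: solve the 2x2 system G q = p for the strengths (Cramer's rule).
g = [[greens(s, m) for s in sources] for m in mics]
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
q0 = (p[0] * g[1][1] - g[0][1] * p[1]) / det
q1 = (g[0][0] * p[1] - p[0] * g[1][0]) / det

# Separation: the field of source 0 alone, reconstructed from its strength.
p_src0 = [greens(sources[0], m) * q0 for m in mics]
print(q0, q1)  # recovers true_q
```

In the paper's non-stationary setting the same inversion is carried out at every time step, with interpolation handling the propagation delays between sources and microphones.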

  11. Acoustic transducer in system for gas temperature measurement in gas turbine engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSilva, Upul P.; Claussen, Heiko

    An apparatus for controlling operation of a gas turbine engine including at least one acoustic transmitter/receiver device located on a flow path boundary structure. The acoustic transmitter/receiver device includes an elongated sound passage defined by a surface of revolution having opposing first and second ends and a central axis extending between the first and second ends, an acoustic sound source located at the first end, and an acoustic receiver located within the sound passage between the first and second ends. The boundary structure includes an opening extending from outside the boundary structure to the flow path, and the second end of the surface of revolution is affixed to the boundary structure at the opening for passage of acoustic signals between the sound passage and the flow path.

  12. The effects of experimentally induced conductive hearing loss on spectral and temporal aspects of sound transmission through the ear.

    PubMed

    Eric Lupo, J; Koka, Kanthaiah; Thornton, Jennifer L; Tollin, Daniel J

    2011-02-01

    Conductive hearing loss (CHL) is known to produce hearing deficits, including deficits in sound localization ability. The differences in sound intensities and timing experienced between the two tympanic membranes are important cues to sound localization (ILD and ITD, respectively). Although much is known about the effect of CHL on hearing levels, little investigation has been conducted into the actual impact of CHL on sound location cues. This study investigated effects of CHL induced by earplugs on cochlear microphonic (CM) amplitude and timing and their corresponding effect on the ILD and ITD location cues. Acoustic and CM measurements were made in 5 chinchillas before and after earplug insertion, and again after earplug removal using pure tones (500 Hz to 24 kHz). ILDs in the unoccluded condition demonstrated position and frequency dependence where peak far-lateral ILDs approached 30 dB for high frequencies. Unoccluded ear ITD cues demonstrated positional and frequency dependence with increased ITD cue for both decreasing frequency (±420 μs at 500 Hz, ±310 μs for 1-4 kHz) and increasingly lateral sound source locations. Occlusion of the ear canal with foam plugs resulted in a mild, frequency-dependent conductive hearing loss of 10-38 dB (mean 31 ± 3.9 dB) leading to a concomitant frequency dependent increase in ILDs at all source locations. The effective ITDs increased in a frequency dependent manner with ear occlusion as a direct result of the acoustic properties of the plugging material, the latter confirmed via acoustical measurements using a model ear canal with varying volumes of acoustic foam. Upon ear plugging with acoustic foam, a mild CHL is induced. Furthermore, the CHL induced by acoustic foam results in substantial changes in the magnitudes of both the ITD and ILD cues to sound location. Copyright © 2010 Elsevier B.V. All rights reserved.
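    The ITD cue characterized in this study is commonly approximated by Woodworth's spherical-head formula. The sketch below uses a nominal adult human head radius purely for illustration, not the chinchilla geometry measured here.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (s) for a distant source at a given
    azimuth, from Woodworth's spherical-head approximation:
        ITD = (a / c) * (theta + sin(theta)).
    The default radius is a nominal adult human value (an assumption,
    not the chinchilla geometry of the study); c is the speed of sound."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD {woodworth_itd(az) * 1e6:6.1f} us")
```

The monotonic growth of ITD with lateral angle is the positional dependence the unoccluded-ear measurements show; an earplug effectively lengthens one acoustic path, which is why the measured ITDs increased with occlusion.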


  14. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

    Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.

  15. Seismic and Biological Sources of Ambient Ocean Sound

    NASA Astrophysics Data System (ADS)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. 
Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.

  16. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1 and to compare the results with simulated leak sounds. A series of electronically generated structural ultrasonic noise sources were created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall at the source were unknown, but were estimated from the closest sensor measurement. The E-FEM method represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  17. Differential presence of anthropogenic compounds dissolved in the marine waters of Puget Sound, WA and Barkley Sound, BC.

    PubMed

    Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles

    2011-11-01

    Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters; vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.

  18. Consistent modelling of wind turbine noise propagation from source to receiver.

    PubMed

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.
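    The phrase "evaluated at the source time" refers to retarded-time bookkeeping: a sample emitted at t_src from a moving blade segment reaches the receiver at t_src + r/c. A minimal sketch of that mapping (the distances and times below are made up, not the paper's 5 MW turbine data):

```python
def receiver_arrival_times(source_times, distances, c=343.0):
    """Retarded-time bookkeeping: each sample emitted at t_src from a
    source at distance r arrives at the receiver at t_src + r / c."""
    return [t + r / c for t, r in zip(source_times, distances)]

# A blade segment drifting slightly closer to a 2 km receiver
# over one second (illustrative numbers).
times = [0.0, 0.25, 0.5, 0.75, 1.0]
dists = [2000.0, 1999.0, 1996.0, 1991.0, 1984.0]
for t_src, t_rec in zip(times, receiver_arrival_times(times, dists)):
    print(f"emitted {t_src:.2f} s -> received {t_rec:.3f} s")
```

Because the blade-receiver distance changes continuously, equal steps in source time arrive unevenly spaced at the receiver, which is part of the unsteadiness the quasi-3D propagation model has to capture.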


  1. Room temperature acoustic transducers for high-temperature thermometry

    NASA Astrophysics Data System (ADS)

    Ripple, D. C.; Murdock, W. E.; Strouse, G. F.; Gillis, K. A.; Moldover, M. R.

    2013-09-01

    We have successfully conducted highly accurate, primary acoustic thermometry at 600 K using a sound source and a sound detector located outside the thermostat, at room temperature. We describe the source, the detector, and the ducts that connected them to our cavity resonator. This transducer system preserved the purity of the argon gas, generated small, predictable perturbations to the acoustic resonance frequencies, and can be used well above 600 K.
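    Acoustic thermometry infers temperature from the speed of sound in the gas; for an ideal monatomic gas such as argon, c^2 = gamma*R*T/M. A simplified round-trip check at the 600 K operating point (real primary thermometry additionally applies virial and resonator corrections omitted here):

```python
import math

R = 8.314462618      # J mol^-1 K^-1, molar gas constant
GAMMA = 5.0 / 3.0    # heat-capacity ratio of a monatomic ideal gas
M_AR = 0.039948      # kg mol^-1, molar mass of argon

def temperature_from_sound_speed(c):
    """Ideal-gas acoustic thermometry: c**2 = GAMMA * R * T / M_AR, so
    T = c**2 * M_AR / (GAMMA * R)."""
    return c * c * M_AR / (GAMMA * R)

# Round-trip check at the 600 K operating point mentioned in the abstract.
c600 = math.sqrt(GAMMA * R * 600.0 / M_AR)   # ~456 m/s in argon
print(temperature_from_sound_speed(c600))     # -> 600.0 (to rounding)
```

In practice c is obtained from the measured acoustic resonance frequencies of the cavity and its known dimensions, which is why small, predictable perturbations from the transducer ducts matter so much.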

  2. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach.

    PubMed

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-03-22

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.
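    The sparse-recovery idea can be illustrated with a single greedy matching-pursuit step over a grid of candidate source locations; full compressive-sensing recovery iterates this on the residual to find further sources. The 1/(4*pi*r) propagation column and the sensor/grid geometry below are illustrative stand-ins, not the paper's models.

```python
import math

def propagation_column(src, sensors):
    """Predicted unit-strength received amplitude at each sensor for one
    candidate source location; simple 1/(4*pi*r) spreading stands in for
    the paper's propagation model."""
    return [1.0 / (4.0 * math.pi * math.dist(src, m)) for m in sensors]

def best_matching_source(measured, grid, sensors):
    """One greedy matching-pursuit step: pick the grid location whose
    propagation column best correlates with the measurements, together
    with the least-squares source strength for that column."""
    best_loc, best_strength, best_score = None, 0.0, -1.0
    for gpt in grid:
        col = propagation_column(gpt, sensors)
        cc = sum(a * a for a in col)                     # ||col||^2
        proj = sum(a * b for a, b in zip(col, measured))
        score = abs(proj) / math.sqrt(cc)                # match to unit column
        if score > best_score:
            best_loc, best_strength, best_score = gpt, proj / cc, score
    return best_loc, best_strength

# Four seabed sensors along a shipping lane and a coarse grid of candidate
# ship positions (coordinates in metres; geometry is illustrative).
sensors = [(x, 0.0, 50.0) for x in (0.0, 500.0, 1000.0, 1500.0)]
grid = [(float(x), 200.0, 5.0) for x in range(0, 1600, 100)]
true_src, true_q = (700.0, 200.0, 5.0), 3.0
measured = [true_q * v for v in propagation_column(true_src, sensors)]

loc, strength = best_matching_source(measured, grid, sensors)
print(loc, strength)  # locates the grid point at x = 700 m
```

The limited spatial resolution noted in the abstract shows up here as nearby grid columns being highly correlated with one another, even though aggregated quantities (e.g. total recovered level) remain well determined.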


  4. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers

    PubMed Central

    Cambi, Jacopo; Livi, Ludovica; Livi, Walter

    2017-01-01

    Objectives Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. Methods This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. Results The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P <0.0001). Conclusion Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions. PMID:28690888

  5. Location, location, location: finding a suitable home among the noise

    PubMed Central

    Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.

    2012-01-01

    While sound is a useful cue for guiding the onshore orientation of larvae because it travels long distances underwater, it also has the potential to convey valuable information about the quality and type of the habitat at the source. Here, we provide, to our knowledge, the first evidence that settlement-stage coastal crab species can interpret and show a strong settlement and metamorphosis response to habitat-related differences in natural underwater sound. Laboratory- and field-based experiments demonstrated that time to metamorphosis in the settlement-stage larvae of common coastal crab species varied in response to different underwater sound signatures produced by different habitat types. The megalopae of five species of both temperate and tropical crabs showed a significant decrease in time to metamorphosis, when exposed to sound from their optimal settlement habitat type compared with other habitat types. These results indicate that sounds emanating from specific underwater habitats may play a major role in determining spatial patterns of recruitment in coastal crab species. PMID:22673354

  6. Control of boundary layer transition location and plate vibration in the presence of an external acoustic field

    NASA Technical Reports Server (NTRS)

    Maestrello, L.; Grosveld, F. W.

    1991-01-01

    The experiment is aimed at controlling the boundary layer transition location and the plate vibration when excited by a flow and an upstream sound source. Sound has been found to affect the flow at the leading edge and the response of a flexible plate in a boundary layer. Because the sound induces early transition, the panel vibration is acoustically coupled to the turbulent boundary layer by the upstream radiation. Localized surface heating at the leading edge delays the transition location downstream of the flexible plate. The response of the plate excited by a turbulent boundary layer (without sound) shows that the plate is forced to vibrate at different frequencies and with different amplitudes as the flow velocity changes, indicating that the plate is driven by the convective waves of the boundary layer. The acoustic disturbances induced by the upstream sound dominate the response of the plate when the boundary layer is either turbulent or laminar. Active vibration control was used to reduce the sound-induced displacement amplitude of the plate.

  7. Spatial hearing ability of the pigmented Guinea pig (Cavia porcellus): Minimum audible angle and spatial release from masking in azimuth.

    PubMed

    Greene, Nathaniel T; Anbuhl, Kelsey L; Ferber, Alexander T; DeGuzman, Marisa; Allen, Paul D; Tollin, Daniel J

    2018-08-01

    Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have surprisingly not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on the prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the "prepulse") along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger changes in speaker swap angle (i.e., greater PPI), indicating that using the speaker "swap" paradigm is sufficient to assess stimulus detection of spatially separated sounds. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate their ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed an increase in startle amplitude (i.e., lower PPI) when the masker was presented at speaker locations near that of the chirp signal, and reduced startle amplitudes (increased PPI) indicating lower detection thresholds when the noise was presented from more distant speaker locations. 
In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
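As a rough illustration of the ITD-only synthesis condition described above (a sketch, not the Convolvotron processing; the `apply_itd` helper and the 0.7 ms value are assumptions for the example), a pure interaural delay can be applied to a noise burst like this:

```python
import numpy as np

def apply_itd(signal, itd_s, fs):
    """Render a mono signal over two channels using only an interaural
    time difference modeled as a pure delay (no level or spectral cues)."""
    delay = int(round(itd_s * fs))
    near = np.concatenate([signal, np.zeros(delay)])  # leading ear
    far = np.concatenate([np.zeros(delay), signal])   # delayed ear
    return np.stack([near, far])

fs = 44100
noise = np.random.default_rng(0).standard_normal(fs // 10)  # 100 ms burst
stereo = apply_itd(noise, itd_s=0.0007, fs=fs)  # ~0.7 ms, near the human maximum
```

Played over headphones, such a stimulus conveys lateral position reasonably well but, consistent with the findings above, tends not to externalize without level and spectral cues.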

  9. Nonlinear theory of shocked sound propagation in a nearly choked duct flow

    NASA Technical Reports Server (NTRS)

    Myers, M. K.; Callegari, A. J.

    1982-01-01

    The development of shocks in the sound field propagating through a nearly choked duct flow is analyzed by extending a quasi-one dimensional theory. The theory is applied to the case in which sound is introduced into the flow by an acoustic source located in the vicinity of a near-sonic throat. Analytical solutions for the field are obtained which illustrate the essential features of the nonlinear interaction between sound and flow. Numerical results are presented covering ranges of variation of source strength, throat Mach number, and frequency. It is found that the development of shocks leads to appreciable attenuation of acoustic power transmitted upstream through the near-sonic flow. It is possible, for example, that the power loss in the fundamental harmonic can be as much as 90% of that introduced at the source.

  10. How Nemo finds home: the neuroecology of dispersal and of population connectivity in larvae of marine fishes.

    PubMed

    Leis, Jeffrey M; Siebeck, Ulrike; Dixson, Danielle L

    2011-11-01

    Nearly all demersal teleost marine fishes have pelagic larval stages lasting from several days to several weeks, during which time they are subject to dispersal. Fish larvae have considerable swimming abilities, and swim in an oriented manner in the sea. Thus, they can influence their dispersal and thereby the connectivity of their populations. However, the sensory cues marine fish larvae use for orientation in the pelagic environment remain unclear. We review current understanding of these cues and how sensory abilities of larvae develop and are used to achieve orientation, with particular emphasis on coral-reef fishes. The use of sound is best understood; it travels well underwater with little attenuation, and is current-independent but location-dependent, so species that primarily utilize sound for orientation will have location-dependent orientation. Larvae of many species and families can hear over a range of ~100-1000 Hz, and can distinguish among sounds. They can localize sources of sounds, but the means by which they do so is unclear. Larvae can hear during much of their pelagic larval phase, and ontogenetically, hearing sensitivity and frequency range improve dramatically. Species differ in sensitivity to sound and in the rate of improvement in hearing during ontogeny. Due to large differences among species within families, no significant differences in hearing sensitivity among families have been identified. Thus, distances over which larvae can detect a given sound vary among species and greatly increase ontogenetically. Olfactory cues are current-dependent and location-dependent, so species that primarily utilize olfactory cues will have location-dependent orientation, but must be able to swim upstream to locate sources of odor. 
Larvae can detect odors (e.g., predators, conspecifics), during most of their pelagic phase, and at least on small scales, can localize sources of odors in shallow water, although whether they can do this in pelagic environments is unknown. Little is known of the ontogeny of olfactory ability or the range over which larvae can localize sources of odors. Imprinting on an odor has been shown in one species of reef-fish. Celestial cues are current- and location-independent, so species that primarily utilize them will have location-independent orientation that can apply over broad scales. Use of sun compass or polarized light for orientation by fish larvae is implied by some behaviors, but has not been proven. Use of neither magnetic fields nor direction of waves for orientation has been shown in marine fish larvae. We highlight research priorities in this area. © The Author 2011. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved.

  11. Developmental vision determines the reference frame for the multisensory control of action.

    PubMed

    Röder, Brigitte; Kusmierek, Anna; Spence, Charles; Schicke, Tobias

    2007-03-13

    Both animal and human studies suggest that action goals are defined in external coordinates regardless of their sensory modality. The present study used an auditory-manual task to test whether the default use of such an external reference frame is innately determined or instead acquired during development because of the increasing dominance of vision over manual control. In Experiment I, congenitally blind, late blind, and age-matched sighted adults had to press a left or right response key depending on the bandwidth of pink noise bursts presented from either the left or right loudspeaker. Although the spatial location of the sounds was entirely task-irrelevant, all groups responded more efficiently with uncrossed hands when the sound was presented from the same side as the responding hand ("Simon effect"). This effect reversed with crossed hands only in the congenitally blind: They responded faster with the hand that was located contralateral to the sound source. In Experiment II, the instruction to the participants was changed: They now had to respond with the hand located next to the sound source. In contrast to Experiment I ("Simon-task"), this task required an explicit matching of the sound's location with the position of the responding hand. In Experiment II, the congenitally blind participants showed a significantly larger crossing deficit than both the sighted and late blind adults. This pattern of results implies that developmental vision induces the default use of an external coordinate frame for multisensory action control; this facilitates not only visual but also auditory-manual control.

  12. The effects of spatially separated call components on phonotaxis in túngara frogs: evidence for auditory grouping.

    PubMed

    Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J

    2002-01-01

    Numerous animals across disparate taxa must identify and locate complex acoustic signals imbedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in which auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that once grouped the separate call components are weighted differently in recognizing and locating the call, so called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel

  13. Vector Acoustics, Vector Sensors, and 3D Underwater Imaging

    NASA Astrophysics Data System (ADS)

    Lindwall, D.

    2007-12-01

    Vector acoustic data has two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than with hydrophone data. A vector acoustic sensor measures the particle motions due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom and sides. Without resorting to the usual methods of seismic imaging, which in this case would be only two-dimensional and rely entirely on the use of a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
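The "simple trigonometric calculation" can be sketched as intersecting the measured direction-of-arrival ray with the ellipsoid of constant travel time whose foci are the source and the sensor. The `reflection_point` helper, the assumed sound speed, and the demo geometry below are illustrative assumptions, not the paper's actual processing:

```python
import numpy as np

C = 1482.0  # assumed nominal speed of sound in seawater (m/s)

def reflection_point(src, sensor, doa_unit, travel_time, c=C):
    """Locate a scatterer from a known source/sensor geometry, the
    direction of arrival measured at the sensor, and the total
    source -> target -> sensor travel time. The target lies on the
    ellipsoid with the source and sensor as foci; the DOA ray from
    the sensor picks out a single point on it."""
    v = np.asarray(src, float) - np.asarray(sensor, float)
    ct = c * travel_time
    # Solve d + |v - d*u| = c*t for the sensor-to-target range d.
    d = (ct ** 2 - v @ v) / (2.0 * (ct - v @ np.asarray(doa_unit, float)))
    return np.asarray(sensor, float) + d * np.asarray(doa_unit, float)

# Hypothetical demo: target at (1, 1, 0) m, sensor at (1, 0, 0) m.
src = np.array([0.0, 0.0, 0.0])
sensor = np.array([1.0, 0.0, 0.0])
target = np.array([1.0, 1.0, 0.0])
t_total = (np.linalg.norm(target - src) + np.linalg.norm(target - sensor)) / C
doa = (target - sensor) / np.linalg.norm(target - sensor)
p = reflection_point(src, sensor, doa, t_total)  # recovers (1, 1, 0)
```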

  14. Using sounds for making decisions: greater tube-nosed bats prefer antagonistic calls over non-communicative sounds when feeding

    PubMed Central

    Jiang, Tinglei; Long, Zhenyu; Ran, Xin; Zhao, Xue; Xu, Fei; Qiu, Fuyuan; Kanwal, Jagmeet S.

    2016-01-01

    Bats vocalize extensively within different social contexts. The type and extent of information conveyed via their vocalizations and their perceptual significance, however, remains controversial and difficult to assess. Greater tube-nosed bats, Murina leucogaster, emit calls consisting of long rectangular broadband noise burst (rBNBl) syllables during aggression between males. To experimentally test the behavioral impact of these sounds for feeding, we deployed an approach and place-preference paradigm. Two food trays were placed on opposite sides and within different acoustic microenvironments, created by sound playback, within a specially constructed tent. Specifically, we tested whether the presence of rBNBl sounds at a food source effectively deters the approach of male bats in comparison to echolocation sounds and white noise. In each case, contrary to our expectation, males preferred to feed at a location where rBNBl sounds were present. We propose that the species-specific rBNBl provides contextual information, not present within non-communicative sounds, to facilitate approach towards a food source. PMID:27815241

  15. Sound transmission in ducts containing nearly choked flows

    NASA Technical Reports Server (NTRS)

    Callegari, A. J.; Myers, M. K.

    1979-01-01

    The nonlinear theory previously developed by the authors (1977, 1978) is used to obtain numerical results for sound transmission through a nearly choked throat in a variable-area duct. Parametric studies are performed for different source locations, strengths and frequencies. It is shown that the nonlinear interactions in the throat region generate superharmonics of the fundamental (source) frequency throughout the duct. The amplitudes of these superharmonics increase as the source parameters (frequency and strength) are increased toward values leading to acoustic shocks. For a downstream source, superharmonics carry about 20% of the total acoustic power as shocking conditions are approached. For the source strength levels and frequencies considered, streaming effects are negligible.

  16. Refraction of sound by a shear layer - Experimental assessment

    NASA Technical Reports Server (NTRS)

    Schlinker, R. H.; Amiet, R. K.

    1979-01-01

    An experimental study was conducted to determine the refraction angle and amplitude changes associated with sound transmission through a circular, open jet shear layer. Both on-axis and off-axis acoustic source locations were used. Source frequency varied from 1 kHz to 10 kHz while freestream Mach number varied from 0.1 to 0.4. The experimental results were compared with an existing refraction theory which was extended to account for off-axis source positions. A simple experiment was also conducted to assess the importance of turbulence scattering between 1 kHz and 25 kHz.

  17. Evaluation of discrete frequency sound in closed-test-section wind tunnels

    NASA Technical Reports Server (NTRS)

    Mosher, Marianne

    1990-01-01

    The principal objective of this study is to assess the adequacy of linear acoustic theory with an impedance wall boundary condition for modeling the detailed sound field of an acoustic source in a duct. This study compares measurements and calculations of a simple acoustic source in a rectangular concrete duct lined with foam on the walls and anechoic end terminations. Measuring acoustic pressure for 12 wave numbers provides variation in frequency and absorption characteristics of the duct walls. The cases in this study contain low frequencies and low wall absorptions corresponding to measurements of low-frequency helicopter noise in a lined wind tunnel. This regime is particularly difficult to measure in wind tunnels due to high levels of the reverberant field relatively close to the source. Close to the source, where the interference of wall reflections is minimal, correlation is very good. Away from the source, correlation degrades, especially for the lower frequencies. Sensitivity studies show little effect on the predicted results for changes in impedance boundary condition values, source location, measurement location, temperature, and source model for variations spanning the expected measurement error.

  18. Direct-location versus verbal report methods for measuring auditory distance perception in the far field.

    PubMed

    Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O

    2018-06-01

    In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL) since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), which is the response method more frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates of the visual marker, which were found to be highly accurate. Then, we asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1 but with the methods interleaved, showing a weak but complex mutual influence; the estimates obtained with each method nevertheless remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.

  19. Limitations of Phased Array Beamforming in Open Rotor Noise Source Imaging

    NASA Technical Reports Server (NTRS)

    Horvath, Csaba; Envia, Edmane; Podboy, Gary G.

    2013-01-01

    Phased array beamforming results of the F31/A31 historical baseline counter-rotating open rotor blade set were investigated for measurement data taken on the NASA Counter-Rotating Open Rotor Propulsion Rig in the 9- by 15-Foot Low-Speed Wind Tunnel of NASA Glenn Research Center as well as data produced using the LINPROP open rotor tone noise code. The planar microphone array was positioned broadside and parallel to the axis of the open rotor, roughly 2.3 rotor diameters away. The results provide insight as to why the apparent noise sources of the blade passing frequency tones and interaction tones appear at their nominal Mach radii instead of at the actual noise sources, even if those locations are not on the blades. Contour maps corresponding to the sound fields produced by the radiating sound waves, taken from the simulations, are used to illustrate how the interaction patterns of circumferential spinning modes of rotating coherent noise sources interact with the phased array, often giving misleading results, as the apparent sources do not always show where the actual noise sources are located. This suggests that a more sophisticated source model would be required to accurately locate the sources of each tone. The results of this study also have implications with regard to the shielding of open rotor sources by airframe empennages.
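For reference, the conventional (delay-and-sum) beamforming that underlies such phased-array source maps can be sketched as below. The array geometry, sampling rate, and `delay_and_sum_map` helper are assumptions for illustration, not the processing actually applied to the wind-tunnel data:

```python
import numpy as np

def delay_and_sum_map(signals, mic_xy, fs, grid_xy, c=343.0):
    """Conventional time-domain delay-and-sum beamforming: for each
    candidate grid point, align the microphone signals by their
    propagation-delay differences and sum; the summed power peaks
    near a broadband, incoherent source."""
    power = np.zeros(len(grid_xy))
    for gi, g in enumerate(grid_xy):
        dists = np.linalg.norm(mic_xy - g, axis=1)
        shifts = np.round((dists - dists.min()) / c * fs).astype(int)
        n = signals.shape[1] - shifts.max()
        summed = sum(signals[m, s:s + n] for m, s in enumerate(shifts))
        power[gi] = np.mean(summed ** 2)
    return power

# Hypothetical demo: an 8-microphone line array and one broadband source.
fs = 50_000
mic_xy = np.stack([np.linspace(-0.5, 0.5, 8), np.zeros(8)], axis=1)
true_src = np.array([0.3, 2.0])
wave = np.random.default_rng(1).standard_normal(2048)
lags = np.round(np.linalg.norm(mic_xy - true_src, axis=1) / 343.0 * fs).astype(int)
signals = np.stack([wave[lags.max() - l: lags.max() - l + 1500] for l in lags])
grid = np.stack([np.linspace(-1.0, 1.0, 21), np.full(21, 2.0)], axis=1)
best = grid[np.argmax(delay_and_sum_map(signals, mic_xy, fs, grid))]
```

For incoherent broadband sources this map peaks at the true position; as the study above shows, rotating coherent sources such as open rotor tones violate that assumption, which is why the apparent sources appear at the Mach radii instead.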

  20. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.

  1. An Inexpensive and Versatile Version of Kundt's Tube for Measuring the Speed of Sound in Air

    NASA Astrophysics Data System (ADS)

    Papacosta, Pangratios; Linscheid, Nathan

    2016-01-01

    Experiments that measure the speed of sound in air are common in high schools and colleges. In the Kundt's tube experiment, a horizontal air column is adjusted until a resonance mode is achieved for a specific frequency of sound. When this happens, the cork dust in the tube is disturbed at the displacement antinode regions. The location of the displacement antinodes enables the measurement of the wavelength of the sound that is being used. This paper describes a design that uses a speaker instead of the traditional aluminum rod as the sound source. This allows the use of multiple sound frequencies, which yield a much more accurate value for the speed of sound in air.
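Since adjacent displacement antinodes are half a wavelength apart, the speed of sound follows from v = f·λ. A minimal sketch with made-up antinode positions (the numbers are hypothetical, not the paper's measurements):

```python
import numpy as np

def speed_of_sound(antinode_positions_m, frequency_hz):
    """Fit antinode position against antinode index; the slope of the
    fit is lambda/2, since adjacent displacement antinodes are half a
    wavelength apart, and then v = f * lambda."""
    idx = np.arange(len(antinode_positions_m))
    half_wavelength = np.polyfit(idx, antinode_positions_m, 1)[0]
    return frequency_hz * 2.0 * half_wavelength

# Hypothetical cork-dust antinode positions (m) for a 1 kHz tone.
v = speed_of_sound([0.050, 0.222, 0.393, 0.565], 1000.0)  # ≈ 343 m/s
```

Fitting all antinodes at once, and repeating at several frequencies, averages out reading errors in any single antinode position.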

  2. Separation of concurrent broadband sound sources by human listeners

    NASA Astrophysics Data System (ADS)

    Best, Virginia; van Schaik, André; Carlile, Simon

    2004-01-01

    The effect of spatial separation on the ability of human listeners to resolve a pair of concurrent broadband sounds was examined. Stimuli were presented in a virtual auditory environment using individualized outer ear filter functions. Subjects were presented with two simultaneous noise bursts that were either spatially coincident or separated (horizontally or vertically), and responded as to whether they perceived one or two source locations. Testing was carried out at five reference locations on the audiovisual horizon (0°, 22.5°, 45°, 67.5°, and 90° azimuth). Results from experiment 1 showed that at more lateral locations, a larger horizontal separation was required for the perception of two sounds. The reverse was true for vertical separation. Furthermore, it was observed that subjects were unable to separate stimulus pairs if they delivered the same interaural differences in time (ITD) and level (ILD). These findings suggested that the auditory system exploited differences in one or both of the binaural cues to resolve the sources, and could not use monaural spectral cues effectively for the task. In experiments 2 and 3, separation of concurrent noise sources was examined upon removal of low-frequency content (and ITDs), onset/offset ITDs, both of these in conjunction, and all ITD information. While onset and offset ITDs did not appear to play a major role, differences in ongoing ITDs were robust cues for separation under these conditions, including those in the envelopes of high-frequency channels.

  3. Active control of sound radiation from a vibrating rectangular panel by sound sources and vibration inputs - An experimental comparison

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.; Hansen, C. H.; Snyder, S. D.

    1991-01-01

    Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force does not significantly modify the total panel response amplitude but rather restructures the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.

  4. Working Towards Deep-Ocean Temperature Monitoring by Studying the Acoustic Ambient Noise Field in the South Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Sambell, K.; Evers, L. G.; Snellen, M.

    2017-12-01

    Deriving the deep-ocean temperature is a challenge: in-situ and satellite observations are of limited use at these depths. However, knowledge about changes in the deep-ocean temperature is important in relation to climate change. Oceans are filled with low-frequency sound waves created by sources such as underwater volcanoes, earthquakes and seismic surveys. The propagation of these sound waves is temperature dependent and therefore carries valuable information that can be used for temperature monitoring. This phenomenon is investigated by applying interferometry to hydroacoustic data measured in the South Pacific Ocean. The data are recorded at hydrophone station H03, which is part of the International Monitoring System (IMS). This network consists of several stations around the world and is in place for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The station consists of two arrays located north and south of Robinson Crusoe Island, separated by 50 km. Both arrays consist of three hydrophones with an intersensor distance of 2 km, located at a depth of 1200 m, within the SOFAR channel. Hydroacoustic data measured at the south station are cross-correlated for the time period 2014-2017. The results are improved by applying one-bit normalization as a preprocessing step. Furthermore, beamforming is applied to the hydroacoustic data in order to characterize ambient noise sources around the array. This shows the presence of a continuous source at a backazimuth between 180 and 200 degrees throughout the whole time period, which is in agreement with the results obtained by cross-correlation. Studies of source strength show a seasonal dependence, an indication that the sound is related to acoustic activity in Antarctica; this is supported by acoustic propagation modeling. The normal mode technique is used to study the sound propagation from possible source locations towards station H03.
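The preprocessing chain described (one-bit normalization followed by cross-correlation) can be sketched as follows; the `one_bit_xcorr` helper and the synthetic 40-sample delay are assumptions for illustration, not the H03 processing itself:

```python
import numpy as np

def one_bit_xcorr(a, b):
    """Cross-correlate two records after one-bit normalization
    (keeping only the sign of each sample), which suppresses
    high-amplitude transients before stacking."""
    a, b = np.sign(a), np.sign(b)
    lags = np.arange(-len(a) + 1, len(b))
    return lags, np.correlate(a, b, mode="full")

# Synthetic check: record b is record a delayed by 40 samples, so the
# correlation peak appears at a 40-sample lag.
rng = np.random.default_rng(2)
a = rng.standard_normal(4000)
b = np.roll(a, 40)
lags, corr = one_bit_xcorr(a, b)
peak_lag = int(lags[np.argmax(corr)])
```

In ambient-noise interferometry, such correlations stacked over long periods approximate the travel time between the two receivers, whose drift can then be tracked as temperature changes.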

  5. Study of the Acoustic Effects of Hydrokinetic Tidal Turbines in Admiralty Inlet, Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Polagye; Jim Thomson; Chris Bassett

    2012-03-30

    Hydrokinetic turbines will be a source of noise in the marine environment - both during operation and during installation/removal. High intensity sound can cause injury or behavioral changes in marine mammals and may also affect fish and invertebrates. These noise effects are, however, highly dependent on the individual marine animals; the intensity, frequency, and duration of the sound; and context in which the sound is received. In other words, production of sound is a necessary, but not sufficient, condition for an environmental impact. At a workshop on the environmental effects of tidal energy development, experts identified sound produced by turbines as an area of potentially significant impact, but also high uncertainty. The overall objectives of this project are to improve our understanding of the potential acoustic effects of tidal turbines by: (1) Characterizing sources of existing underwater noise; (2) Assessing the effectiveness of monitoring technologies to characterize underwater noise and marine mammal responsiveness to noise; (3) Evaluating the sound profile of an operating tidal turbine; and (4) Studying the effect of turbine sound on surrogate species in a laboratory environment. This study focuses on a specific case study for tidal energy development in Admiralty Inlet, Puget Sound, Washington (USA), but the methodologies and results are applicable to other turbine technologies and geographic locations. The project succeeded in achieving the above objectives and, in doing so, substantially contributed to the body of knowledge around the acoustic effects of tidal energy development in several ways: (1) Through collection of data from Admiralty Inlet, established the sources of sound generated by strong currents (mobilizations of sediment and gravel) and determined that low-frequency sound recorded during periods of strong currents is non-propagating pseudo-sound. 
This helped to advance the debate within the marine and hydrokinetics acoustic community as to whether strong currents produce propagating sound. (2) Analyzed data collected from a tidal turbine operating at the European Marine Energy Center to develop a profile of turbine sound and developed a framework to evaluate the acoustic effects of deploying similar devices in other locations. This framework has been applied to Public Utility District No. 1 of Snohomish County's demonstration project in Admiralty Inlet to inform post-installation acoustic and marine mammal monitoring plans. (3) Demonstrated passive acoustic techniques to characterize the ambient noise environment at tidal energy sites (fixed, long-term observations recommended) and characterize the sound from anthropogenic sources (drifting, short-term observations recommended). (4) Demonstrated the utility and limitations of instrumentation, including bottom mounted instrumentation packages, infrared cameras, and vessel monitoring systems. In doing so, also demonstrated how this type of comprehensive information is needed to interpret observations from each instrument (e.g., hydrophone data can be combined with vessel tracking data to evaluate the contribution of vessel sound to ambient noise). (5) Conducted a study that suggests harbor porpoise in Admiralty Inlet may be habituated to high levels of ambient noise due to omnipresent vessel traffic. The inability to detect behavioral changes associated with a high intensity source of opportunity (passenger ferry) has informed the approach for post-installation marine mammal monitoring. (6) Conducted laboratory exposure experiments of juvenile Chinook salmon and showed that exposure to a worse than worst case acoustic dose of turbine sound does not result in changes to hearing thresholds or biologically significant tissue damage.
Collectively, this means that Chinook salmon may be at a relatively low risk of injury from sound produced by tidal turbines located in or near their migration path. In achieving these accomplishments, the project has significantly advanced the District's goals of developing a demonstration-scale tidal energy project in Admiralty Inlet. Pilot demonstrations of this type are an essential step in the development of commercial-scale tidal energy in the United States. This is a renewable resource capable of producing electricity in a highly predictable manner.

  6. Cortical Reorganisation during a 30-Week Tinnitus Treatment Program

    PubMed Central

    McMahon, Catherine M.; Ibrahim, Ronny K.; Mathur, Ankit

    2016-01-01

    Subjective tinnitus is characterised by the conscious perception of a phantom sound. Previous studies have shown that individuals with chronic tinnitus have disrupted sound-evoked cortical tonotopic maps, time-shifted evoked auditory responses, and altered oscillatory cortical activity. The main objectives of this study were to: (i) compare sound-evoked brain responses and cortical tonotopic maps in individuals with bilateral tinnitus and those without tinnitus; and (ii) investigate whether changes in these sound-evoked responses occur with amelioration of the tinnitus percept during a 30-week tinnitus treatment program. Magnetoencephalography (MEG) recordings of 12 bilateral tinnitus participants and 10 control normal-hearing subjects reporting no tinnitus were obtained at baseline, using 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz tones presented monaurally at 70 dB SPL through insert tube phones. For the tinnitus participants, MEG recordings were obtained at 5-, 10-, 20-, and 30-week time points during tinnitus treatment. Results for the 500 Hz and 1000 Hz sources (where hearing thresholds were within normal limits for all participants) showed that the tinnitus participants had significantly larger and more anteriorly located source strengths when compared to the non-tinnitus participants. During the 30-week tinnitus treatment, the participants’ 500 Hz and 1000 Hz source strengths remained higher than those of the non-tinnitus participants; however, the source locations shifted towards the direction recorded from the non-tinnitus control group. Further, in the left hemisphere, there was a time-shifted association between the trajectory of change of the individual’s objective measures (source strength and anterior-posterior source location) and subjective measures (the tinnitus reaction questionnaire, TRQ).
The differences in source strength between the two groups suggest that individuals with tinnitus have enhanced central gain which is not significantly influenced by the tinnitus treatment, and may result from the hearing loss per se. On the other hand, the shifts in the tonotopic map towards the non-tinnitus participants’ source location suggest that the tinnitus treatment might reduce the disruptions in the map, presumably produced by the tinnitus percept directly or indirectly. Further, the similarity in the trajectory of change across the objective and subjective parameters after time-shifting the perceptual changes by 5 weeks suggests that during or following treatment, perceptual changes in the tinnitus percept may precede neurophysiological changes. Subgroup analyses conducted by magnitude of hearing loss suggest that there were no differences in the 500 Hz and 1000 Hz source strength amplitudes for the mild-moderate compared with the mild-severe hearing loss subgroup, although the mean source strength was consistently higher for the mild-severe subgroup. Further, the mild-severe subgroup had 500 Hz and 1000 Hz source locations located more anteriorly (i.e., more disrupted compared to the control group) compared to the mild-moderate group, although this was trending towards significance only for the 500 Hz left hemisphere source. While the small numbers of participants within the subgroup analyses reduce the statistical power, this study suggests that those with greater magnitudes of hearing loss show greater cortical disruptions with tinnitus and that tinnitus treatment appears to reduce the tonotopic map disruptions but not the source strength (or central gain). PMID:26901425

  7. Behavioral responses of a harbor porpoise (Phocoena phocoena) to playbacks of broadband pile driving sounds.

    PubMed

    Kastelein, Ronald A; van Heerden, Dorianne; Gransier, Robin; Hoek, Lean

    2013-12-01

    The high underwater sound pressure levels (SPLs) produced during pile driving to build offshore wind turbines may affect harbor porpoises. To estimate the discomfort threshold of pile driving sounds, a porpoise in a quiet pool was exposed to playbacks (46 strikes/min) at five SPLs (6 dB steps: 130-154 dB re 1 μPa). The spectrum of the impulsive sound resembled the spectrum of pile driving sound at tens of kilometers from the pile driving location in shallow water such as that found in the North Sea. The animal's behavior during test and baseline periods was compared. At and above a received broadband SPL of 136 dB re 1 μPa [zero-to-peak sound pressure level: 151 dB re 1 μPa; t90: 126 ms; sound exposure level of a single strike (SELss): 127 dB re 1 μPa² s] the porpoise's respiration rate increased in response to the pile driving sounds. At higher levels, he also jumped out of the water more often. Wild porpoises are expected to move tens of kilometers away from offshore pile driving locations; response distances will vary with context, the sounds' source level, parameters influencing sound propagation, and background noise levels. Copyright © 2013 Elsevier Ltd. All rights reserved.
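    The bracketed metrics above are mutually consistent: for an impulsive strike, the single-strike sound exposure level follows from the broadband SPL over the 90%-energy duration as SELss ≈ SPL + 10·log10(t90). A minimal numerical check (the function name is ours, and the small 90%-energy-fraction correction is neglected):

```python
import math

def sel_single_strike(spl_db, t90_s):
    """Approximate single-strike sound exposure level (dB re 1 uPa^2 s)
    from the broadband SPL (dB re 1 uPa) measured over the 90%-energy
    duration t90 (seconds): SEL = SPL + 10*log10(t90)."""
    return spl_db + 10.0 * math.log10(t90_s)

# Values reported in the study: SPL = 136 dB re 1 uPa, t90 = 126 ms
print(round(sel_single_strike(136.0, 0.126)))  # → 127 (dB re 1 uPa^2 s)
```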

  8. Spatial hearing in Cope’s gray treefrog: I. Open and closed loop experiments on sound localization in the presence and absence of noise

    PubMed Central

    Caldwell, Michael S.; Bee, Mark A.

    2014-01-01

    The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182

  9. Sound source localization on an axial fan at different operating points

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes

    2016-08-01

    A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.

  10. Underwater noise pollution in a coastal tropical environment.

    PubMed

    Bittencourt, L; Carvalho, R R; Lailson-Brito, J; Azevedo, A F

    2014-06-15

    Underwater noise pollution has become a major concern in marine habitats. Guanabara Bay, southeastern Brazil, is an impacted area of economic importance with constant vessel traffic. One hundred acoustic recording sessions took place over ten locations. Sound sources operating within a 1 km radius of each location were quantified during recordings. The highest mean sound pressure level near the surface was 111.56±9.0 dB re 1 μPa at the frequency band of 187 Hz. Above 15 kHz, the highest mean sound pressure level was 76.21±8.3 dB re 1 μPa at the frequency 15.89 kHz. Noise levels correlated with the number of operating vessels, and vessel traffic composition influenced noise profiles. Shipping locations had the highest noise levels, while small-vessel locations had the lowest. Guanabara Bay showed noise pollution similar to that of other impacted coastal regions, which is related to shipping and vessel traffic. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Emission of Sound from Turbulence Convected by a Parallel Mean Flow in the Presence of a Confining Duct

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.; Leib, Stewart J.

    1999-01-01

    An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in a transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.

  13. Optimum sensor placement for microphone arrays

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.

    Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. 
The shape and gain advantage of the capture region for these techniques are described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
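    The first stage of the two-step localization method, TDOA estimation from the cross-power spectrum phase, can be sketched as follows. This is a generic estimator in the spirit of the Omologo-Svaizer expression (essentially GCC-PHAT), not the modified version developed in the work; all names and parameter values are illustrative:

```python
import numpy as np

def tdoa_csp(x1, x2, fs):
    """Estimate the time delay of arrival between two microphone signals
    using the cross-power spectrum phase: whiten the cross spectrum to
    keep only phase, then locate the peak of its inverse transform."""
    n = len(x1) + len(x2)
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12           # phase transform (PHAT weighting)
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(cc) - max_shift) / fs  # delay in seconds

# Synthetic check: one channel delayed by 5 samples at fs = 8000 Hz
fs = 8000
rng = np.random.default_rng(0)
s = rng.standard_normal(1024)
x1 = np.concatenate((np.zeros(5), s))  # delayed copy
x2 = np.concatenate((s, np.zeros(5)))  # reference
print(tdoa_csp(x1, x2, fs) * fs)       # ≈ 5 samples
```

In the full method, delays estimated this way for several microphone pairs would feed the least-mean-squares search for the source position.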

  14. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

    Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beamforming algorithms. In free-field tests with multiple interfering sound sources our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]

  15. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction, and minimizing the width of the main lobe. To tackle this problem a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
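    The first adaptive procedure, minimum variance distortionless response, chooses weights w = R⁻¹a / (aᴴR⁻¹a) for measurement covariance R and steering (manifold) vector a, which guarantees unit gain in the look direction while minimizing total output power. A minimal far-field sketch with diagonal loading for numerical stability; the paper's variant substitutes near-field, spherical-wave manifold vectors for the generic one used here:

```python
import numpy as np

def mvdr_weights(R, a, diag_load=1e-6):
    """Minimum variance distortionless response weights:
    w = R^-1 a / (a^H R^-1 a), with light diagonal loading."""
    M = R.shape[0]
    R = R + diag_load * (np.trace(R).real / M) * np.eye(M)
    Ri_a = np.linalg.solve(R, a)          # R^-1 a without explicit inverse
    return Ri_a / (a.conj() @ Ri_a)

# Distortionless check: the beamformer response in the look direction is 1
M = 8
a = np.exp(-1j * np.pi * np.arange(M) * np.sin(0.3))   # ULA steering vector
rng = np.random.default_rng(1)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T + np.eye(M)                          # Hermitian covariance
w = mvdr_weights(R, a)
print(abs(w.conj() @ a))  # ~1.0
```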

  16. Urban sound energy reduction by means of sound barriers

    NASA Astrophysics Data System (ADS)

    Iordache, Vlad; Ionita, Mihai Vlad

    2018-02-01

    In the urban environment, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy they produce. Acoustic barriers are the recommended method for reducing sound energy in the urban environment. The current sizing method for these barriers is laborious and impractical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies, and for several barrier heights. The case study presented in the article confirms the speed and ease of use of these abacuses in the design of acoustic barriers.
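    For context, a widely used engineering approximation for the insertion loss of a thin barrier is the Kurze-Anderson formula, driven by the Fresnel number N = 2δf/c, where δ is the path-length difference over the barrier edge. This is a standard textbook formula offered for orientation, not the abacus method of the article:

```python
import math

def barrier_insertion_loss(delta, freq, c=343.0):
    """Kurze-Anderson approximation of thin-barrier insertion loss (dB)
    from the path-length difference delta (m) between the diffracted
    and direct paths, at frequency freq (Hz). Line-of-sight cases
    (N <= 0) are crudely treated as giving no attenuation."""
    N = 2.0 * delta * freq / c            # Fresnel number
    if N <= 0:
        return 0.0
    x = math.sqrt(2.0 * math.pi * N)
    return 5.0 + 20.0 * math.log10(x / math.tanh(x))

print(barrier_insertion_loss(0.5, 1000.0))
```

For the illustrative values above (δ = 0.5 m at 1 kHz) the formula gives roughly 17-18 dB, and the predicted loss grows with frequency, consistent with barriers being sized per frequency band.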

  17. Acoustical measurements of sound fields between the stage and the orchestra pit inside an historical opera house

    NASA Astrophysics Data System (ADS)

    Sato, Shin-Ichi; Prodi, Nicola; Sakai, Hiroyuki

    2004-05-01

    To clarify the relationship between the sound fields of the stage and the orchestra pit, we conducted acoustical measurements in a typical historical opera house, the Teatro Comunale of Ferrara, Italy. Orthogonal factors based on the theory of subjective preference and other related factors were analyzed. First, the sound fields for a singer on the stage in relation to the musicians in the pit were analyzed. Then, the sound fields for performers in the pit in relation to the singers on the stage were considered. Because the physical factors vary depending on the location of the sound source, performers can move on the stage or in the pit to find the preferred sound field.

  18. Acoustic centering of sources measured by surrounding spherical microphone arrays.

    PubMed

    Hagai, Ilan Ben; Pollow, Martin; Vorländer, Michael; Rafaely, Boaz

    2011-10-01

    The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods. © 2011 Acoustical Society of America

  19. Positioning actuators in efficient locations for rendering the desired sound field using inverse approach

    NASA Astrophysics Data System (ADS)

    Cho, Wan-Ho; Ih, Jeong-Guon; Toi, Takeshi

    2015-12-01

    To render desired characteristics of a sound field, proper conditioning of the acoustic actuators in an array is required, but the source condition depends strongly on actuator position. Actuators located at positions that are inefficient for control would consume too much input power or become too sensitive to disturbing noise. Such actuators can be considered redundant and should be eliminated, as long as doing so does not significantly degrade the overall control performance. It is known that the inverse approach based on the acoustical holography concept, employing the transfer matrix between sources and field points as its core element, is useful for rendering the desired sound field. By investigating the information contained in the transfer matrix between actuators and field points, the linear independence of an actuator from the others in the array can be evaluated. To this end, the square of the right singular vector, which represents the radiation contribution from the source, can be used as an indicator. The position least efficient for fulfilling the desired sound field is the one having the smallest indicator value among all possible actuator positions. The elimination process continues one by one, or group by group, until the remaining number of actuators meets the preset number. Control examples of exterior and interior spaces are taken for validation. The results reveal that the present method for choosing the least dependent actuators, for a given number of actuators and field condition, is quite effective in realizing the desired sound field with a noisy input condition and in minimizing the required input power.
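    The elimination loop can be sketched as follows: take the SVD of the transfer matrix, score each actuator column from the squared right-singular-vector entries (weighted here by the squared singular values, which is our assumption, as the abstract does not give the exact weighting), and drop the lowest-scoring column until the preset count remains:

```python
import numpy as np

def prune_actuators(G, keep):
    """Greedily remove actuator columns of the transfer matrix G
    (field points x actuators) whose radiation-contribution indicator
    is smallest, until `keep` actuators remain. Returns the surviving
    column indices of the original G."""
    active = list(range(G.shape[1]))
    while len(active) > keep:
        U, s, Vh = np.linalg.svd(G[:, active], full_matrices=False)
        # Indicator per actuator: singular-value-weighted squared entries
        # of the right singular vectors (assumed weighting).
        contrib = (s[:, None] ** 2 * np.abs(Vh) ** 2).sum(axis=0)
        active.pop(int(np.argmin(contrib)))
    return active

# Toy example: actuator 2 is a scaled near-duplicate of actuator 0,
# so it contributes little independent radiation and is pruned first.
rng = np.random.default_rng(2)
G = rng.standard_normal((6, 4))
G[:, 2] = 0.1 * G[:, 0]
print(prune_actuators(G, 3))
```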

  20. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  1. A temporal and spatial analysis of anthropogenic noise sources affecting SNMR

    NASA Astrophysics Data System (ADS)

    Dalgaard, E.; Christiansen, P.; Larsen, J. J.; Auken, E.

    2014-11-01

    One of the biggest challenges when using the surface nuclear magnetic resonance (SNMR) method in urban areas is a relatively low signal level compared to a high level of background noise. To understand the temporal and spatial behavior of anthropogenic noise sources like powerlines and electric fences, we have developed a multichannel instrument, noiseCollector (nC), which measures the full noise spectrum up to 10 kHz. Combined with advanced signal processing, we can interpret the noise as seen by an SNMR instrument and also obtain insight into the more fundamental behavior of the noise. By quantifying the different noise sources, the stack size needed to reach a specified acceptable noise level for an SNMR sounding can be determined. Two common noise sources, electromagnetic fields stemming from powerlines and fences, are analyzed and show a 1/r² dependency in agreement with theoretical relations. A typical noise map, obtained with the nC instrument prior to an SNMR field campaign, clearly shows the location of noise sources, and thus we can efficiently determine the optimal location for the SNMR sounding from a noise perspective.

  2. Angle-Dependent Distortions in the Perceptual Topology of Acoustic Space

    PubMed Central

    2018-01-01

    By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals. A mathematical model that mimics the measured expansion of space can be used to successfully capture several previous findings in spatial auditory perception. The inverse of this function could be used alongside individualized head-related transfer functions and motion tracking to produce hyperstable virtual acoustic environments. PMID:29764312

  3. Investigation of the sound generation mechanisms for in-duct orifice plates.

    PubMed

    Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning

    2017-08-01

    Sound generation due to an orifice plate in a hard-walled flow duct, a configuration commonly used in air distribution systems (ADS) and flow meters, is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate from which the surface pressure distribution and correlation length are deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plates are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law, and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.
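    A velocity scaling exponent such as the reported U^3.9 is typically extracted by least-squares regression in log-log coordinates, since p ~ C·Uⁿ becomes linear there. A generic sketch on synthetic data (not the authors' code; the velocities and amplitude constant are illustrative):

```python
import numpy as np

def fit_velocity_exponent(U, p_rms):
    """Fit the exponent n in p_rms ~ C * U^n by linear least squares
    on log(p_rms) vs log(U); the slope is the scaling exponent."""
    n, logC = np.polyfit(np.log(U), np.log(p_rms), 1)
    return n

U = np.array([10.0, 15.0, 20.0, 30.0])   # hypothetical duct velocities, m/s
p = 2e-3 * U ** 3.9                       # synthetic data following U^3.9
print(round(fit_velocity_exponent(U, p), 2))  # → 3.9
```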

  4. Interdependent encoding of pitch, timbre and spatial location in auditory cortex

    PubMed Central

    Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.

    2009-01-01

    Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960

  5. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of left planum temporale in extracting the sound of interest among acoustical distracters and the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus − a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects − seems to be also a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  6. 3D Audio System

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Ames Research Center research into virtual reality led to the development of the Convolvotron, a high speed digital audio processing system that delivers three-dimensional sound over headphones. It consists of a two-card set designed for use with a personal computer. The Convolvotron's primary application is presentation of 3D audio signals over headphones. Four independent sound sources are filtered with large time-varying filters that compensate for motion. The perceived location of the sound remains constant. Possible applications are in air traffic control towers or airplane cockpits, hearing and perception research and virtual reality development.
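
The headphone rendering the Convolvotron performs can be illustrated with a toy binaural filter pair: convolve a mono source with a left-ear and a right-ear impulse response. Real HRTFs are measured, direction-dependent filters; the pure delay-and-attenuation pair below is only a stand-in:

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.1, 1.0 / fs)
mono = np.sin(2 * np.pi * 440 * t)        # mono source signal

# Toy "HRIRs" for a source on the listener's left: the right ear hears a
# delayed, attenuated copy (illustrative values, not measured filters).
itd_samples = int(0.0006 * fs)            # ~0.6 ms interaural time difference
hrir_left = np.zeros(64)
hrir_left[0] = 1.0
hrir_right = np.zeros(64)
hrir_right[itd_samples] = 0.5             # delayed and attenuated

left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right])        # 2 x N headphone feed
```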

  7. Aeroacoustic analysis of the human phonation process based on a hybrid acoustic PIV approach

    NASA Astrophysics Data System (ADS)

    Lodermeyer, Alexander; Tautz, Matthias; Becker, Stefan; Döllinger, Michael; Birk, Veronika; Kniesburges, Stefan

    2018-01-01

    The detailed analysis of sound generation in human phonation is severely limited because access to the laryngeal flow region is highly restricted. Consequently, the physical basis of the underlying fluid-structure-acoustic interaction that describes the primary mechanism of sound production is not yet fully understood. Therefore, we propose the implementation of a hybrid acoustic PIV procedure to evaluate aeroacoustic sound generation during voice production within a synthetic larynx model. Focusing on the flow field downstream of synthetic, aerodynamically driven vocal folds, we calculated acoustic source terms based on the velocity fields obtained by time-resolved high-speed PIV applied to the mid-coronal plane. The radiation of these sources into the acoustic far field was numerically simulated and the resulting acoustic pressure was finally compared with experimental microphone measurements. We identified the tonal sound to be generated downstream in a small region close to the vocal folds. The simulation of the sound propagation underestimated the tonal components, whereas the broadband sound was well reproduced. Our results demonstrate the feasibility of locating aeroacoustic sound sources inside a synthetic larynx using a hybrid acoustic PIV approach. Although the technique employs a 2D-limited flow field, it accurately reproduces the basic characteristics of the aeroacoustic field in our larynx model. In future studies, not only the aeroacoustic mechanisms of normal phonation will be assessable, but also the sound generation of voice disorders can be investigated more profoundly.

  8. Active noise control using a steerable parametric array loudspeaker.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2010-06-01

    Active noise control enables sound suppression at designated control points, but the sound pressure at locations other than the targets is likely to increase. The reason is clear: a control source normally radiates sound omnidirectionally. To cope with this problem, this paper introduces a parametric array loudspeaker (PAL) which produces a spatially focused sound beam due to the attribute of ultrasound used for carrier waves, thereby allowing one to suppress the sound pressure at the designated point without causing spillover in the whole sound field. First the fundamental characteristics of PAL are overviewed. The scattered pressure in the near field contributed by the source strength of PAL is then described, which is needed for the design of an active noise control system. Furthermore, the optimal control law for minimizing the sound pressure at control points is derived, the control effect being investigated analytically and experimentally. With a view to tracking a moving target point, a steerable PAL based upon a phased array scheme is presented, with the result that the generation of a moving zone of quiet becomes possible without mechanically rotating the PAL. An experiment is finally conducted, demonstrating the validity of the proposed method.

  9. Eavesdropping to Find Mates: The Function of Male Hearing for a Cicada-Hunting Parasitoid Fly, Emblemasoma erro (Diptera: Sarcophagidae)

    PubMed Central

    Stucky, Brian J.

    2016-01-01

    Females of several species of dipteran parasitoids use long-range hearing to locate hosts for their offspring by eavesdropping on the acoustic mating calls of other insects. Males of these acoustic eavesdropping parasitoids also have physiologically functional ears, but so far, no adaptive function for male hearing has been discovered. I investigated the function of male hearing for the sarcophagid fly Emblemasoma erro Aldrich, an acoustic parasitoid of cicadas, by testing the hypothesis that both male and female E. erro use hearing to locate potential mates. I found that both male and nongravid female E. erro perform phonotaxis to the sounds of calling cicadas, that male flies engage in short-range, mate-finding behavior once they arrive at a sound source, and that encounters between females and males at a sound source can lead to copulation. Thus, cicada calling songs appear to serve as a mate-finding cue for both sexes of E. erro. Emblemasoma erro’s mate-finding behavior is compared to that of other sarcophagid flies, other acoustic parasitoids, and nonacoustic eavesdropping parasitoids. PMID:27382133

  10. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-sheng R.; Allen, Christopher S.

    2009-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with an interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound-blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match the reverberation levels of the ISS US Lab interior for the speech frequency bands, i.e., 0.5, 1, 2, and 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of the Thinsulate was modeled in three ways: the Sabine equation with the measured mockup interior reverberation time T60, a layup model based on past impedance-tube testing, and the layup model plus an air absorption correction.
The evaluation/validation was carried out by acquiring octave-band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, despite its more complicated shape. Additionally in FY09, background noise simulation at various NC (Noise Criterion) levels and MRT (Modified Rhyme Test) testing were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume compatible with acceptable crew voice communication. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and the Orion prime- and sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) limiting pre- and post-landing SIL was proposed.
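
The intensity-to-power step described above is a direct surface integral: sum the measured normal intensity times the area of each measured face. A minimal sketch with made-up intensities and face areas for the five measured surfaces:

```python
import numpy as np

# Hypothetical time-averaged normal intensities (W/m^2) and areas (m^2) for
# the five measured faces of the box enclosing the fan; the blocked bottom
# face is omitted. Values are illustrative, not the report's measurements.
intensity = np.array([2.0e-6, 1.5e-6, 1.2e-6, 0.8e-6, 1.0e-6])  # top/front/back/right/left
area      = np.array([0.25,   0.20,   0.20,   0.15,   0.15])

sound_power = np.sum(intensity * area)        # radiated sound power, W
Lw = 10.0 * np.log10(sound_power / 1e-12)     # sound power level, dB re 1 pW
```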

  11. An experimental investigation of sound radiation from a duct with a circumferentially varying liner

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.; Silcox, R. J.

    1983-01-01

    The radiation of sound from an asymmetrically lined duct is experimentally studied for various hard-walled standing mode sources. Measurements were made of the directivity of the radiated field and amplitude reflection coefficients in the hard-walled source section. These measurements are compared with baseline hardwall and uniformly lined duct data. The dependence of these characteristics on mode number and angular location of the source is investigated. A comparison between previous theoretical calculations and the experimentally measured results is made and in general good agreement is obtained. For the several cases presented an asymmetry in the liner impedance distribution was found to produce related asymmetries in the radiated acoustic field.

  12. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar-geometry arrays. Second, a subspace model-errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain and phase perturbations and the positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can then be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
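
The paper's W2D-MUSIC operates on a broadband near-field model with calibrated gain and phase errors; the much simpler narrowband far-field MUSIC below only illustrates the subspace idea it builds on (array geometry, source angle, and noise level are all made up):

```python
import numpy as np

# Narrowband far-field MUSIC on an 8-element uniform linear array.
rng = np.random.default_rng(0)
M, d_over_lambda, snapshots = 8, 0.5, 200
true_deg = -20.0

def steering(theta_deg):
    m = np.arange(M)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.radians(theta_deg)))

a = steering(true_deg)
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((M, snapshots))
                            + 1j * rng.standard_normal((M, snapshots)))

R = X @ X.conj().T / snapshots          # sample covariance
w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
En = V[:, :-1]                          # noise subspace (one source assumed)

grid = np.arange(-90.0, 90.5, 0.5)
P = [1.0 / np.linalg.norm(En.conj().T @ steering(g))**2 for g in grid]
est_deg = grid[int(np.argmax(P))]       # pseudospectrum peak near -20 deg
```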

  13. Noise in a Laboratory Animal Facility from the Human and Mouse Perspectives

    PubMed Central

    Reynolds, Randall P; Kinard, Will L; Degraff, Jesse J; Leverage, Ned; Norton, John N

    2010-01-01

    The current study was performed to understand the level of sound produced by ventilated racks, animal transfer stations, and construction equipment that mice in ventilated cages hear relative to what humans would hear in the same environment. Although the ventilated rack and animal transfer station both produced sound pressure levels above the ambient level within the human hearing range, the sound pressure levels within the mouse hearing range did not increase above ambient noise from either noise source. When various types of construction equipment were used 3 ft from the ventilated rack, the sound pressure level within the mouse hearing range was increased but to a lesser degree for each implement than were the sound pressure levels within the human hearing range. At more distant locations within the animal facility, sound pressure levels from the large jackhammer within the mouse hearing range decreased much more rapidly than did those in the human hearing range, indicating that less of the sound is perceived by mice than by humans. The relatively high proportion of low-frequency sound produced by the shot blaster, used without the metal shot that it normally uses to clean concrete, increased the sound pressure level above the ambient level for humans but did not increase sound pressure levels above ambient noise for mice at locations greater than 3 ft from inside of the cage, where sound was measured. This study demonstrates that sound clearly audible to humans in the animal facility may be perceived to a lesser degree or not at all by mice, because of the frequency content of the sound. PMID:20858361

  14. Evaluation of substitution monopole models for tire noise sound synthesis

    NASA Astrophysics Data System (ADS)

    Berckmans, D.; Kindt, P.; Sas, P.; Desmet, W.

    2010-01-01

    Due to the considerable efforts in engine noise reduction, tire noise has become one of the major sources of passenger car noise nowadays and the demand for accurate prediction models is high. A rolling tire is therefore experimentally characterized by means of the substitution monopole technique, suiting a general sound synthesis approach with a focus on perceived sound quality. The running tire is substituted by a monopole distribution covering the static tire. All monopoles have mutual phase relationships and a well-defined volume velocity distribution which is derived by means of the airborne source quantification technique; i.e. by combining static transfer function measurements with operating indicator pressure measurements close to the rolling tire. Models with varying numbers/locations of monopoles are discussed and the application of different regularization techniques is evaluated.
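
The airborne source quantification step, solving measured pressures = transfer matrix times monopole volume velocities, is typically a regularized least-squares inversion. A Tikhonov-regularized sketch in which the transfer matrix and source strengths are random stand-ins, not tire data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_mics, n_monopoles = 12, 6

# Hypothetical complex transfer matrix H (mic pressures per unit volume
# velocity) and true monopole strengths q_true.
H = rng.standard_normal((n_mics, n_monopoles)) \
    + 1j * rng.standard_normal((n_mics, n_monopoles))
q_true = rng.standard_normal(n_monopoles) + 1j * rng.standard_normal(n_monopoles)
p = H @ q_true + 0.01 * (rng.standard_normal(n_mics)
                         + 1j * rng.standard_normal(n_mics))

# Tikhonov regularization: (H^H H + beta I) q = H^H p
beta = 1e-3
A = H.conj().T @ H + beta * np.eye(n_monopoles)
q_est = np.linalg.solve(A, H.conj().T @ p)
rel_err = np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true)
```

The regularization parameter beta trades off noise amplification against bias; choosing it (e.g. by L-curve or cross-validation) is the crux of the techniques the abstract evaluates.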

  15. A sound budget for the southeastern Bering Sea: measuring wind, rainfall, shipping, and other sources of underwater sound.

    PubMed

    Nystuen, Jeffrey A; Moore, Sue E; Stabeno, Phyllis J

    2010-07-01

    Ambient sound in the ocean contains quantifiable information about the marine environment. A passive aquatic listener (PAL) was deployed at a long-term mooring site in the southeastern Bering Sea from 27 April through 28 September 2004. This was a chain mooring that produced considerable mechanical clanking. However, the sampling strategy of the PAL filtered out this noise and allowed the background sound field to be quantified for natural signals. Distinctive signals include the sound from wind, drizzle and rain. These sources dominate the sound budget and their intensity can be used to quantify wind speed and rainfall rate. The wind speed measurement has an accuracy of +/-0.4 m s(-1) when compared to a buoy-mounted anemometer. The rainfall rate measurement is consistent with a land-based measurement in the Aleutian chain at Cold Bay, AK (170 km south of the mooring location). Other identifiable sounds include ships and short transient tones. The PAL was designed to reject transients in the range important for quantification of wind speed and rainfall, but serendipitously recorded peaks in the sound spectrum between 200 Hz and 3 kHz. Some of these tones are consistent with whale calls, but most are apparently associated with mooring self-noise.

  16. An open access database for the evaluation of heart sound algorithms.

    PubMed

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.

  17. Effect of diffusive and nondiffusive surfaces combinations on sound diffusion

    NASA Astrophysics Data System (ADS)

    Shafieian, Masoume; Kashani, Farokh Hodjat

    2010-05-01

    One goal of room acoustics, especially in small to medium rooms, is sound diffusion at low frequencies, which has been the subject of much research. Sound diffusion is an important consideration in acoustics because it minimizes the coherent reflections that cause problems. It also tends to make an enclosed space sound larger than it is. Diffusion is an excellent alternative or complement to sound absorption in acoustic treatment because it removes little energy, which means it can be used to reduce reflections effectively while still leaving an ambient or live-sounding space. The distribution of diffusive and nondiffusive surfaces on room walls affects sound diffusion in the room, but the amount, combination, and location of these surfaces remain open questions. This paper investigates the effects of these factors on the room acoustic frequency response in different parts of the room with different source-receiver locations. A room acoustic model based on the wave method is implemented, which is accurate and convenient for low frequencies in such rooms. Different distributions of acoustic surfaces on the room walls are introduced to the model and room frequency response results are calculated. For comparison, some measurement results are presented. Finally, some suggestions are made for obtaining a smoother frequency response in small and medium rooms.
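
At low frequencies a wave-based room model is dominated by the room's modes, and for a rigid-walled rectangular room the mode frequencies are analytic, which makes a useful sanity check for such a model. A sketch with illustrative small-room dimensions:

```python
import numpy as np
from itertools import product

# Axial/tangential/oblique mode frequencies of a rigid-walled rectangular
# room: f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2).
c = 343.0                       # speed of sound, m/s
Lx, Ly, Lz = 5.0, 4.0, 3.0      # example room dimensions, m

modes = []
for nx, ny, nz in product(range(4), repeat=3):
    if (nx, ny, nz) == (0, 0, 0):
        continue                # omit the trivial (0,0,0) "mode"
    f = (c / 2.0) * np.sqrt((nx / Lx)**2 + (ny / Ly)**2 + (nz / Lz)**2)
    modes.append(((nx, ny, nz), f))
modes.sort(key=lambda m: m[1])

lowest = modes[0]               # first axial mode along the longest dimension
```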

  18. Identification and tracking of particular speaker in noisy environment

    NASA Astrophysics Data System (ADS)

    Sawada, Hideyuki; Ohkado, Minoru

    2004-10-01

    Humans can exchange information smoothly by voice in varied situations, such as noisy environments, crowds, and the presence of multiple speakers. We can detect the position of a sound source in 3D space, extract a particular sound from a mixture of sounds, and recognize who is talking. Realizing this mechanism with a computer enables new applications: recording sound with high quality by reducing noise, presenting a clarified sound, and achieving microphone-free speech recognition by extracting a particular sound. This paper introduces real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.

  19. Reproduction of a higher-order circular harmonic field using a linear array of loudspeakers.

    PubMed

    Lee, Jung-Min; Choi, Jung-Woo; Kim, Yang-Hann

    2015-03-01

    This paper presents a direct formula for reproducing a sound field consisting of higher-order circular harmonics with polar phase variation. Sound fields with phase variation can be used for synthesizing various spatial attributes, such as the perceived width or the location of a virtual sound source. To reproduce such a sound field using a linear loudspeaker array, the driving function of the array is derived in the format of an integral formula. The proposed function shows fewer reproduction errors than a conventional formula focused on magnitude variations. In addition, analysis of the sweet spot reveals that its shape can be asymmetric, depending on the order of harmonics.

  20. Quench dynamics in SRF cavities: can we locate the quench origin with 2nd sound?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maximenko, Yulia; /Moscow, MIPT; Segatskov, Dmitri A.

    2011-03-01

    A newly developed method of locating quenches in SRF cavities by detecting second-sound waves has been gaining popularity in SRF laboratories. The technique is based on measurements of time delays between the quench as determined by the RF system and arrival of the second-sound wave to the multiple detectors placed around the cavity in superfluid helium. Unlike multi-channel temperature mapping, this approach requires only a few sensors and simple readout electronics; it can be used with SRF cavities of almost arbitrary shape. One of its drawbacks is that being an indirect method it requires one to solve an inverse problem to find the location of a quench. We tried to solve this inverse problem by using a parametric forward model. By analyzing the data we found that the approximation where the second-sound emitter is a near-singular source does not describe the physical system well enough. A time-dependent analysis of the quench process can help us to put forward a more adequate model. We present here our current algorithm to solve the inverse problem and discuss the experimental results.
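
Under the near-singular-source approximation the paper finds inadequate, the inverse problem reduces to multilateration from arrival delays. A grid-search sketch with illustrative sensor positions and a second-sound speed of roughly 20 m/s (He II near 1.9 K):

```python
import numpy as np

c2 = 20.0                                     # second-sound speed, m/s (approx.)
# Hypothetical OST sensor positions around the cavity (m) and a true quench
# point used to synthesize noiseless arrival delays.
sensors = np.array([[0.10, 0.00, 0.00],
                    [0.00, 0.10, 0.00],
                    [-0.10, 0.00, 0.05],
                    [0.00, -0.10, 0.05],
                    [0.00, 0.00, 0.12]])
quench_true = np.array([0.03, -0.02, 0.04])
delays = np.linalg.norm(sensors - quench_true, axis=1) / c2

# Coarse grid search minimizing the squared residuals |x - s_i|/c2 - t_i.
g = np.linspace(-0.1, 0.1, 41)
best, best_err = None, np.inf
for x in g:
    for y in g:
        for z in np.linspace(0.0, 0.12, 25):
            r = np.linalg.norm(sensors - np.array([x, y, z]), axis=1) / c2 - delays
            e = np.dot(r, r)
            if e < best_err:
                best, best_err = np.array([x, y, z]), e
quench_est = best
```

With noiseless delays and the true point on the grid, the search recovers it exactly; real data would need finer optimization and, as the paper argues, a non-point emitter model.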

  1. Assessment of sound levels in a neonatal intensive care unit in tabriz, iran.

    PubMed

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-03-01

    High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.
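
The reported metrics are standard level statistics: Leq is an energy average and L10 the level exceeded 10% of the time. A minimal sketch from short-interval A-weighted samples (the values are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical sequence of short-interval A-weighted levels (dBA).
levels_dba = np.array([58.0, 61.5, 64.0, 70.2, 66.8, 60.3, 63.1, 67.5])

# Leq: convert to intensity ratios, average, convert back to dB.
leq = 10.0 * np.log10(np.mean(10.0 ** (levels_dba / 10.0)))

# L10: level exceeded 10% of the time, i.e. the 90th percentile.
l10 = np.percentile(levels_dba, 90)
```

Because loud intervals dominate the energy average, Leq always sits at or above the arithmetic mean of the sampled levels.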

  3. Sounds and source levels from bowhead whales off Pt. Barrow, Alaska.

    PubMed

    Cummings, W C; Holliday, D V

    1987-09-01

    Sounds were recorded from bowhead whales migrating past Pt. Barrow, AK, to the Canadian Beaufort Sea. They mainly consisted of various low-frequency (25- to 900-Hz) moans and well-defined sound sequences organized into "song" (20-5000 Hz) recorded with our 2.46-km hydrophone array suspended from the ice. Songs were composed of up to 20 repeated phrases (mean, 10) which lasted up to 146 s (mean, 66.3). Several bowhead whales often were within acoustic range of the array at once, but usually only one sang at a time. Vocalizations exhibited diurnal peaks of occurrence (0600-0800, 1600-1800 h). Sounds which were located in the horizontal plane had peak source spectrum levels as follows--44 moans: 129-178 dB re: 1 microPa, 1 m (median, 159); 3 garglelike utterances: 152, 155, and 169 dB; 33 songs: 158-189 dB (median, 177), all presumably from different whales. Based on ambient noise levels, measured total propagation loss, and whale sound source levels, our detection of whale sounds was theoretically noise-limited beyond 2.5 km (moans) and beyond 10.7 km (songs), a model supported by actual localizations. This study showed that over much of the shallow Arctic and sub-Arctic waters, underwater communications of the bowhead whale would be limited to much shorter ranges than for other large whales in lower latitude, deep-water regions.
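
The noise-limited detection ranges follow from the passive sonar equation: a call is detectable while the received level SL - TL(r) stays above the noise level. A sketch with a practical-spreading loss model and illustrative levels (the paper used measured propagation loss, so these numbers do not reproduce its 2.5 and 10.7 km figures):

```python
import numpy as np

def received_level(sl_db, r_m):
    """Received level with practical spreading TL = 15*log10(r), a common
    shallow-water compromise between spherical and cylindrical spreading."""
    return sl_db - 15.0 * np.log10(r_m)

sl_song, nl = 177.0, 120.0          # source level dB re 1 uPa @ 1 m; noise level
r = np.logspace(0, 5, 200001)       # candidate ranges, 1 m .. 100 km
detectable = received_level(sl_song, r) >= nl
r_max = r[detectable][-1]           # largest range still above the noise
```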

  4. Neural population encoding and decoding of sound source location across sound level in the rabbit inferior colliculus

    PubMed Central

    Delgutte, Bertrand

    2015-01-01

    At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
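
A maximum-likelihood population decoder over Poisson spike counts, of the kind used here, can be sketched with made-up sigmoidal azimuth tuning curves standing in for recorded IC neurons:

```python
import numpy as np

rng = np.random.default_rng(2)
az_grid = np.arange(-90, 91, 5)               # candidate azimuths (deg)
n_neurons = 30
slopes = rng.uniform(0.03, 0.10, n_neurons)   # hypothetical tuning parameters
midpoints = rng.uniform(-60.0, 60.0, n_neurons)

def rates(az):
    """Mean spike counts for all neurons at azimuth az (sigmoidal tuning)."""
    return 1.0 + 20.0 / (1.0 + np.exp(-slopes * (az - midpoints)))

def decode(counts):
    """Grid ML estimate: maximize the Poisson log likelihood sum(k*log f - f)."""
    ll = [np.sum(counts * np.log(rates(a)) - rates(a)) for a in az_grid]
    return int(az_grid[int(np.argmax(ll))])

true_az = 30.0
az_hat_exact = decode(rates(true_az))               # expected counts: exact
az_hat_noisy = decode(rng.poisson(rates(true_az)))  # scatters around true_az
```

With the expected counts the likelihood peaks exactly at the true azimuth; Poisson variability spreads single-trial estimates around it, which is what the population decoder's performance quantifies.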

  5. USAF bioenvironmental noise data handbook. Volume 161: A/M32A-86 generator set, diesel engine driven

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-05-01

    The A/M32A-86 generator set is a diesel engine driven source of electrical power used for the starting of aircraft, and for ground maintenance. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at normal rated/loaded conditions. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise level, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 - 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.

  6. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    PubMed Central

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues characteristic of reverberant speech. This stimulus, named the amplitude-modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics, it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
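
An amplitude-modulated binaural beat is straightforward to synthesize: detune the carrier between the ears by the beat frequency and impose a shared AM envelope, so the interaural phase difference sweeps once per beat cycle. A sketch with illustrative parameters (not the study's stimulus values):

```python
import numpy as np

fs = 48000
dur = 1.0
t = np.arange(int(fs * dur)) / fs
fc, f_beat, f_mod = 500.0, 1.0, 8.0      # carrier, beat, modulation (Hz)

# Shared raised-cosine AM envelope, 0..1.
envelope = 0.5 * (1.0 - np.cos(2 * np.pi * f_mod * t))

left = envelope * np.sin(2 * np.pi * fc * t)
right = envelope * np.sin(2 * np.pi * (fc + f_beat) * t)  # IPD drifts at f_beat
stimulus = np.stack([left, right])       # 2 x N stereo stimulus
```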

  7. Fluid dynamic aspects of jet noise generation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The location of the noise sources within jet flows, their relative importance to the overall radiated field, and the mechanisms by which noise generation occurs, are studied by detailed measurements of the level and spectral composition of the radiated sound in the far field. Directional microphones are used to isolate the contribution to the radiated sound of small regions of the flow, and for cross-correlation between the radiated acoustic field and either the velocity fluctuations or the pressure fluctuations in the source field. Acquired data demonstrate the supersonic convection of the acoustic field and the resulting limited upstream influence of the signal source, as well as a possible increase of signal strength as it propagates toward the centerline of the flow.

  8. Evaluation of the Acoustic Measurement Capability of the NASA Langley V/STOL Wind Tunnel Open Test Section with Acoustically Absorbent Ceiling and Floor Treatments

    NASA Technical Reports Server (NTRS)

    Theobald, M. A.

    1978-01-01

    The single source location used for helicopter model studies was utilized in a study to determine the distances and directions upstream of the model at which accurate measurements of the direct acoustic field could be obtained. The method was to measure the decrease of sound pressure level with distance from a noise source and thereby determine the hall radius as a function of frequency and direction. Test arrangements and procedures are described. Graphs show the normalized sound pressure level versus distance curves for the glass-fiber floor treatment and for the foam floor treatment.

  9. Underwater Sound: Deep-Ocean Propagation: Variations of temperature and pressure have great influence on the propagation of sound in the ocean.

    PubMed

    Frosch, R A

    1964-11-13

    The absorption of sound in sea water varies markedly with frequency, being much greater at high than at low frequencies. At frequencies below several kilocycles per second, however, it is small enough to permit propagation over thousands of miles. Oceanographic factors produce variations in sound velocity with depth, and these variations have a strong influence on long-range propagation. The deep ocean is characterized by a strong sound channel, generally at a depth of 500 to 1500 meters. In addition to guided propagation in this channel, the velocity structure gives rise to strongly peaked propagation from surface sources to surface receivers 48 to 56 kilometers away, with shadow zones of weak intensity in between. The near-surface shadow zone, in the latter case, may be filled in by bottom reflections or by near-surface guided propagation due to a surface isothermal layer. The near-surface shadow zones can be avoided with certainty only by locating sources and receivers deep in the ocean.
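    The depth dependence of sound velocity described here is often idealized with Munk's canonical profile. The sketch below uses the standard textbook constants (not values from this article) to show the velocity minimum that forms the deep sound channel.

```python
import math

def munk_profile(depth_m, z_axis=1000.0, c_axis=1500.0, eps=0.00737):
    """Munk's idealized deep-ocean sound-speed profile, in m/s.

    The minimum at z_axis creates the deep sound (SOFAR) channel:
    rays launched near the axis are refracted back toward it from
    above and below, giving long-range guided propagation.
    """
    zt = 2.0 * (depth_m - z_axis) / z_axis  # dimensionless scaled depth
    return c_axis * (1.0 + eps * (zt - 1.0 + math.exp(-zt)))

# Sound speed is higher near the surface and the bottom than at the axis:
speeds = {z: round(munk_profile(z), 1) for z in (0, 500, 1000, 2000, 4000)}
```

    Evaluating the profile at a few depths shows the minimum at the channel axis, consistent with the 500 - 1500 meter channel depth quoted above.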

  10. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically of sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ: the two ears are contained in one air sac and are connected by a cuticular bridge that has a flexible spring-like structure at its center. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents and present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons.
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic walking behavior in Ormia ochracea. I also quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.

  11. Sensing of Particular Speakers for the Construction of Voice Interface Utilized in Noisy Environment

    NASA Astrophysics Data System (ADS)

    Sawada, Hideyuki; Ohkado, Minoru

    Humans are able to exchange information smoothly by voice under difficult conditions, such as in a noisy crowd or in the presence of plural speakers. We are able to detect the position of a sound source in 3D space, extract a particular sound from mixed sounds, and recognize who is talking. Realizing this mechanism with a computer opens up new applications: recording sound with high quality by reducing noise, presenting a clarified sound, and realizing microphone-free speech recognition by extracting a particular sound. This paper introduces the real-time detection and identification of a particular speaker in a noisy environment using a microphone array, based on the location of the speaker and individual voice characteristics. The study will be applied to develop an adaptive auditory system for a mobile robot that collaborates with a factory worker.

  12. Accurate Sound Localization in Reverberant Environments is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    PubMed Central

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-01-01

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. PMID:19376072

  13. Eavesdropping to Find Mates: The Function of Male Hearing for a Cicada-Hunting Parasitoid Fly, Emblemasoma erro (Diptera: Sarcophagidae).

    PubMed

    Stucky, Brian J

    2016-01-01

    Females of several species of dipteran parasitoids use long-range hearing to locate hosts for their offspring by eavesdropping on the acoustic mating calls of other insects. Males of these acoustic eavesdropping parasitoids also have physiologically functional ears, but so far, no adaptive function for male hearing has been discovered. I investigated the function of male hearing for the sarcophagid fly Emblemasoma erro Aldrich, an acoustic parasitoid of cicadas, by testing the hypothesis that both male and female E. erro use hearing to locate potential mates. I found that both male and nongravid female E. erro perform phonotaxis to the sounds of calling cicadas, that male flies engage in short-range, mate-finding behavior once they arrive at a sound source, and that encounters between females and males at a sound source can lead to copulation. Thus, cicada calling songs appear to serve as a mate-finding cue for both sexes of E. erro. Emblemasoma erro's mate-finding behavior is compared to that of other sarcophagid flies, other acoustic parasitoids, and nonacoustic eavesdropping parasitoids. © The Author 2016. Published by Oxford University Press on behalf of the Entomological Society of America.

  14. Atmospheric sound propagation

    NASA Technical Reports Server (NTRS)

    Cook, R. K.

    1969-01-01

    The propagation of sound waves at infrasonic frequencies (oscillation periods 1.0 - 1000 seconds) in the atmosphere is being studied by a network of seven stations separated geographically by distances of the order of thousands of kilometers. The stations measure the following characteristics of infrasonic waves: (1) the amplitude and waveform of the incident sound pressure, (2) the direction of propagation of the wave, (3) the horizontal phase velocity, and (4) the distribution of sound wave energy at various frequencies of oscillation. Infrasonic sources identified and studied include the aurora borealis, tornadoes, volcanoes, gravity waves on the oceans, earthquakes, and atmospheric instability waves caused by winds at the tropopause. Waves of unknown origin seem to radiate from several geographical locations, including one in Argentina.

  15. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    PubMed

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). 
CROS use also impaired the ability to use level cues to localise sounds originating from straight ahead (0°). The re-routing of sounds can restrict access to the monaural cues that provide a basis for determining sound location in the horizontal plane. Perhaps encouragingly, the results suggest that both monaural level and spectral cues may not be disrupted entirely by signal re-routing and that it may still be possible to reliably identify sounds originating on the hearing side. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Impedance Eduction in Ducts with Higher-Order Modes and Flow

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.

    2009-01-01

    An impedance eduction technique, previously validated for ducts with plane waves at the source and duct termination planes, has been extended to support higher-order modes at these locations. Inputs for this method are the acoustic pressures along the source and duct termination planes, and along a microphone array located in a wall either adjacent or opposite to the test liner. A second impedance eduction technique is then presented that eliminates the need for the microphone array. The integrity of both methods is tested using three sound sources, six Mach numbers, and six selected frequencies. Results are presented for both a hardwall and a test liner (with known impedance) consisting of a perforated plate bonded to a honeycomb core. The primary conclusion of the study is that the second method performs well in the presence of higher-order modes and flow. However, the first method performs poorly when most of the microphones are located near acoustic pressure nulls. The negative effects of the acoustic pressure nulls can be mitigated by a judicious choice of the mode structure in the sound source. The paper closes by using the first impedance eduction method to design a rectangular array of 32 microphones for accurate impedance eduction in the NASA LaRC Curved Duct Test Rig in the presence of expected measurement uncertainties, higher order modes, and mean flow.

  17. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE PAGES

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    2017-02-04

    The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp.], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound, as well as trends in location over time, with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the importance for future research of utilizing accurate localization systems, different species, and validated sound transmission distances, and of considering different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  19. Acoustic Localization with Infrasonic Signals

    NASA Astrophysics Data System (ADS)

    Threatt, Arnesha; Elbing, Brian

    2015-11-01

    Numerous geophysical and anthropogenic events emit infrasonic frequencies (<20 Hz), including volcanoes, hurricanes, wind turbines and tornadoes. These sounds, which cannot be heard by the human ear, can be detected at large distances (in excess of 100 miles) because low-frequency acoustic signals decay very slowly in the atmosphere. Thus infrasound could be used for long-range, passive monitoring and detection of these events. An array of microphones separated by known distances can be used to locate a given source, a technique known as acoustic localization. However, acoustic localization with infrasound is particularly challenging due to contamination from other signals, sensitivity to wind noise, and the difficulty of producing a trusted source for system development. The objective of the current work is to create an infrasonic source using a propane torch wand or a subwoofer and to locate it using multiple infrasonic microphones. Preliminary results are presented from various microphone configurations used to locate the source.
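    The array-based localization described above can be sketched for the simplest two-microphone, far-field case. The sampling rate, microphone spacing, and cross-correlation delay estimator below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def tdoa_bearing(sig_a, sig_b, fs, spacing_m, c=343.0):
    """Bearing estimate (degrees off the array broadside) for a
    far-field source, from the time difference of arrival between two
    microphones, estimated as the peak of their cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)        # delay in samples
    sin_theta = np.clip(c * (lag / fs) / spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Synthetic check: the same noise burst, arriving 5 samples later at mic B.
rng = np.random.default_rng(0)
fs, spacing = 8000, 0.5
burst = rng.standard_normal(2000)
at_a = np.concatenate([burst, np.zeros(5)])
at_b = np.concatenate([np.zeros(5), burst])
bearing = tdoa_bearing(at_a, at_b, fs, spacing)  # |bearing| ~ 25 degrees
```

    With more than two microphones, pairwise delay estimates of this kind can be intersected to localize the source rather than only estimating a bearing, which is the configuration question the presentation addresses.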

  20. Near-field sound radiation of fan tones from an installed turbofan aero-engine.

    PubMed

    McAlpine, Alan; Gaffney, James; Kingan, Michael J

    2015-09-01

    The development of a distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is reported. The key objective is to examine a canonical problem: how to predict the pressure field due to a distributed source located near an infinite, rigid cylinder. This canonical problem is a simple representation of an installed turbofan, where the distributed source is based on the pressure pattern generated by a spinning duct mode, and the rigid cylinder represents an aircraft fuselage. The radiation of fan tones can be modelled in terms of spinning modes. In this analysis, based on duct modes, theoretical expressions for the near-field acoustic pressures on the cylinder, or at the same locations without the cylinder, have been formulated. Simulations of the near-field acoustic pressures are compared against measurements obtained from a fan rig test. Also, the installation effect is quantified by calculating the difference in the sound pressure levels with and without the adjacent cylindrical fuselage. Results are shown for the blade passing frequency fan tone radiated at a supersonic fan operating condition.

  1. Hemispheric lateralization in an analysis of speech sounds. Left hemisphere dominance replicated in Japanese subjects.

    PubMed

    Koyama, S; Gunji, A; Yabe, H; Oiwa, S; Akahane-Yamada, R; Kakigi, R; Näätänen, R

    2000-09-01

    Evoked magnetic responses to speech sounds [R. Näätänen, A. Lehtokoski, M. Lennes, M. Cheour, M. Huotilainen, A. Iivonen, M. Vainio, P. Alku, R.J. Ilmoniemi, A. Luuk, J. Allik, J. Sinkkonen and K. Alho, Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385 (1997) 432-434.] were recorded from 13 right-handed Japanese subjects. Infrequently presented vowels ([o]) among repetitive vowels ([e]) elicited the magnetic counterpart of mismatch negativity, MMNm (bilaterally in nine subjects; in the left hemisphere alone in three subjects; in the right hemisphere alone in one subject). The estimated source of the MMNm was stronger in the left than in the right auditory cortex. The sources were also located more posteriorly in the left than in the right auditory cortex. These findings are consistent with the results obtained in Finnish [R. Näätänen, A. Lehtokoski, M. Lennes, M. Cheour, M. Huotilainen, A. Iivonen, M. Vainio, P. Alku, R.J. Ilmoniemi, A. Luuk, J. Allik, J. Sinkkonen and K. Alho, Language-specific phoneme representations revealed by electric and magnetic brain responses. Nature, 385 (1997) 432-434.] [T. Rinne, K. Alho, P. Alku, M. Holi, J. Sinkkonen, J. Virtanen, O. Bertrand and R. Näätänen, Analysis of speech sounds is left-hemisphere predominant at 100-150 ms after sound onset. Neuroreport, 10 (1999) 1113-1117.] and English [K. Alho, J.F. Connolly, M. Cheour, A. Lehtokoski, M. Huotilainen, J. Virtanen, R. Aulanko and R.J. Ilmoniemi, Hemispheric lateralization in preattentive processing of speech sounds. Neurosci. Lett., 258 (1998) 9-12.] subjects. Instead of the P1m observed in Finnish [M. Tervaniemi, A. Kujala, K. Alho, J. Virtanen, R.J. Ilmoniemi and R. Näätänen, Functional specialization of the human auditory cortex in processing phonetic and musical sounds: A magnetoencephalographic (MEG) study. Neuroimage, 9 (1999) 330-336.] and English [K. Alho, J.F. Connolly, M. Cheour, A. Lehtokoski, M. Huotilainen, J. Virtanen, R. Aulanko and R.J. Ilmoniemi, Hemispheric lateralization in preattentive processing of speech sounds. Neurosci. Lett., 258 (1998) 9-12.] subjects, an M60 response was elicited by both rare and frequent sounds prior to the MMNm. Both MMNm and M60 sources were located more posteriorly in the left than in the right hemisphere.

  2. A Green Soundscape Index (GSI): The potential of assessing the perceived balance between natural sound and traffic noise.

    PubMed

    Kogan, Pablo; Arenas, Jorge P; Bermejo, Fernando; Hinalaf, María; Turra, Bruno

    2018-06-13

    Urban soundscapes are dynamic and complex multivariable environmental systems. Soundscapes can be organized into three main entities containing the multiple variables: Experienced Environment (EE), Acoustic Environment (AE), and Extra-Acoustic Environment (XE). This work applies a multidimensional and synchronic data-collecting methodology in eight urban environments in the city of Córdoba, Argentina. The EE was assessed by means of surveys, the AE by acoustic measurements and audio recordings, and the XE by photos, video, and complementary sources. In total, 39 measurement locations were considered, where data corresponding to 61 AE and 203 EE were collected. Multivariate analysis and GIS techniques were used for data processing. The types of sound sources perceived and their extents make up part of the collected variables that belong to the EE, i.e., traffic, people, natural sounds, and others. The sources explaining most of the variance were traffic noise and natural sounds. Thus, a Green Soundscape Index (GSI) is defined here as the ratio of the perceived extent of natural sounds to that of traffic noise. Collected data were divided into three ranges according to GSI value: 1) perceptual predominance of traffic noise, 2) balanced perception, and 3) perceptual predominance of natural sounds. For each group, three additional variables from the EE and three from the AE were examined, which showed significant differences, especially between ranges 1 and 2 versus range 3. These results confirm the key role of perceiving natural sounds in a town environment and also support the proposal of the GSI as a valuable indicator for classifying urban soundscapes. In addition, the collected GSI-related data significantly help to assess the overall soundscape. It is noted that this proposed simple perceptual index not only allows one to assess and classify urban soundscapes but also contributes greatly toward a technique for separating environmental sound sources. Copyright © 2018 Elsevier B.V.
All rights reserved.
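    The index and its three perceptual ranges can be sketched as follows. The abstract does not state the survey scale or the numeric cut points between ranges, so the 5-point scale and thresholds below are assumptions for illustration only.

```python
def green_soundscape_index(natural_extent, traffic_extent):
    """GSI as defined in the abstract: the ratio of the perceived
    extent of natural sounds to that of traffic noise (here both are
    assumed to come from, e.g., a 1-5 survey rating scale)."""
    return natural_extent / traffic_extent

def gsi_class(gsi, low=0.8, high=1.2):
    """Illustrative three-way split mirroring the paper's ranges;
    the cut points are assumed, not taken from the study."""
    if gsi < low:
        return "traffic noise predominates"
    if gsi <= high:
        return "balanced perception"
    return "natural sounds predominate"

label = gsi_class(green_soundscape_index(4, 2))  # natural sounds predominate
```

    Because the index uses perceived extents rather than measured levels, it can be computed directly from the EE survey responses without any acoustic measurement.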

  3. New theory on the reverberation of rooms. [considering sound wave travel time

    NASA Technical Reports Server (NTRS)

    Pujolle, J.

    1974-01-01

    The inadequacy of the various theories which have been proposed for finding the reverberation time of rooms can be explained by an attempt to examine what might occur at a listening point when image sources of determined acoustic power are added to the actual source. The number and locations of the image sources are stipulated. The intensity of sound at the listening point can be calculated by means of approximations whose conditions for validity are given. This leads to the proposal of a new expression for the reverberation time, yielding results which fall between those obtained through use of the Eyring and Millington formulae; these results are made to depend on the shape of the room by means of a new definition of the mean free path.
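    For context, the Eyring and Millington formulae mentioned above, together with Sabine's classical estimate, can be sketched as follows; 0.161 s/m is the usual metric constant, and the room in the example is invented.

```python
import math

def rt_sabine(volume, areas, alphas):
    """Sabine reverberation time (s): RT = 0.161 V / sum(S_i * a_i)."""
    return 0.161 * volume / sum(s * a for s, a in zip(areas, alphas))

def rt_eyring(volume, areas, alphas):
    """Eyring: uses -S_total * ln(1 - mean absorption coefficient)."""
    s_tot = sum(areas)
    a_mean = sum(s * a for s, a in zip(areas, alphas)) / s_tot
    return 0.161 * volume / (-s_tot * math.log(1.0 - a_mean))

def rt_millington(volume, areas, alphas):
    """Millington: applies the logarithm surface by surface."""
    absorption = -sum(s * math.log(1.0 - a) for s, a in zip(areas, alphas))
    return 0.161 * volume / absorption

# A small invented room; Millington <= Eyring <= Sabine always holds,
# so the proposed expression falls inside the (Millington, Eyring) pair.
room = dict(volume=200.0, areas=[100.0, 100.0, 80.0], alphas=[0.1, 0.3, 0.6])
rts = (rt_millington(**room), rt_eyring(**room), rt_sabine(**room))
```

    The spread between the three estimates grows with the absorption coefficients and their variation across surfaces, which is exactly the regime where the choice of formula matters.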

  4. The Contribution of Head Movement to the Externalization and Internalization of Sounds

    PubMed Central

    Brimijoin, W. Owen; Boyd, Alan W.; Akeroyd, Michael A.

    2013-01-01

    Background When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free-field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation matches that of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization. Methodology/Principal Findings We performed two experiments: 1) Using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners’ head movements. 2) Using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses. Conclusions/Significance Head movements play a significant role in the externalization of sound sources. 
These findings imply tight integration between binaural cues and self motion cues and underscore the importance of self motion for spatial auditory perception. PMID:24312677

  5. Electric and kinematic structure of the Oklahoma mesoscale convective system of 7 June 1989

    NASA Technical Reports Server (NTRS)

    Hunter, Steven M.; Schur, Terry J.; Marshall, Thomas C.; Rust, W. D.

    1992-01-01

    Balloon soundings of electric field in Oklahoma mesoscale convective systems (MCS) were obtained by the National Severe Storms Laboratory in the spring of 1989. This study focuses on a sounding made at the rearward edge of an MCS stratiform rain area on 7 June 1989. Data from Doppler radars, a lightning ground-strike location system, satellite, and other sources are used to relate the mesoscale attributes of the MCS to the observed electric-field profile.

  6. Effects of multiple congruent cues on concurrent sound segregation during passive and active listening: an event-related potential (ERP) study.

    PubMed

    Kocsis, Zsuzsanna; Winkler, István; Szalárdy, Orsolya; Bendixen, Alexandra

    2014-07-01

    In two experiments, we assessed the effects of combining different cues of concurrent sound segregation on the object-related negativity (ORN) and the P400 event-related potential components. Participants were presented with sequences of complex tones, half of which contained some manipulation: one or two harmonic partials were mistuned, delayed, or presented from a different location than the rest. In separate conditions, one, two, or three of these manipulations were combined. Participants watched a silent movie (passive listening) or reported after each tone whether they perceived one or two concurrent sounds (active listening). ORN was found in almost all conditions except for location difference alone during passive listening. Combining several cues or manipulating more than one partial consistently led to sub-additive effects on the ORN amplitude. These results support the view that ORN reflects a combined, feature-unspecific assessment of the auditory system regarding the contribution of two sources to the incoming sound. Copyright © 2014 Elsevier B.V. All rights reserved.
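    A mistuned-partial stimulus of the kind used in these experiments can be sketched as below; the fundamental, the number of harmonics, and the 8% mistuning of the third partial are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def complex_tone(f0=440.0, n_harmonics=8, mistuned=None, dur=0.5, fs=44100):
    """Harmonic complex with selected partials mistuned.

    `mistuned` maps partial number -> fractional frequency shift; a
    sufficiently mistuned partial tends to segregate perceptually and
    be heard as a second, concurrent sound (the basis of the ORN
    paradigm described above)."""
    if mistuned is None:
        mistuned = {3: 0.08}  # shift the 3rd partial up by 8%
    t = np.arange(int(dur * fs)) / fs
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0 * (1.0 + mistuned.get(k, 0.0))
        tone += np.sin(2.0 * np.pi * f * t)
    return tone / n_harmonics  # normalize so |amplitude| <= 1

tone = complex_tone()
```

    Delaying a partial or presenting it from a different location, the other two cues combined in the study, would instead apply a per-partial time offset or route the partial to a separate channel.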

  7. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    PubMed

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.
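    The level-difference cue at the heart of this comparison can be sketched for the broadband case; the energy-ratio estimator below is a generic formulation, not the authors' per-location transfer-function analysis.

```python
import numpy as np

def broadband_ild_db(left, right, eps=1e-12):
    """Broadband interaural level difference in dB between left- and
    right-side recordings; positive values mean the left is louder."""
    p_left = np.mean(np.square(left)) + eps
    p_right = np.mean(np.square(right)) + eps
    return 10.0 * np.log10(p_left / p_right)

# A source off to the left, crudely modeled as a half-amplitude
# (about -6 dB) copy reaching the right-side microphone.
rng = np.random.default_rng(1)
sig = rng.standard_normal(4096)
ild = broadband_ild_db(sig, 0.5 * sig)  # about +6 dB
```

    Applied across source azimuths, the span of such ILD values is what distinguishes the ITE placement (larger range of cues) from the BTE and shoulder placements in the study.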

  8. Reciprocity-based experimental determination of dynamic forces and moments: A feasibility study

    NASA Technical Reports Server (NTRS)

    Ver, Istvan L.; Howe, Michael S.

    1994-01-01

    BBN Systems and Technologies has been tasked by the Georgia Tech Research Center to carry out Task Assignment No. 7 for the NASA Langley Research Center, to explore the feasibility of 'In-Situ Experimental Evaluation of the Source Strength of Complex Vibration Sources Utilizing Reciprocity.' The task was carried out under NASA Contract No. NAS1-19061. In flight it is not feasible to connect the vibration sources to their mounting points on the fuselage through force gauges to measure dynamic forces and moments directly. However, it is possible to measure the interior sound field or vibration response caused by these structureborne sound sources at many locations and invoke the principle of reciprocity to predict the dynamic forces and moments. The work carried out in the framework of Task 7 was directed at exploring the feasibility of reciprocity-based measurements of vibration forces and moments.

  9. Evidence for distinct human auditory cortex regions for sound location versus identity processing

    PubMed Central

    Ahveninen, Jyrki; Huang, Samantha; Nummenmaa, Aapo; Belliveau, John W.; Hung, An-Yi; Jääskeläinen, Iiro P.; Rauschecker, Josef P.; Rossi, Stephanie; Tiitinen, Hannu; Raij, Tommi

    2014-01-01

    Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC. PMID:24121634

  10. Sound source localization inspired by the ears of the Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Kuntzman, Michael L.; Hall, Neal A.

    2014-07-01

    The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is remarkable because the fly's hearing mechanism spans only 1.5 mm, which is 50× smaller than the wavelength of the sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. It has been discovered that evolution has empowered the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of this special fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.

  11. Sound propagation from a ridge wind turbine across a valley.

    PubMed

    Van Renterghem, Timothy

    2017-04-13

    Sound propagation outdoors can be strongly affected by ground topography. The existence of hills and valleys between a source and receiver can lead to the shielding or focusing of sound waves. Such effects can result in significant variations in received sound levels. In addition, wind speed and air temperature gradients in the atmospheric boundary layer also play an important role. All of the foregoing factors can become especially important for the case of wind turbines located on a ridge overlooking a valley. Ridges are often selected for wind turbines in order to increase their energy capture potential through the wind speed-up effects often experienced in such locations. In this paper, a hybrid calculation method is presented to model such a case, relying on an analytical solution for sound diffraction around an impedance cylinder and the conformal mapping (CM) Green's function parabolic equation (GFPE) technique. The various aspects of the model have been successfully validated against alternative prediction methods. Example calculations with this hybrid analytical-CM-GFPE model show the complex sound pressure level distribution across the valley and the effect of valley ground type. The proposed method has the potential to include the effect of refraction through the inclusion of complex wind and temperature fields, although this aspect has been highly simplified in the current simulations. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  12. Minke whale song, spacing, and acoustic communication on the Great Barrier Reef, Australia

    NASA Astrophysics Data System (ADS)

    Gedamke, Jason

    An inquisitive population of minke whales (Balaenoptera acutorostrata) that concentrates on the Great Barrier Reef during its suspected breeding season offered a unique opportunity to conduct a multi-faceted study of a little-known Balaenopteran species' acoustic behavior. Chapter one investigates whether the minke whale is the source of an unusual, complex, and stereotyped recorded sound, the "star-wars" vocalization. A hydrophone array was towed from a vessel to record sounds from circling whales for subsequent localization of sound sources. These acoustic locations were matched with shipboard and in-water observations of the minke whale, demonstrating that the minke whale was the source of this unusual sound. Spectral and temporal features of this sound and the source levels at which it is produced are described. The repetitive "star-wars" vocalization appears similar to the songs of other whale species and has characteristics consistent with reproductive advertisement displays. Chapter two investigates whether song (i.e., the "star-wars" vocalization) has a spacing function, through passive monitoring of singer spatial patterns with a moored five-sonobuoy array. Active song playback experiments to singers were also conducted to further test song function. A nearest-neighbor analysis and animated tracks of singer movements demonstrated that singers naturally maintain spatial separations between one another. In response to active song playbacks, singers generally moved away and repeated song more quickly, suggesting that song repetition interval may help regulate spatial interaction and singer separation. These results further indicate that the Great Barrier Reef may be an important reproductive habitat for this species. Chapter three investigates whether song is part of a potentially graded repertoire of acoustic signals. Utilizing both vessel-based recordings and remote recordings from the sonobuoy array, temporal and spectral features, source levels, and associated contextual data of recorded sounds were analyzed. Two categories of sound are described: (1) patterned song, regularly repeated in one of three patterns (slow, fast, and rapid-clustered repetition), and (2) non-patterned "social" sounds recorded from gregarious assemblages of whales. These discrete acoustic signals may comprise a graded system of communication (slow/fast song → rapid-clustered song → social sounds) that is related to the spacing between whales.

  13. Spatial and identity negative priming in audition: evidence of feature binding in auditory spatial memory.

    PubMed

    Mayr, Susanne; Buchner, Axel; Möller, Malte; Hauke, Robert

    2011-08-01

    Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: Participants were impaired on trials with identity-location mismatches between the prime distractor and probe target-that is, when either the sound was repeated but not the location or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: Responding was slowed down when the prime distractor sound was repeated as the probe target, but at another location; identity changes at the same location were not impaired. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.

  14. Selective attention to sound location or pitch studied with fMRI.

    PubMed

    Degerman, Alexander; Rinne, Teemu; Salmi, Juha; Salonen, Oili; Alho, Kimmo

    2006-03-10

    We used 3-T functional magnetic resonance imaging to compare the brain mechanisms underlying selective attention to sound location and pitch. In different tasks, the subjects (N = 10) attended to a designated sound location or pitch or to pictures presented on the screen. In the Attend Location conditions, the sound location varied randomly (left or right), while the pitch was kept constant (high or low). In the Attend Pitch conditions, sounds of randomly varying pitch (high or low) were presented at a constant location (left or right). Both attention to location and attention to pitch produced enhanced activity (in comparison with activation caused by the same sounds when attention was focused on the pictures) in widespread areas of the superior temporal cortex. Attention to either sound feature also activated prefrontal and inferior parietal cortical regions. These activations were stronger during attention to location than during attention to pitch. Attention to location but not to pitch produced a significant increase of activation in the premotor/supplementary motor cortices of both hemispheres and in the right prefrontal cortex, while no area showed activity specifically related to attention to pitch. The present results suggest some differences in the attentional selection of sounds on the basis of their location and pitch consistent with the suggested auditory "what" and "where" processing streams.

  15. Fuselage boundary-layer refraction of fan tones radiated from an installed turbofan aero-engine.

    PubMed

    Gaffney, James; McAlpine, Alan; Kingan, Michael J

    2017-03-01

    A distributed source model to predict fan tone noise levels of an installed turbofan aero-engine is extended to include the refraction effects caused by the fuselage boundary layer. The model is a simple representation of an installed turbofan, where fan tones are represented in terms of spinning modes radiated from a semi-infinite circular duct, and the aircraft's fuselage is represented by an infinitely long, rigid cylinder. The distributed source is a disk, formed by integrating infinitesimal volume sources located on the intake duct termination. The cylinder is located adjacent to the disk. There is uniform axial flow, aligned with the axis of the cylinder, everywhere except close to the cylinder where there is a constant thickness boundary layer. The aim is to predict the near-field acoustic pressure, and in particular, to predict the pressure on the cylindrical fuselage which is relevant to assess cabin noise. Thus no far-field approximations are included in the modelling. The effect of the boundary layer is quantified by calculating the area-averaged mean square pressure over the cylinder's surface with and without the boundary layer included in the prediction model. The sound propagation through the boundary layer is calculated by solving the Pridmore-Brown equation. Results from the theoretical method show that the boundary layer has a significant effect on the predicted sound pressure levels on the cylindrical fuselage, owing to sound radiation of fan tones from an installed turbofan aero-engine.

  16. Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.

    PubMed

    Rideout, Brendan P; Dosso, Stan E; Hannay, David E

    2013-09-01

    This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
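    The core of such arrival-time localization can be sketched as a nonlinear least-squares fit of source position and emission time to measured arrival times. The paper's full method is a linearized Bayesian inversion that additionally estimates receiver and environmental parameters with prior uncertainties; the sketch below omits all of that and uses an assumed receiver geometry and nominal sound speed, with a plain Gauss-Newton iteration:

```python
import numpy as np

C = 1480.0  # assumed nominal sound speed in water, m/s

def arrival_times(src, t0, rx):
    """Direct-path arrival times at receivers rx (N x 3, metres) for a source at src emitted at t0."""
    return t0 + np.linalg.norm(rx - src, axis=1) / C

def locate(rx, t_obs, guess, t0_guess, iters=50):
    """Gauss-Newton fit of source position and emission time to observed arrival times."""
    x = np.append(np.asarray(guess, float), t0_guess)  # unknowns [x, y, z, t0]
    for _ in range(iters):
        d = np.linalg.norm(rx - x[:3], axis=1)
        pred = x[3] + d / C
        # Jacobian of predicted times with respect to [x, y, z, t0]
        J = np.column_stack([(x[:3] - rx) / (C * d[:, None]), np.ones(len(rx))])
        x += np.linalg.lstsq(J, t_obs - pred, rcond=None)[0]
    return x

# Synthetic check: five receivers, a known source, exact (noise-free) arrival times.
rx = np.array([[0.0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 50], [60, 60, 10]])
true_src, true_t0 = np.array([40.0, 30.0, 20.0]), 0.2
est = locate(rx, arrival_times(true_src, true_t0, rx), guess=[10, 10, 5], t0_guess=0.0)
```

    With noisy data, the same residual structure feeds the Bayesian machinery: priors constrain receiver and environmental unknowns, and the posterior covariance supplies the location uncertainties.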

  17. Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model

    PubMed Central

    Dong, Junzi; Colburn, H. Steven

    2016-01-01

    In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. PMID:26866056

  18. Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model(1,2,3).

    PubMed

    Dong, Junzi; Colburn, H Steven; Sen, Kamal

    2016-01-01

    In multisource, "cocktail party" sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem.

  19. Spatial selective attention in a complex auditory environment such as polyphonic music.

    PubMed

    Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf

    2010-01-01

    To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.

  20. Clostridium perfringens in Long Island Sound sediments: An urban sedimentary record

    USGS Publications Warehouse

    Buchholtz ten Brink, Marilyn R.; Mecray, E.L.; Galvin, E.L.

    2000-01-01

    Clostridium perfringens is a conservative tracer and an indicator of sewage-derived pollution in the marine environment. The distribution of Clostridium perfringens spores was measured in sediments from Long Island Sound, USA, as part of a regional study designed to: (1) map the distribution of contaminated sediments; (2) determine transport and dispersal paths; (3) identify the locations of sediment and contaminant focusing; and (4) constrain predictive models. In 1996, sediment cores were collected at 58 stations, and surface sediments were collected at 219 locations throughout the Sound. Elevated concentrations of Clostridium perfringens in the sediments indicate that sewage pollution is present throughout Long Island Sound and has persisted for more than a century. Concentrations range from undetectable amounts to 15,000 spores/g dry sediment and are above background levels in the upper 30 cm at nearly all core locations. Sediment focusing strongly impacts the accumulation of Clostridium perfringens spores. Inventories in the cores range from 28 to 70,000 spores/cm2, and elevated concentrations can extend to depths of 50 cm. The steep gradients in Clostridium perfringens profiles in muddier cores contrast with concentrations that are generally constant with depth in sandier cores. Clostridium perfringens concentrations rarely decrease in the uppermost sediment, unlike those reported for metal contaminants. Concentrations in surface sediments are highest in the western end of the Sound, very low in the eastern region, and intermediate in the central part. This pattern reflects winnowing and focusing of Clostridium perfringens spores and fine-grained sediment by the hydrodynamic regime; however, the proximity of sewage sources to the westernmost Sound locally enhances the Clostridium perfringens signals.

  1. A study on locating the sonic source of sinusoidal magneto-acoustic signals using a vector method.

    PubMed

    Zhang, Shunqi; Zhou, Xiaoqing; Ma, Ren; Yin, Tao; Liu, Zhipeng

    2015-01-01

    Methods based on the magneto-acoustic effect are of great significance in studying the electrical imaging properties of biological tissues and currents. The commonly used continuous-wave method can only detect the current amplitude, not the sound source position. Although the pulse mode adopted in magneto-acoustic imaging can locate the sonic source, its low measurement accuracy and low SNR have limited its application. In this study, a vector method was used to solve and analyze the magneto-acoustic signal based on the continuous sine-wave mode. The study includes theoretical modeling of the vector method, simulations of the line model, and experiments with wire samples to analyze magneto-acoustic (MA) signal characteristics. The results showed that the amplitude and phase of the MA signal contain the location information of the sonic source, and that they obey the vector theory in the complex plane. This study lays a foundation for a new technique to locate sonic sources for biomedical imaging of tissue conductivity. It also aids the study of biological current detection and reconstruction based on the magneto-acoustic effect.

  2. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 5 2012-10-01 2012-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  3. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 5 2013-10-01 2013-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  4. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Location and operation of sound level measurement...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 shall be located at a...

  5. Action sounds update the mental representation of arm dimension: contributions of kinaesthesia and agency

    PubMed Central

    Tajadura-Jiménez, Ana; Tsakiris, Manos; Marquardt, Torsten; Bianchi-Berthouze, Nadia

    2015-01-01

    Auditory feedback accompanies almost all our actions, but its contribution to body-representation is understudied. It has recently been shown that the auditory distance of action sounds recalibrates perceived tactile distances on one's arm, suggesting that action sounds can change the mental representation of arm length. However, the question of which factors drive this recalibration remains open. In this study we investigate two of these factors: kinaesthesia and the sense of agency. Across two experiments, we asked participants to tap with their arm on a surface while extending their arm. We manipulated the tapping sounds to originate at double the distance of the tapping locations, as well as their synchrony with the action, which is known to affect feelings of agency over the sounds. Kinaesthetic cues were manipulated by having additional conditions in which participants did not displace their arm but kept tapping either close to (Experiment 1) or far from (Experiment 2) their body torso. Results show that both feelings of agency over the action sounds and kinaesthetic cues signalling arm displacement, in conditions where the sound source is displaced, are necessary to observe changes in perceived tactile distance on the arm. In particular, these cues resulted in tactile distances on the arm being perceived as smaller, compared with distances at a reference location. Moreover, our results provide the first evidence of consciously perceived changes in arm-representation evoked by action sounds and suggest that the observed changes in perceived tactile distance relate to experienced arm elongation. We discuss the observed effects in the context of forward internal models of sensorimotor integration. Our results add to these models by showing that predictions related to action sounds must fit with kinaesthetic cues in order for auditory inputs to change body-representation. PMID:26074843

  6. A unified approach for the spatial enhancement of sound

    NASA Astrophysics Data System (ADS)

    Choi, Joung-Woo; Jang, Ji-Ho; Kim, Yang-Hann

    2005-09-01

    This paper aims to control the sound field spatially, so that a desired or target acoustic variable is enhanced within the zone where a listener is located. This is somewhat analogous to having manipulators that can draw sounds to any place. It also means that one can see the controlled shape of the sound field in frequency or in real time: the former assures practical applicability, for example listening-zone control for music, while the latter provides a means of analyzing the sound field. With these considerations in mind, a unified approach is proposed that can enhance selected acoustic variables using multiple sources. Three kinds of acoustic variables, related to the magnitude and direction of the sound field, are formulated and enhanced. The first, which concerns the spatial control of acoustic potential energy, enables one to make a zone of loud sound over an area. Alternatively, one can control the directional characteristics of the sound field by controlling directional energy density, or enhance the magnitude and direction of sound at the same time by controlling acoustic intensity. Through various examples, it is shown that these acoustic variables can be controlled successfully by the proposed approach.
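    The potential-energy enhancement described here corresponds to what is often called acoustic brightness control: given a transfer matrix G from the control sources to M points in the listening zone, the zone-averaged potential energy is proportional to q^H R q with R = G^H G / M, and maximizing it under a unit source-effort constraint |q| = 1 selects the principal eigenvector of R. A minimal free-field sketch, with source and zone geometry invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 2.0 * np.pi * 500.0 / 343.0   # wavenumber at 500 Hz in air

sources = rng.uniform(-1.0, 1.0, (8, 3))   # 8 hypothetical control loudspeakers
zone = rng.uniform([2.0, -0.2, -0.2], [2.4, 0.2, 0.2], (20, 3))  # listening-zone points

def transfer(points, srcs):
    """Free-field monopole (point-source) transfer matrix."""
    d = np.linalg.norm(points[:, None, :] - srcs[None, :, :], axis=2)
    return np.exp(-1j * k * d) / (4.0 * np.pi * d)

G = transfer(zone, sources)
R = G.conj().T @ G / len(zone)   # spatial correlation matrix over the zone

# Maximizing q^H R q with |q| = 1 gives the principal eigenvector of R.
vals, vecs = np.linalg.eigh(R)
q_opt = vecs[:, -1]
q_uni = np.ones(len(sources), complex) / np.sqrt(len(sources))

energy = lambda q: np.real(q.conj() @ R @ q)
print(energy(q_opt) >= energy(q_uni))  # True: optimized drive never loses to uniform drive
```

    The paper's other two variables (directional energy density and acoustic intensity) lead to analogous constrained-maximization problems over the same source-strength vector q.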

  7. Stream ambient noise, spectrum and propagation of sounds in the goby Padogobius martensii: sound pressure and particle velocity.

    PubMed

    Lugli, Marco; Fine, Michael L

    2007-11-01

    The most sensitive hearing and peak frequencies of courtship calls of the stream goby, Padogobius martensii, fall within a quiet window at around 100 Hz in the ambient noise spectrum. Acoustic pressure was previously measured, although Padogobius likely responds to particle motion. In this study a combination pressure (p) and particle velocity (u) detector was utilized to describe the ambient noise of the habitat, the characteristics of the goby's sounds, and their attenuation with distance. The ambient noise (AN) spectrum is generally similar for p and u (including the quiet window at noisy locations), although the energy distribution of the u spectrum is shifted up by 50-100 Hz. The energy distribution of the goby's sounds is similar for the p and u spectra of the Tonal sound, whereas the pulse-train sound exhibits larger p-u differences. Transmission loss was high for both p and u: energy decays 6-10 dB per 10 cm, and the p/u ratio does not change with distance from the source in the nearfield. The measurement of particle velocity of stream AN and P. martensii sounds indicates that this species is well adapted to communicate acoustically in a complex, noisy, shallow-water environment.

  8. Integrated geophysical investigations for the delineation of source and subsurface structure associated with hydro-uranium anomaly: A case study from South Purulia Shear Zone (SPSZ), India

    NASA Astrophysics Data System (ADS)

    Sharma, S. P.; Biswas, A.

    2012-12-01

    South Purulia Shear Zone (SPSZ) is an important region for prospecting uranium mineralization. Geological studies and a hydro-uranium anomaly suggest the presence of a uranium deposit around Raghunathpur village, which lies about 8 km north of the SPSZ. However, detailed geophysical investigations have not been carried out in this region to investigate uranium mineralization. Since no surface signature of uranium mineralization is observed near the location, a deeper subsurface source is expected for the hydro-uranium anomaly. To delineate the subsurface structure and investigate the origin of the hydro-uranium anomaly present in the area, Vertical Electrical Sounding (VES) using the Schlumberger array and Gradient Resistivity Profiling (GRP) were performed at different locations along a profile perpendicular to the South Purulia Shear Zone. Apparent resistivity computed from the measured sounding data at various locations shows a continuously increasing trend; as a result, conventional apparent resistivity data are not able to detect the possible source of the hydro-uranium anomaly. An innovative approach that depicts the apparent conductivity in the subsurface is therefore applied, revealing a possible connection from the SPSZ to Raghunathpur. On the other hand, the resistivity profiling data suggest a low-resistivity zone which is also characterized by a low Self-Potential (SP) anomaly. Since the SPSZ is characterized as a source of uranium mineralization, the hydro-uranium anomaly at Raghunathpur is connected with the SPSZ. The conducting zone has been delineated from the SPSZ to Raghunathpur at deeper depths and could be uranium bearing. Since the location is also characterized by a low gravity and high magnetic anomaly, this conducting zone is likely to be a mineralized zone. Keywords: apparent resistivity; apparent conductivity; Self-Potential; uranium mineralization; shear zone; hydro-uranium anomaly.

  9. Human brain regions involved in recognizing environmental sounds.

    PubMed

    Lewis, James W; Wightman, Frederic L; Brefczynski, Julie A; Phinney, Raymond E; Binder, Jeffrey R; DeYoe, Edgar A

    2004-09-01

    To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere but also including strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.

  10. Model-free data analysis for source separation based on Non-Negative Matrix Factorization and k-means clustering (NMFk)

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Alexandrov, B.

    2014-12-01

    The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with a k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones records a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site, identify the sources, barometric-pressure and water-supply pumping effects, and estimate their impacts.
We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
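
    The factorization at the heart of this approach can be sketched with a toy example. The snippet below is a minimal NMF via multiplicative updates applied to two invented nonnegative "source" transients mixed at four hypothetical observation points; it is not the authors' NMFk code, which additionally runs many random restarts per candidate rank and clusters the resulting solutions with k-means to select the number of sources robustly.

```python
import numpy as np

def nmf(V, r, iters=2000, seed=0):
    """Basic NMF via multiplicative updates: V (m x t) ~= W (m x r) @ H (r x t)."""
    rng = np.random.default_rng(seed)
    m, t = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, t)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Two invented nonnegative source transients, observed at 4 points.
t = np.linspace(0.0, 1.0, 200)
s1 = np.exp(-5.0 * t)                     # e.g. a decaying pumping drawdown
s2 = 1.0 + np.sin(2.0 * np.pi * 3.0 * t)  # e.g. a periodic barometric signal
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.2], [0.3, 1.0], [0.7, 0.7], [0.1, 0.9]])  # mixing matrix
V = A @ S                                 # mixed records at the observation points

W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error at r=2: {err:.4f}")
```

    With the correct rank (r = 2 here) the nonnegative factors reconstruct the mixed records almost exactly; NMFk exploits the additional fact that solutions become reproducible across random restarts only at the correct rank.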

  11. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    PubMed Central

    Spiousas, Ignacio; Etchemendy, Pablo E.; Eguia, Manuel C.; Calcagno, Esteban R.; Abregú, Ezequiel; Vergara, Ramiro O.

    2017-01-01

    Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). 
The results obtained in this study show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it. PMID:28690556

  13. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    PubMed

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  14. Acoustic Green's function extraction in the ocean

    NASA Astrophysics Data System (ADS)

    Zang, Xiaoqin

    The acoustic Green's function (GF) is the key to understanding the acoustic properties of ocean environments. With knowledge of the acoustic GF, the physics of sound propagation, such as dispersion, can be analyzed; underwater communication over thousands of miles can be understood; physical properties of the ocean, including ocean temperature, ocean current speed, as well as seafloor bathymetry, can be investigated. Experimental methods of acoustic GF extraction can be categorized as active methods and passive methods. Active methods are based on employment of man-made sound sources. These active methods require less computational complexity and time, but may cause harm to marine mammals. Passive methods cost much less and do not harm marine mammals, but require more theoretical and computational work. Both methods have advantages and disadvantages that should be carefully tailored to fit the need of each specific environment and application. In this dissertation, we study one passive method, the noise interferometry method, and one active method, the inverse filter processing method, to achieve acoustic GF extraction in the ocean. The passive method of noise interferometry makes use of ambient noise to extract an approximation to the acoustic GF. In an environment with a diffusive distribution of sound sources, sound waves that pass through two hydrophones at two locations carry the information of the acoustic GF between these two locations; by listening to the long-term ambient noise signals and cross-correlating the noise data recorded at two locations, the acoustic GF emerges from the noise cross-correlation function (NCF); a coherent stack of many realizations of NCFs yields a good approximation to the acoustic GF between these two locations, with all the deterministic structures clearly exhibited in the waveform. 
To test the performance of noise interferometry in different types of ocean environments, two field experiments were performed and ambient noise data were collected in a 100-meter deep coastal ocean environment and a 600-meter deep ocean environment. In the coastal ocean environment, the collected noise data were processed by coherently stacking five days of cross-correlation functions between pairs of hydrophones separated by 5 km, 10 km and 15 km, respectively. NCF waveforms were modeled using the KRAKEN normal mode model, with the difference between the NCFs and the acoustic GFs quantified by a weighting function. Through waveform inversion of NCFs, an optimal geoacoustic model was obtained by minimizing the two-norm misfit between the simulation and the measurement. Using a simulated time-reversal mirror, the extracted GF was back propagated from the receiver location to the virtual source, and a strong focus was found in the vicinity of the source, which provides additional support for the optimality of the aforementioned geoacoustic model. With the extracted GF, dispersion in the experimental shallow-water environment was visualized in the time-frequency representation. Normal modes of GFs were separated using the time-warping transformation. By separating the modes in the frequency domain of the time-warped signal, we isolated modal arrivals and reconstructed the NCF by summing up the isolated modes, thereby significantly improving the signal-to-noise ratio of NCFs. Finally, these reconstructed NCFs were employed to estimate the depth-averaged current speed in the Florida Straits, based on an effective sound speed approximation. In the mid-deep ocean environment, the noise data were processed using the same noise interferometry method, but the obtained NCFs were not as good as those in the coastal ocean environment. Several likely reasons for the difference in noise interferometry performance were investigated and discussed. 
The first is the noise source composition, which differs between the spectrograms of the noise records in the two environments. The second is strong ocean-current variability, which can cause coherence loss and undermine the utility of coherent stacking. The third is the downward-refracting sound speed profile, which impedes strong coupling between near-surface noise sources and the near-bottom instruments. The active method of inverse filter processing was tested in a long-range deep-ocean environment. The high-power sound source, located near the sound channel axis, transmitted a pre-designed signal composed of a precursor signal and a communication signal. After traveling a distance of 1428.5 km in the north Pacific Ocean, the transmitted signal was detected by the receiver and processed using the inverse filter. The probe signal, which was composed of M sequences and was known at the receiver, was utilized for GF extraction in the inverse filter; the communication signal was then interpreted with the extracted GF. Despite a glitch in the length of the communication signal, the inverse filter processing method was shown to be effective for long-range, low-frequency, deep-ocean acoustic communication. (Abstract shortened by ProQuest.)
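
    The core of the passive method, cross-correlating long noise records at two receivers and coherently stacking, can be illustrated with a toy simulation. Every number below is invented (segment length, travel time, noise levels), and real NCF processing adds filtering, spectral whitening, and far longer records:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000                 # samples per noise segment (hypothetical)
delay = 50               # inter-receiver travel time, in samples (hypothetical)
stack = np.zeros(2 * n - 1)

for _ in range(200):     # coherently stack many noise realizations
    src = rng.standard_normal(n + delay)            # broadband ambient "source"
    a = src[delay:] + 0.5 * rng.standard_normal(n)  # nearer receiver, plus local noise
    b = src[:n] + 0.5 * rng.standard_normal(n)      # farther receiver: same field, delayed
    stack += np.correlate(a, b, mode="full")

travel = (n - 1) - np.argmax(stack)  # lag of the stacked NCF peak
print("recovered travel time (samples):", travel)
```

    The stacked cross-correlation peaks at the inter-receiver travel time; in the ocean experiments the same peak structure carries the modal arrivals of the GF between the two hydrophones.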

  15. USAF bioenvironmental noise data handbook. Volume 166: AF/M32T-1 tester, pressurized cabin leakage, aircraft

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-07-01

    Measured and extrapolated data define the bioacoustic environments produced by a gasoline engine driven cabin leakage tester operating outdoors on a concrete apron at normal rated conditions. Near field data are presented for 37 locations at a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise level, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 - 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.

  16. Volume I. Percussion Sextet. (original Composition). Volume II. The Simulation of Acoustical Space by Means of Physical Modeling.

    NASA Astrophysics Data System (ADS)

    Manzara, Leonard Charles

    1990-01-01

    The dissertation is in two parts: 1. Percussion Sextet. The Percussion Sextet is a one-movement musical composition approximately fifteen minutes in length. It is for six instrumentalists, each on a number of percussion instruments. The overriding formal problem was to construct a coherent and compelling structure which fuses a diversity of musical materials and textures into a dramatic whole. Particularly important is the synthesis of opposing tendencies contained in stochastic and deterministic processes: global textures versus motivic detail, and randomness versus total control. Several compositional techniques are employed, aided in part by artificial intelligence techniques programmed on a computer. Finally, the percussion ensemble is the ideal medium to realize the above processes since it encompasses a wide range of both pitched and unpitched timbres, and since a great variety of textures and densities can be created with a certain economy of means. 2. The simulation of acoustical space by means of physical modeling. This is a written report describing the research and development of a computer program which simulates the characteristics of acoustical space in two dimensions. With the computer program the user can simulate most conventional acoustical spaces, as well as those physically impossible to realize in the real world. The program simulates acoustical space by means of geometric modeling. This involves defining wall equations, phantom source points and wall diffusions, and then processing input files containing digital signals through the program, producing output files ready for digital-to-analog conversion. The user of the program is able to define wall locations and wall reflectivity and roughness characteristics, all of which can be changed over time. 
Sound source locations are also definable within the acoustical space, and these locations can be changed independently at any rate of speed. The sounds themselves are generated from any external sound synthesis program or appropriate sampling system. Finally, listener location and orientation are also user definable and dynamic in nature. A Receive-ReBroadcast (RRB) model is used to play back the sound and is definable from two to eight channels of sound. (Abstract shortened with permission of author.)
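
    The phantom-source (image-source) idea used for wall reflections can be sketched in a few lines. The geometry below is invented for illustration and assumes a perfectly rigid wall; the actual program adds wall reflectivity, roughness, and time-varying positions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0                   # m/s, in air

def image_source(src, wall_y):
    """Mirror a 2-D source position across a horizontal wall y = wall_y."""
    return np.array([src[0], 2.0 * wall_y - src[1]])

src = np.array([2.0, 1.5])               # invented source position (m)
listener = np.array([6.0, 1.0])          # invented listener position (m)
phantom = image_source(src, wall_y=0.0)  # phantom source mirrored below the floor

direct = np.linalg.norm(listener - src)
reflected = np.linalg.norm(listener - phantom)
delay_ms = 1000.0 * (reflected - direct) / SPEED_OF_SOUND
gain = direct / reflected                # 1/r spreading; wall diffusion would scale this
print(f"direct {direct:.2f} m, reflected {reflected:.2f} m, "
      f"extra delay {delay_ms:.2f} ms, relative gain {gain:.2f}")
```

    Each wall contributes such a phantom source (recursively, for higher-order reflections), and summing the delayed, attenuated copies yields the simulated room response.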

  17. Mathematically trivial control of sound using a parametric beam focusing source.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2011-01-01

    By exploiting a case usually regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). In a dipole model, where one monopole serves as the primary sound source and the other as a control sound source, the control effect for minimizing the total acoustic power depends on the distance between the two. When the distance becomes zero, the total acoustic power becomes null, hence nothing less than a trivial case. Because of practical constraints, it is difficult to place a control source close enough to a primary source. However, by projecting the sound beam of a parametric array loudspeaker onto the target sound source (the primary source), a virtual sound source may be created on the target, thereby enabling the collocation of the sources. To further ensure the feasibility of the trivial case, a PBFS is introduced to match the size of the two sources. The reflected sound wave of the PBFS, which is tantamount to the virtual sound source output, aims to suppress the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology.
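
    The "trivial case" the paper exploits can be checked numerically. For an ideal pair of equal-amplitude, opposite-phase monopoles separated by distance d, the total radiated power relative to one uncontrolled monopole is 2(1 - sin(kd)/kd), which vanishes as d approaches zero; the 500 Hz frequency below is an arbitrary choice for illustration:

```python
import numpy as np

k = 2.0 * np.pi * 500.0 / 343.0          # wavenumber at 500 Hz in air (1/m)
ratios = []
for d in [1.0, 0.1, 0.01, 0.001]:        # primary-to-control source separation (m)
    ratio = 2.0 * (1.0 - np.sinc(k * d / np.pi))  # np.sinc(x) = sin(pi*x)/(pi*x)
    ratios.append(ratio)
    print(f"d = {d:6.3f} m: radiated power relative to one monopole = {ratio:.6f}")
```

    Collocating the virtual source on the primary source via the PBFS is what drives the effective separation d, and hence the residual power, toward zero.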

  18. A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene

    PubMed Central

    Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling

    2016-01-01

    A flexible sound source is essential in a fully flexible system, and it is hard to integrate a conventional sound source based on a piezoelectric part into such a system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser-induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applied in consumer electronics, multi-media systems, and ultrasonic detection and imaging. PMID:28335239

  19. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Location and operation of sound level measurement...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations. (a) The microphone of a sound level measurement system that conforms to the rules in § 325.23 of this...

  20. A deterministic (non-stochastic) low frequency method for geoacoustic inversion.

    PubMed

    Tolstoy, A

    2010-06-01

    It is well known that multiple frequency sources are necessary for accurate geoacoustic inversion. This paper presents an inversion method which uses the low frequency (LF) spectrum only to estimate bottom properties even in the presence of expected errors in source location, phone depths, and ocean sound-speed profiles. Matched field processing (MFP) along a vertical array is used. The LF method first conducts an exhaustive search of the (five) parameter search space (sediment thickness, sound-speed at the top of the sediment layer, the sediment layer sound-speed gradient, the half-space sound-speed, and water depth) at 25 Hz and continues by retaining only the high MFP value parameter combinations. Next, frequency is slowly increased while again retaining only the high value combinations. At each stage of the process, only those parameter combinations which give high MFP values at all previous LF predictions are considered (an ever shrinking set). It is important to note that a complete search of each relevant parameter space seems to be necessary not only at multiple (sequential) frequencies but also at multiple ranges in order to eliminate sidelobes, i.e., false solutions. Even so, there are no mathematical guarantees that one final, unique "solution" will be found.
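
    The retention scheme can be sketched generically. In the toy version below, the matched-field processor is replaced by a stand-in objective peaked at invented "true" bottom parameters, and only three of the five parameters are searched; the real method evaluates replica pressure fields against vertical-array data at each frequency and range:

```python
import itertools
import numpy as np

# Stand-in for the matched-field processor: 1.0 means a perfect match.
# The invented "true" parameters play the role of the unknown bottom.
TRUE = {"thickness": 20.0, "c_top": 1550.0, "gradient": 1.5}

def mfp_value(p, f):
    mismatch = sum(((p[k] - TRUE[k]) / TRUE[k]) ** 2 for k in TRUE)
    return np.exp(-50.0 * f * mismatch)   # higher frequencies punish mismatch more

grid = [dict(zip(TRUE, v)) for v in itertools.product(
    np.linspace(10.0, 30.0, 5),       # sediment thickness (m)
    np.linspace(1500.0, 1600.0, 5),   # sound speed at sediment top (m/s)
    np.linspace(0.5, 2.5, 5))]        # sediment sound-speed gradient (1/s)

candidates = grid
for f in [25, 50, 75, 100]:           # step up in frequency, keeping only high values
    candidates = [p for p in candidates if mfp_value(p, f) > 0.9]
print(f"{len(grid)} combinations -> {len(candidates)} surviving all frequencies")
```

    Each frequency step shrinks the candidate set to combinations that scored well at every earlier frequency, which is the paper's mechanism for suppressing sidelobe (false) solutions.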

  1. Acoustic Modeling for Aqua Ventus I off Monhegan Island, ME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiting, Jonathan M.; Hanna, Luke A.; DeChello, Nicole L.

    2013-10-31

    The DeepCwind consortium, led by the University of Maine, was awarded funding under the US Department of Energy’s Offshore Wind Advanced Technology Demonstration Program to develop two floating offshore wind turbines in the Gulf of Maine equipped with Goldwind 6 MW direct drive turbines, as the Aqua Ventus I project. The Goldwind turbines have a hub height of 100 m. The turbines will be deployed in Maine State waters, approximately 2.9 miles off Monhegan Island; Monhegan Island is located roughly 10 miles off the coast of Maine. In order to site and permit the offshore turbines, the acoustic output must be evaluated to ensure that the sound will not disturb residents on Monhegan Island, nor input sufficient sound levels into the nearby ocean to disturb marine mammals. This initial assessment of the acoustic output focuses on the sound of the turbines in air by modeling the assumed sound source level, applying a sound propagation model, and taking into account the distance from shore.

  2. Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources

    DOEpatents

    Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA

    2007-03-13

    A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as animate sound sources such as the human voice, or from machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
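
    The transfer-function idea can be sketched with a toy numpy example: estimate H(f) from simultaneous excitation and output records, then recover the system's impulse response. The impulse response and signals below are invented, and the electromagnetic-sensor specifics of the patent are not modeled:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.0, 0.6, 0.3, -0.2, 0.1])   # hypothetical system impulse response
n, seg = 1 << 15, 256                           # record length, segment length

x = rng.standard_normal(n)                      # measured excitation signal
y = np.convolve(x, h_true, mode="full")[:n]     # measured acoustical output

# Welch-averaged H1 transfer-function estimate: H(f) = <Sxy> / <Sxx>
Sxx = np.zeros(seg // 2 + 1)
Sxy = np.zeros(seg // 2 + 1, dtype=complex)
for i in range(0, n - seg + 1, seg):
    X = np.fft.rfft(x[i:i + seg])
    Y = np.fft.rfft(y[i:i + seg])
    Sxx += (np.conj(X) * X).real
    Sxy += np.conj(X) * Y
H = Sxy / Sxx

h_est = np.fft.irfft(H, seg)[:len(h_true)]      # back to an impulse response
print("recovered impulse response:", np.round(h_est, 2))
```

    Once H is known, the response to any new excitation can be synthesized by filtering the excitation through H, or negated and replayed for cancellation, which is the pattern the patent describes.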

  3. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  4. The inferior colliculus encodes the Franssen auditory spatial illusion

    PubMed Central

    Rajala, Abigail Z.; Yan, Yonghe; Dent, Micheal L.; Populin, Luis C.

    2014-01-01

    Illusions are effective tools for the study of the neural mechanisms underlying perception because neural responses can be correlated to the physical properties of stimuli and the subject’s perceptions. The Franssen illusion (FI) is an auditory spatial illusion evoked by presenting a transient, abrupt tone and a slowly rising, sustained tone of the same frequency simultaneously on opposite sides of the subject. Perception of the FI consists of hearing a single sound, the sustained tone, on the side that the transient was presented. Both subcortical and cortical mechanisms for the FI have been proposed, but, to date, there is no direct evidence for either. The data show that humans and rhesus monkeys perceive the FI similarly. Recordings were taken from single units of the inferior colliculus in the monkey while they indicated the perceived location of sound sources with their gaze. The results show that the transient component of the Franssen stimulus, with a shorter first spike latency and higher discharge rate than the sustained tone, encodes the perception of sound location. Furthermore, the persistent erroneous perception of the sustained stimulus location is due to continued excitation of the same neurons, first activated by the transient, by the sustained stimulus without location information. These results demonstrate for the first time, on a trial-by-trial basis, a correlation between perception of an auditory spatial illusion and a subcortical physiological substrate. PMID:23899307

  5. Time synchronization and geoacoustic inversion using baleen whale sounds

    NASA Astrophysics Data System (ADS)

    Thode, Aaron; Gerstoft, Peter; Stokes, Dale; Noad, Mike; Burgess, William; Cato, Doug

    2005-09-01

    In 1996 matched-field processing (MFP) and geoacoustic inversion methods were used to invert for range, depth, and source levels of blue whale vocalizations [A. M. Thode, G. L. D'Spain, and W. A. Kuperman, J. Acoust. Soc. Am. 107, 1286-1300 (2000)]. Humpback whales also produce broadband sequences of sounds that contain significant energy from 50 Hz to over 1 kHz. In Oct. 2003 and 2004, samples of humpback whale song were collected on vertical and tilted arrays in 24-m-deep water in conjunction with the Humpback Acoustic Research Collaboration (HARC). The arrays consisted of autonomous recorders attached to a rope, and were time synchronized by extending standard geoacoustic inversion methods to invert for clock offset as well as whale location. The diffuse ambient noise background field was then used to correct for subsequent clock drift. Independent measurements of the local bathymetry and transmission loss were also obtained in the area. Preliminary results are presented for geoacoustic inversions of the ocean floor composition and humpback whale locations and source levels. [Work supported by ONR Ocean Acoustic Entry Level Faculty Award and Marine Mammals Program.]

  6. Automatic detection of unattended changes in room acoustics.

    PubMed

    Frey, Johannes Daniel; Wendt, Mike; Jacobsen, Thomas

    2015-01-01

    Previous research has shown that the human auditory system continuously monitors its acoustic environment, detecting a variety of irregularities (e.g., deviance from prior stimulation regularity in pitch, loudness, duration, and (perceived) sound source location). Detection of irregularities can be inferred from a component of the event-related brain potential (ERP) referred to as the mismatch negativity (MMN), even in conditions in which participants are instructed to ignore the auditory stimulation. The current study extends previous findings by demonstrating that auditory irregularities brought about by a change in room acoustics elicit an MMN in a passive oddball protocol (acoustic stimuli with differing room acoustics, but otherwise identical, were employed as standard and deviant stimuli), in which participants watched a fiction movie (silent, with subtitles). Only one out of 14 participants reported having become aware of changing room acoustics or sound source location; the majority reported no awareness of any changes in the auditory stimulation. Together, these findings suggest automatic monitoring of room acoustics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Basic experimental study of the coupling between flow instabilities and incident sound

    NASA Astrophysics Data System (ADS)

    Ahuja, K. K.

    1984-03-01

    Whether a solid trailing edge is required to produce efficient coupling between sound and instability waves in a shear layer was investigated. The differences found in the literature on the theoretical notions about receptivity, and a need to resolve them by way of well-planned experiments are discussed. Instability waves in the shear layer of a subsonic jet, excited by a point sound source located external to the jet, were first visualized using an ensemble averaging technique. Various means were adopted to shield the sound reaching the nozzle lip. It was found that the low frequency sound couples more efficiently at distances downstream of the nozzle. To substantiate the findings further, a supersonic screeching jet was tested such that it passed through a small opening in a baffle placed parallel to the exit plane. The measured feedback or screech frequencies and also the excited flow disturbances changed drastically on traversing the baffle axially thus providing a strong indication that a trailing edge is not necessary for efficient coupling between sound and flow.

  8. Basic experimental study of the coupling between flow instabilities and incident sound

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.

    1984-01-01

    Whether a solid trailing edge is required to produce efficient coupling between sound and instability waves in a shear layer was investigated. The differences found in the literature on theoretical notions of receptivity, and the need to resolve them by way of well-planned experiments, are discussed. Instability waves in the shear layer of a subsonic jet, excited by a point sound source located external to the jet, were first visualized using an ensemble averaging technique. Various means were adopted to shield the nozzle lip from the incident sound. It was found that the low frequency sound couples more efficiently at distances downstream of the nozzle. To substantiate the findings further, a supersonic screeching jet was tested such that it passed through a small opening in a baffle placed parallel to the exit plane. The measured feedback or screech frequencies and also the excited flow disturbances changed drastically as the baffle was traversed axially, thus providing a strong indication that a trailing edge is not necessary for efficient coupling between sound and flow.

  9. Effects of high combustion chamber pressure on rocket noise environment

    NASA Technical Reports Server (NTRS)

    Pao, S. P.

    1972-01-01

    The acoustical environment for a high combustion chamber pressure engine was examined in detail, using both conventional and advanced theoretical analysis. The influence of elevated chamber pressure on the rocket noise environment was established, based on increases in exit velocity and flame temperature and on changes in basic engine dimensions. Compared to large rocket engines of the same thrust, the overall sound power level is found to be 1.5 dB higher. The peak Strouhal number shifted about one octave lower, to a value near 0.01. Data on apparent sound source location and directivity patterns are also presented.
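As a rough illustration of how a peak Strouhal number maps to a peak frequency, the defining relation St = f·D/U can be inverted for f; the engine values below are assumed purely for illustration and are not taken from the report:

```python
def peak_frequency_hz(strouhal: float, exit_velocity_ms: float, exit_diameter_m: float) -> float:
    """Peak frequency implied by a Strouhal number St = f * D / U."""
    return strouhal * exit_velocity_ms / exit_diameter_m

# Hypothetical values: exit velocity 2500 m/s, exit diameter 2 m, peak St ~ 0.01.
f_peak = peak_frequency_hz(0.01, 2500.0, 2.0)  # -> 12.5 Hz
```

A peak St near 0.01 therefore places the spectral peak of a large engine at very low audio frequencies, consistent with the low-frequency character of rocket noise.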

  10. Survey of the Acoustic near Field of Three Nozzles at a Pressure Ratio of 30

    NASA Technical Reports Server (NTRS)

    Mull, Harold R.; Erickson, John C., Jr.

    1957-01-01

    The sound pressures radiating from the exhaust streams of two convergent-divergent and one convergent nozzle were measured. Exit diameters were 1.206 in. for the expanded nozzle and 0.625 in. for the convergent nozzle. The results are presented in a series of contour maps of overall and 1/3-octave-band sound pressures. The location of the source of the noise in each 1/3-octave band in the frequency range of 30 to 16,000 cps and the total power radiated were determined and compared with those of subsonic jets.

  11. Is low frequency ocean sound increasing globally?

    PubMed

    Miksis-Olds, Jennifer L; Nichols, Stephen M

    2016-01-01

    Low frequency sound has increased in the Northeast Pacific Ocean over the past 60 yr [Ross (1993) Acoust. Bull. 18, 5-8; (2005) IEEE J. Ocean. Eng. 30, 257-261; Andrew, Howe, Mercer, and Dzieciuch (2002) J. Acoust. Soc. Am. 129, 642-651; McDonald, Hildebrand, and Wiggins (2006) J. Acoust. Soc. Am. 120, 711-717; Chapman and Price (2011) J. Acoust. Soc. Am. 129, EL161-EL165] and in the Indian Ocean over the past decade [Miksis-Olds, Bradley, and Niu (2013) J. Acoust. Soc. Am. 134, 3464-3475]. More recently, Andrew, Howe, and Mercer's [(2011) J. Acoust. Soc. Am. 129, 642-651] observations in the Northeast Pacific show a level or slightly decreasing trend in low frequency noise. It remains unclear what the low frequency trends are in other regions of the world. In this work, data from the Comprehensive Nuclear-Test-Ban Treaty Organization International Monitoring System were used to examine the rate and magnitude of change in low frequency sound (5-115 Hz) over the past decade in the South Atlantic and Equatorial Pacific Oceans. The dominant source observed in the South Atlantic was seismic air gun signals, while shipping and biologic sources contributed more to the acoustic environment at the Equatorial Pacific location. Sound levels over the past 5-6 yr in the Equatorial Pacific have decreased. Decreases were also observed in the ambient sound floor in the South Atlantic Ocean. Based on these observations, it does not appear that low frequency sound levels are increasing globally.

  12. Examination of propeller sound production using large eddy simulation

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Kumar, Praveen; Mahesh, Krishnan

    2018-06-01

    The flow field of a five-bladed marine propeller operating at design condition, obtained using large eddy simulation, is used to calculate the resulting far-field sound. The results of three acoustic formulations are compared, and the effects of the underlying assumptions are quantified. The integral form of the Ffowcs-Williams and Hawkings (FW-H) equation is solved on the propeller surface, which is discretized into a collection of N radial strips. Further assumptions are made to reduce FW-H to a Curle acoustic analogy and a point-force dipole model. Results show that although the individual blades are strongly tonal in the rotor plane, the propeller is acoustically compact at low frequency and the tonal sound interferes destructively in the far field. The propeller is found to be acoustically compact for frequencies up to 100 times the rotation rate. The overall far-field acoustic signature is broadband. Locations of maximum sound occur along the axis of rotation, both upstream and downstream of the propeller. The propeller hub is found to be a source of significant sound to observers in the rotor plane, due to flow separation and interaction with the blade-root wakes. The majority of the propeller sound is generated by localized unsteadiness at the blade tip, which is caused by shedding of the tip vortex. Tonal blade sound is found to be caused by the periodic motion of the loaded blades. Turbulence created in the blade boundary layer is convected past the blade trailing edge, leading to generation of broadband noise along the blade. Acoustic energy is distributed among higher frequencies as local Reynolds number increases radially along the blades. Sound source correlation and spectra are examined in the context of noise modeling.

  13. Aeroacoustics of Flight Vehicles: Theory and Practice. Volume 2. Noise Control

    DTIC Science & Technology

    1991-08-01

    noisiness, Localization and Precedence The ability to determine the location of sound sources is one of the major benefits of having binaural hearing... binaural hearing is commonly called the Haas, or precedence, effect (ref. 16). This refers to the ability to hear as a single acoustic event the...propellers are operated at slightly different rpm values, beating interference between the two sources occurs, and the noise level in the cabin rises and

  14. The acoustical cues to sound location in the guinea pig (Cavia porcellus)

    PubMed Central

    Greene, Nathanial T; Anbuhl, Kelsey L; Williams, Whitney; Tollin, Daniel J.

    2014-01-01

    There are three main acoustical cues to sound location, each attributable to space- and frequency-dependent filtering of the propagating sound waves by the outer ears, head, and torso: interaural differences in time (ITD) and level (ILD) as well as monaural spectral shape cues. While the guinea pig has been a common model for studying the anatomy, physiology, and behavior of binaural and spatial hearing, extensive measurements of their available acoustical cues are lacking. Here, these cues were determined from directional transfer functions (DTFs), the directional components of the head-related transfer functions, for eleven adult guinea pigs. In the frontal hemisphere, monaural spectral notches were present for frequencies from ~10 to 20 kHz; in general, the notch frequency increased with increasing sound source elevation and in azimuth toward the contralateral ear. The maximum ITDs calculated from low-pass filtered (2 kHz cutoff frequency) DTFs were ~250 µs, whereas the maximum ITD measured with low frequency tone pips was over 320 µs. A spherical head model underestimates ITD magnitude under normal conditions, but closely approximates values when the pinnae were removed. ILDs strongly depended on location and frequency; maximum ILDs were < 10 dB for frequencies < 4 kHz and were as large as 40 dB for frequencies > 10 kHz. Removal of the pinna reduced the depth and sharpness of spectral notches, altered the acoustical axis, and reduced the acoustical gain, ITDs, and ILDs; however, spectral shape features and acoustical gain were not completely eliminated, suggesting a substantial contribution of the head and torso in altering the sounds present at the tympanic membrane. PMID:25051197
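The spherical head model against which the measured ITDs are compared is commonly taken to be the Woodworth approximation, ITD = (a/c)(θ + sin θ). A minimal sketch, with an assumed (purely illustrative) effective head radius for the guinea pig:

```python
import math

def woodworth_itd(azimuth_deg: float, head_radius_m: float, c: float = 343.0) -> float:
    """Spherical-head (Woodworth) ITD in seconds:
    ITD = (a / c) * (theta + sin(theta)), azimuth theta from the median plane."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# Assumed effective head radius of ~2 cm (hypothetical value, for illustration):
itd_us = woodworth_itd(90.0, 0.02) * 1e6   # ~150 microseconds at 90 degrees
```

A prediction of roughly 150 µs at the side, against measured maxima above 320 µs, is consistent with the abstract's statement that the bare spherical head model underestimates the ITDs available with intact pinnae.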

  15. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
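The runtime step described above, computing the total field as a weighted sum of precomputed spherical harmonic (SH) sound fields, can be sketched schematically. The precomputed pressures and weights below are made-up placeholders, not values from the system described:

```python
# Hypothetical precomputed pressures at one listener position, one per
# elementary SH source (orders 0 and 1 only), e.g. from an offline wave solver.
precomputed = {
    (0, 0):  1.00 + 0.10j,
    (1, -1): 0.20 - 0.05j,
    (1, 0):  0.35 + 0.00j,
    (1, 1):  0.15 + 0.02j,
}

def field_at_listener(sh_weights: dict) -> complex:
    """Runtime step: total field = weighted sum of precomputed SH fields,
    where the weights come from the SH decomposition of the source directivity."""
    return sum(w * precomputed[lm] for lm, w in sh_weights.items())

# Example: a frontal directivity dominated by the (0,0) and (1,0) terms.
p = field_at_listener({(0, 0): 1.0, (1, 0): 0.8})
```

Because only the weights change at runtime, a time-varying directivity costs one SH projection plus a small weighted sum per listener position, which is what makes the approach interactive.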

  16. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiations in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method on source identification and sound radiation modeling.
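The structure of the equivalent-source inversion can be illustrated with a simple stand-in. The paper solves iteratively in the time domain using the convective Green's function; the sketch below instead uses a random propagation matrix and a one-shot Tikhonov-regularized least squares, purely to show the shape of the problem (measured pressures = propagation matrix × equivalent source strengths):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical propagation matrix G mapping equivalent source strengths to
# microphone pressures; in the paper this role is played by the convective
# time-domain Green's function.
n_mics, n_sources = 32, 8
G = rng.standard_normal((n_mics, n_sources))
q_true = rng.standard_normal(n_sources)
p_meas = G @ q_true + 0.01 * rng.standard_normal(n_mics)  # noisy measurement

# Regularized least squares for the equivalent source strengths
# (a non-iterative stand-in for the paper's iterative, time-averaged solver).
lam = 1e-3
q_est = np.linalg.solve(G.T @ G + lam * np.eye(n_sources), G.T @ p_meas)
```

Once the strengths q_est are known, the same G (or a static-medium Green's function) can be reapplied to model radiation with or without convection, which is the key property the abstract highlights.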

  17. Changes in room acoustics elicit a Mismatch Negativity in the absence of overall interaural intensity differences.

    PubMed

    Frey, Johannes Daniel; Wendt, Mike; Löw, Andreas; Möller, Stephan; Zölzer, Udo; Jacobsen, Thomas

    2017-02-15

    Changes in room acoustics provide important clues about the environment of sound source-perceiver systems, for example, by indicating changes in the reflecting characteristics of surrounding objects. To study the detection of auditory irregularities brought about by a change in room acoustics, a passive oddball protocol with participants watching a movie was applied in this study. Acoustic stimuli were presented via headphones. Standards and deviants were created by modelling rooms of different sizes, keeping the values of the basic acoustic dimensions (e.g., frequency, duration, sound pressure, and sound source location) as constant as possible. In the first experiment, each standard and deviant stimulus consisted of sequences of three short sounds derived from sinusoidal tones, resulting in three onsets during each stimulus. Deviant stimuli elicited a Mismatch Negativity (MMN) as well as two additional negative deflections corresponding to the three onset peaks. In the second experiment, only one sound was used; the stimuli were otherwise identical to the ones used in the first experiment. Again, an MMN was observed, followed by an additional negative deflection. These results provide further support for the hypothesis of automatic detection of unattended changes in room acoustics, extending previous work by demonstrating the elicitation of an MMN by changes in room acoustics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations

    PubMed Central

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. PMID:26545618
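The opponent-channel read-out, and its robustness to sound level, can be sketched with two schematic hemifield-tuned channels. The sigmoid tuning curves and the normalized-difference decoder below are illustrative assumptions, not the fMRI analysis itself:

```python
import math

def channel_response(azimuth_deg: float, preferred_sign: int, gain: float = 1.0) -> float:
    """Schematic broad hemifield tuning: a sigmoid of azimuth
    (+1 = right-hemifield preference, -1 = left)."""
    return gain / (1.0 + math.exp(-preferred_sign * azimuth_deg / 30.0))

def decode_azimuth_index(azimuth_deg: float, gain: float) -> float:
    """Opponent read-out: normalized difference of the two channels.
    The common gain (sound level) cancels in the ratio."""
    r = channel_response(azimuth_deg, +1, gain)
    l = channel_response(azimuth_deg, -1, gain)
    return (r - l) / (r + l)

# The same azimuth read out at two very different sound levels:
low = decode_azimuth_index(45.0, gain=1.0)
high = decode_azimuth_index(45.0, gain=10.0)   # equal to `low`
```

Because level multiplies both channels equally, any read-out based on their relative activity is level invariant even though each channel's tuning, taken alone, is not.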

  19. Promoting the perception of two and three concurrent sound objects: An event-related potential study.

    PubMed

    Kocsis, Zsuzsanna; Winkler, István; Bendixen, Alexandra; Alain, Claude

    2016-09-01

    The auditory environment typically comprises several simultaneously active sound sources. In contrast to the perceptual segregation of two concurrent sounds, the perception of three simultaneous sound objects has not yet been studied systematically. We conducted two experiments in which participants were presented with complex sounds containing sound segregation cues (mistuning, onset asynchrony, differences in frequency or amplitude modulation or in sound location), which were set up to promote the perceptual organization of the tonal elements into one, two, or three concurrent sounds. In Experiment 1, listeners indicated whether they heard one, two, or three concurrent sounds. In Experiment 2, participants watched a silent subtitled movie while EEG was recorded to extract the object-related negativity (ORN) component of the event-related potential. Listeners predominantly reported hearing two sounds when the segregation promoting manipulations were applied to the same tonal element. When two different tonal elements received manipulations promoting them to be heard as separate auditory objects, participants reported hearing two and three concurrent sound objects with equal probability. The ORN was elicited in most conditions; sounds that included the amplitude- or the frequency-modulation cue generated the smallest ORN amplitudes. Manipulating two different tonal elements yielded numerically and often significantly smaller ORNs than the sum of the ORNs elicited when the same cues were applied to a single tonal element. These results suggest that the ORN reflects the presence of multiple concurrent sounds, but not their number. The ORN results are compatible with the horse-race principle of combining different cues of concurrent sound segregation. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Numerical simulation of the SOFIA flowfield

    NASA Technical Reports Server (NTRS)

    Klotz, Stephen P.

    1994-01-01

    This report provides a concise summary of the contribution of computational fluid dynamics (CFD) to the SOFIA (Stratospheric Observatory for Infrared Astronomy) project at NASA Ames and presents results obtained from closed- and open-cavity SOFIA simulations. The aircraft platform is a Boeing 747SP and these are the first SOFIA simulations run with the aircraft empennage included in the geometry database. In the open-cavity run the telescope is mounted behind the wings. Results suggest that the cavity markedly influences the mean pressure distribution on empennage surfaces and that 110-140 decibel (db) sound pressure levels are typical in the cavity and on the horizontal and vertical stabilizers. A strong source of sound was found to exist on the rim of the open telescope cavity. The presence of this source suggests that additional design work needs to be performed in order to minimize the sound emanating from that location. A fluid dynamic analysis of the engine plumes is also contained in this report. The analysis was part of an effort to quantify the degradation of telescope performance resulting from the proximity of the port engine exhaust plumes to the open telescope bay.

  1. A method for evaluating the relation between sound source segregation and masking

    PubMed Central

    Lutfi, Robert A.; Liu, Ching-Ju

    2011-01-01

    Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979

  2. Influence of sound source location on the behavior and physiology of the precedence effect in cats.

    PubMed

    Dent, Micheal L; Tollin, Daniel J; Yin, Tom C T

    2009-08-01

    Psychophysical experiments on the precedence effect (PE) in cats have shown that they localize pairs of auditory stimuli presented from different locations in space based on the spatial position of the stimuli and the interstimulus delay (ISD) between the stimuli in a manner similar to humans. Cats exhibit localization dominance for pairs of transient stimuli with |ISDs| from approximately 0.4 to 10 ms, summing localization for |ISDs| < 0.4 ms and breakdown of fusion for |ISDs| > 10 ms, which is the approximate echo threshold. The neural correlates to the PE have been described in both anesthetized and unanesthetized animals at many levels from auditory nerve to cortex. Single-unit recordings from the inferior colliculus (IC) and auditory cortex of cats demonstrate that neurons respond to both lead and lag sounds at ISDs above behavioral echo thresholds, but the response to the lag is reduced at shorter ISDs, consistent with localization dominance. Here the influence of the relative locations of the leading and lagging sources on the PE was measured behaviorally in a psychophysical task and physiologically in the IC of awake behaving cats. At all configurations of lead-lag stimulus locations, the cats behaviorally exhibited summing localization, localization dominance, and breakdown of fusion. Recordings from the IC of awake behaving cats show neural responses paralleling behavioral measurements. Both behavioral and physiological results suggest systematically shorter echo thresholds when stimuli are further apart in space.
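The three perceptual regimes and their ISD boundaries, as reported for cats in this abstract, can be summarized in a small helper. The fixed 10 ms echo threshold is a simplification, since the study found that the threshold shortens as the two sources move farther apart:

```python
def precedence_regime(isd_ms: float, echo_threshold_ms: float = 10.0) -> str:
    """Classify the percept of a lead-lag click pair by inter-stimulus delay
    (boundary values taken from the abstract above)."""
    d = abs(isd_ms)
    if d < 0.4:
        return "summing localization"      # fused; heard at an intermediate location
    elif d <= echo_threshold_ms:
        return "localization dominance"    # fused; heard at the lead location
    else:
        return "breakdown of fusion"       # lead and lag heard as separate events
```

For example, a 5 ms delay falls in the localization-dominance range, which is where the reduced lag responses in the IC recordings parallel the behavior.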

  3. Influence of Sound Source Location on the Behavior and Physiology of the Precedence Effect in Cats

    PubMed Central

    Dent, Micheal L.; Tollin, Daniel J.; Yin, Tom C. T.

    2009-01-01

    Psychophysical experiments on the precedence effect (PE) in cats have shown that they localize pairs of auditory stimuli presented from different locations in space based on the spatial position of the stimuli and the interstimulus delay (ISD) between the stimuli in a manner similar to humans. Cats exhibit localization dominance for pairs of transient stimuli with |ISDs| from ∼0.4 to 10 ms, summing localization for |ISDs| < 0.4 ms and breakdown of fusion for |ISDs| > 10 ms, which is the approximate echo threshold. The neural correlates to the PE have been described in both anesthetized and unanesthetized animals at many levels from auditory nerve to cortex. Single-unit recordings from the inferior colliculus (IC) and auditory cortex of cats demonstrate that neurons respond to both lead and lag sounds at ISDs above behavioral echo thresholds, but the response to the lag is reduced at shorter ISDs, consistent with localization dominance. Here the influence of the relative locations of the leading and lagging sources on the PE was measured behaviorally in a psychophysical task and physiologically in the IC of awake behaving cats. At all configurations of lead-lag stimulus locations, the cats behaviorally exhibited summing localization, localization dominance, and breakdown of fusion. Recordings from the IC of awake behaving cats show neural responses paralleling behavioral measurements. Both behavioral and physiological results suggest systematically shorter echo thresholds when stimuli are further apart in space. PMID:19439668

  4. Selective attention in normal and impaired hearing.

    PubMed

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  5. Acoustics of Jet Surface Interaction - Scrubbing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas

    2014-01-01

    Concepts envisioned for the future of civil air transport consist of unconventional propulsion systems in close proximity to the structure or embedded in the airframe. While such integrated systems are intended to shield noise from the community, they also introduce new sources of sound. Sound generation due to interaction of a jet flow past a nearby solid surface is investigated here using the generalized acoustic analogy theory. The analysis applies to the boundary layer noise generated at and near a wall, and excludes the scattered noise component that is produced at the leading or the trailing edge. While compressibility effects are relatively unimportant at very low Mach numbers, frictional heat generation and thermal gradient normal to the surface could play important roles in generation and propagation of sound in high speed jets of practical interest. A general expression is given for the spectral density of the far field sound as governed by the variable density Pridmore-Brown equation. The propagation Green's function is solved numerically for a high aspect-ratio rectangular jet starting with the boundary conditions on the surface and subject to specified mean velocity and temperature profiles between the surface and the observer. It is shown that the magnitude of the Green's function decreases with increasing source frequency and/or jet temperature. The phase remains constant for a rigid surface, but varies with source location when subject to an impedance type boundary condition. The Green's function in the absence of the surface, and flight effects, are also investigated.

  6. Selective Attention in Normal and Impaired Hearing

    PubMed Central

    Shinn-Cunningham, Barbara G.; Best, Virginia

    2008-01-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202

  7. Acoustics of Jet Surface Interaction-Scrubbing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas

    2014-01-01

    Concepts envisioned for the future of civil air transport consist of unconventional propulsion systems in close proximity to the structure or embedded in the airframe. While such integrated systems are intended to shield noise from the community, they also introduce new sources of sound. Sound generation due to interaction of a jet flow past a nearby solid surface is investigated here using the generalized acoustic analogy theory. The analysis applies to the boundary layer noise generated at and near a wall, and excludes the scattered noise component that is produced at the leading or the trailing edge. While compressibility effects are relatively unimportant at very low Mach numbers, frictional heat generation and thermal gradient normal to the surface could play important roles in generation and propagation of sound in high speed jets of practical interest. A general expression is given for the spectral density of the far field sound as governed by the variable density Pridmore-Brown equation. The propagation Green's function is solved numerically for a high aspect-ratio rectangular jet starting with the boundary conditions on the surface and subject to specified mean velocity and temperature profiles between the surface and the observer. It is shown that the magnitude of the Green's function decreases with increasing source frequency and/or jet temperature. The phase remains constant for a rigid surface, but varies with source location when subject to an impedance type boundary condition. The Green's function in the absence of the surface, and flight effects, are also investigated.

  8. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
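A sinusoidally amplitude-modulated (SAM) tone of the kind used in these conditions can be generated directly from its definition, s(t) = (1 + m·sin(2πf_m t))·sin(2πf_c t); the sampling rate below is an illustrative choice, not a detail from the study:

```python
import numpy as np

def sam_tone(fc: float, fm: float, dur: float, fs: int = 44100, depth: float = 1.0):
    """Sinusoidally amplitude-modulated tone:
    s(t) = (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    t = np.arange(int(dur * fs)) / fs
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# A 4 kHz carrier modulated at 64 Hz for 500 ms, matching two of the
# parameter values mentioned in the abstract.
s = sam_tone(4000.0, 64.0, 0.5)
```

The envelope rate f_m is what carries the interaural timing information in headphone studies; the finding above is that imposing it has little effect on free-field localization accuracy.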

  9. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.

    2003-01-01

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
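The transfer-function step described here (relating a monitored excitation to the monitored acoustical output) can be sketched with a standard H1 cross-spectral estimate, H(f) = Sxy(f)/Sxx(f). The simulated two-tap system below is a made-up example, not the patent's procedure for electromagnetic-sensor data:

```python
import numpy as np

def estimate_transfer_function(x, y, nseg=256):
    """H1 estimate H(f) = Sxy(f) / Sxx(f), averaged over non-overlapping
    segments (a minimal Welch-style sketch, no windowing or overlap)."""
    nblocks = len(x) // nseg
    Sxy = np.zeros(nseg, dtype=complex)
    Sxx = np.zeros(nseg)
    for k in range(nblocks):
        X = np.fft.fft(x[k * nseg:(k + 1) * nseg])
        Y = np.fft.fft(y[k * nseg:(k + 1) * nseg])
        Sxy += np.conj(X) * Y
        Sxx += np.abs(X) ** 2
    return Sxy / Sxx

# Simulated excitation passed through a known two-tap system:
# y[n] = 0.5 x[n] + 0.25 x[n-1], so H(f) = 0.5 + 0.25 exp(-j*2*pi*f).
rng = np.random.default_rng(1)
x = rng.standard_normal(256 * 64)
y = 0.5 * x + 0.25 * np.concatenate(([0.0], x[:-1]))
H = estimate_transfer_function(x, y)
```

Given such an H, the output for any new excitation can be synthesized by filtering, or an anti-phase signal can be generated for cancellation, which is the use the patent describes.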

  10. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C

    2013-05-21

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  11. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.

    2007-10-16

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  12. USAF Bioenvironmental Noise Data Handbook. Volume 165: MC-1 heater, duct type, portable

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-06-01

    The MC-1 heater is a gasoline-motor driven, portable ground heater used primarily for cockpit and cabin temperature control. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at normal rated conditions. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise levels, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 - 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.

  13. USAF bioenvironmental noise data handbook. Volume 158: F-106A aircraft, near and far-field noise

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-05-01

    The USAF F-106A is a single seat, all-weather fighter/interceptor aircraft powered by a J75-P-17 turbojet engine. This report provides measured and extrapolated data defining the bioacoustic environments produced by this aircraft operating on a concrete runup pad for five engine-power conditions. Near-field data are reported for five locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise levels, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 19 locations are normalized to standard meteorological conditions and extrapolated from 75 - 8000 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.

  14. USAF bioenvironmental noise data handbook. Volume 163: GPC-28 compressor

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-05-01

    The GPC-28 is a gasoline engine-driven compressor with a 120 volt 60 Hz generator used for general purpose maintenance. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at a normal rated condition. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference level, perceived noise level, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 36 locations are normalized to standard meteorological conditions and extrapolated from 10 - 1600 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source.

  15. Investigation of Acoustical Shielding by a Wedge-Shaped Airframe

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Clark, Lorenzo R.; Dunn, Mark H.; Tweed, John

    2006-01-01

    Experiments on a scale model of an advanced unconventional subsonic transport concept, the Blended Wing Body (BWB), have demonstrated significant shielding of inlet-radiated noise. A computational model of the shielding mechanism has been developed using a combination of boundary integral equation method (BIEM) and equivalent source method (ESM). The computation models the incident sound from a point source in a nacelle and determines the scattered sound field. In this way the sound fields with and without the airfoil can be estimated for comparison to experiment. An experimental test bed using a simplified wedge-shape airfoil and a broadband point noise source in a simulated nacelle has been developed for the purposes of verifying the analytical model and also to study the effect of engine nacelle placement on shielding. The experimental study is conducted in the Anechoic Noise Research Facility at NASA Langley Research Center. The analytic and experimental results are compared at 6300 and 8000 Hz. These frequencies correspond to approximately 150 Hz on the full scale aircraft. Comparison between the experimental and analytic results is quite good, not only for the noise scattering by the airframe, but also for the total sound pressure in the far field. Many of the details of the sound field that the analytic model predicts are seen or indicated in the experiment, within the spatial resolution limitations of the experiment. Changing nacelle location produces comparable changes in noise shielding contours evaluated analytically and experimentally. Future work in the project will be enhancement of the analytic model to extend the analysis to higher frequencies corresponding to the blade passage frequency of the high bypass ratio ducted fan engines that are expected to power the BWB.

  16. Investigation of Acoustical Shielding by a Wedge-Shaped Airframe

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Clark, Lorenzo R.; Dunn, Mark H.; Tweed, John

    2004-01-01

    Experiments on a scale model of an advanced unconventional subsonic transport concept, the Blended Wing Body (BWB), have demonstrated significant shielding of inlet-radiated noise. A computational model of the shielding mechanism has been developed using a combination of boundary integral equation method (BIEM) and equivalent source method (ESM). The computation models the incident sound from a point source in a nacelle and determines the scattered sound field. In this way the sound fields with and without the airfoil can be estimated for comparison to experiment. An experimental test bed using a simplified wedge-shape airfoil and a broadband point noise source in a simulated nacelle has been developed for the purposes of verifying the analytical model and also to study the effect of engine nacelle placement on shielding. The experimental study is conducted in the Anechoic Noise Research Facility at NASA Langley Research Center. The analytic and experimental results are compared at 6300 and 8000 Hz. These frequencies correspond to approximately 150 Hz on the full scale aircraft. Comparison between the experimental and analytic results is quite good, not only for the noise scattering by the airframe, but also for the total sound pressure in the far field. Many of the details of the sound field that the analytic model predicts are seen or indicated in the experiment, within the spatial resolution limitations of the experiment. Changing nacelle location produces comparable changes in noise shielding contours evaluated analytically and experimentally. Future work in the project will be enhancement of the analytic model to extend the analysis to higher frequencies corresponding to the blade passage frequency of the high bypass ratio ducted fan engines that are expected to power the BWB.

  17. USAF bioenvironmental noise data handbook. Volume 162: MD-4MO generator set

    NASA Astrophysics Data System (ADS)

    Rau, T. H.

    1982-05-01

    The MD-4MO generator set is an electric motor-driven source of electrical power used primarily for the starting of aircraft, and for ground maintenance. This report provides measured and extrapolated data defining the bioacoustic environments produced by this unit operating outdoors on a concrete apron at a normal rated condition. Near-field data are reported for 37 locations in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference levels, perceived noise levels, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors.

  18. Drive-by large-region acoustic noise-source mapping via sparse beamforming tomography.

    PubMed

    Tuna, Cagdas; Zhao, Shengkui; Nguyen, Thi Ngoc Tho; Jones, Douglas L

    2016-10-01

    Environmental noise is a risk factor for human physical and mental health, demanding an efficient large-scale noise-monitoring scheme. The current technology, however, involves extensive sound pressure level (SPL) measurements at a dense grid of locations, making it impractical on a city-wide scale. This paper presents an alternative approach using a microphone array mounted on a moving vehicle to generate two-dimensional acoustic tomographic maps that yield the locations and SPLs of the noise-sources sparsely distributed in the neighborhood traveled by the vehicle. The far-field frequency-domain delay-and-sum beamforming output power values computed at multiple locations as the vehicle drives by are used as tomographic measurements. The proposed method is tested with acoustic data collected by driving an electric vehicle with a rooftop-mounted microphone array along a straight road next to a large open field, on which various pre-recorded noise-sources were produced by a loudspeaker at different locations. The accuracy of the tomographic imaging results demonstrates the promise of this approach for rapid, low-cost environmental noise-monitoring.
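
    The tomographic measurements in this record are far-field frequency-domain delay-and-sum beamformer output powers. A minimal sketch of that computation at a single frequency bin follows; the array geometry, frequency, and source position are hypothetical, free-space spherical propagation is assumed, and this is not the authors' implementation.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def das_power(mic_xy, stft_bin, freq, focus_xy):
    """Frequency-domain delay-and-sum output power at one focus point.

    mic_xy:   (M, 2) microphone positions, m
    stft_bin: (M, T) complex STFT values at `freq` for each mic and frame
    freq:     analysis frequency, Hz
    focus_xy: (2,) candidate source location, m
    """
    dist = np.linalg.norm(mic_xy - focus_xy, axis=1)            # (M,)
    steer = np.exp(2j * np.pi * freq * dist / C) / len(mic_xy)  # undo propagation delays
    beam = steer @ stft_bin                                     # (T,) aligned sum
    return float(np.mean(np.abs(beam) ** 2))

# Toy check: a 1 kHz source at (0, 5) m seen by a 4-mic line array on y = 0.
freq = 1000.0
mics = np.array([[-0.3, 0.0], [-0.1, 0.0], [0.1, 0.0], [0.3, 0.0]])
src = np.array([0.0, 5.0])
d = np.linalg.norm(mics - src, axis=1)
frames = np.exp(-2j * np.pi * freq * d / C)[:, None] * np.ones((1, 8))
p_on = das_power(mics, frames, freq, src)                    # focused on the source
p_off = das_power(mics, frames, freq, np.array([4.0, 5.0]))  # focused elsewhere
```

    Evaluating this power on a grid of focus points as the vehicle moves yields the per-position measurements that the sparse tomographic reconstruction combines into a noise-source map.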

  19. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2013-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. To isolate the relevant physics, the scaling of BBSAN peak intensity level at the sideline observer location is examined. The equivalent source within the framework of an acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source combined with accurate calculations of the propagation of sound through the jet shear layer, using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and capture the saturation of BBSAN with increasing stagnation temperature. The sources and vector Green's function have arguments involving the steady Reynolds-Averaged Navier-Stokes solution of the jet. It is proposed that saturation of BBSAN with increasing jet temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.

  20. Microbiological quality of Puget Sound Basin streams and identification of contaminant sources

    USGS Publications Warehouse

    Embrey, S.S.

    2001-01-01

    Fecal coliforms, Escherichia coli, enterococci, and somatic coliphages were detected in samples from 31 sites on streams draining urban and agricultural regions of the Puget Sound Basin Lowlands. Densities of bacteria in 48 and 71 percent of the samples exceeded U.S. Environmental Protection Agency's freshwater recreation criteria for Escherichia coli and enterococci, respectively, and 81 percent exceeded Washington State fecal coliform standards. Male-specific coliphages were detected in samples from 15 sites. Male-specific F+RNA coliphages isolated from samples taken at South Fork Thornton and Longfellow Creeks were serotyped as Group II, implicating humans as potential contaminant sources. These two sites are located in residential, urban areas. F+RNA coliphages in samples from 10 other sites, mostly in agricultural or rural areas, were serotyped as Group I, implicating non-human animals as likely sources. Chemicals common to wastewater, including fecal sterols, were detected in samples from several urban streams, and also implicate humans, at least in part, as possible sources of fecal bacteria and viruses to the streams.

  1. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    PubMed Central

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290

  2. Spherical loudspeaker array for local active control of sound.

    PubMed

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.

  3. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    PubMed

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
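
    The opponent-channel read-out, in which azimuth is encoded by the difference between two broadly tuned channels with opposite hemifield preference, can be illustrated with a toy model. The sigmoid tuning curve, its 30-degree slope constant, and the gain term standing in for overall sound level are assumptions for illustration; the authors subtracted fMRI responses of contralaterally tuned planum temporale regions rather than model channels.

```python
import numpy as np

def channel_response(azimuth_deg, preferred_side, gain=1.0):
    """Broadly tuned hemifield channel (sigmoid of azimuth in degrees).

    preferred_side: +1 (right-tuned) or -1 (left-tuned).
    gain: common multiplicative factor standing in for sound level.
    """
    return gain / (1.0 + np.exp(-preferred_side * azimuth_deg / 30.0))

def decode_azimuth(right, left):
    """Opponent read-out: normalized difference of the two channels."""
    return (right - left) / (right + left)

az = np.linspace(-90.0, 90.0, 181)
soft = decode_azimuth(channel_response(az, +1, gain=1.0),
                      channel_response(az, -1, gain=1.0))
loud = decode_azimuth(channel_response(az, +1, gain=3.0),
                      channel_response(az, -1, gain=3.0))
```

    Because the common gain cancels in the normalized difference, the decoded azimuth in this toy model is monotonic in source angle and exactly level invariant, mirroring the level-robust decoding reported above.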

  4. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.

  5. Virtual targeting in three-dimensional space with sound and light interference

    NASA Astrophysics Data System (ADS)

    Chua, Florence B.; DeMarco, Robert M.; Bergen, Michael T.; Short, Kenneth R.; Servatius, Richard J.

    2006-05-01

    Law enforcement and the military are critically concerned with the targeting and firing accuracy of opponents. Stimuli which impede opponent targeting and firing accuracy can be incorporated into defense systems. An automated virtual firing range was developed to assess human targeting accuracy under conditions of sound and light interference, while avoiding dangers associated with live fire. This system has the ability to quantify sound and light interference effects on targeting and firing accuracy in three dimensions. This was achieved by development of a hardware and software system that presents the subject with a sound or light target, preceded by a sound or light interference. Sony Xplod™ 4-way speakers present sound interference and sound targeting. The Martin® MiniMAC™ Profile operates as a source of light interference, while a red laser light serves as a target. A tracking system was created to monitor toy gun movement and firing in three-dimensional space. Data are collected via the Ascension® Flock of Birds™ tracking system and a custom National Instruments® LabVIEW™ 7.0 program to monitor gun movement and firing. A test protocol examined system parameters. Results confirm that the system enables tracking of virtual shots from a fired simulation gun to determine shot accuracy and location in three dimensions.

  6. Two-Microphone Spatial Filtering Improves Speech Reception for Cochlear-Implant Users in Reverberant Conditions With Multiple Noise Sources

    PubMed Central

    2014-01-01

    This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772
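
    The algorithm filters sounds using phase differences between two closely spaced microphones. A minimal sketch of such a per-bin phase test is given below; the 0.5 rad threshold, the 20 dB attenuation, the endfire geometry, and the single-bin demo are all hypothetical choices, not the published algorithm.

```python
import numpy as np

C = 343.0   # speed of sound, m/s
D = 0.01    # microphone spacing, m (1 cm, as in the study)

def spatial_filter_gains(front, rear, freqs, threshold=0.5):
    """Per-bin gains from the inter-microphone phase difference.

    A source straight ahead reaches the front microphone D/C seconds
    early, so its expected phase lead is 2*pi*f*D/C per bin.  Bins whose
    measured phase deviates by more than `threshold` radians (a
    hypothetical cutoff) are attenuated by 20 dB.
    front, rear: complex spectra (same shape as freqs).
    """
    expected = 2 * np.pi * freqs * D / C
    measured = np.angle(front * np.conj(rear))
    keep = np.abs(measured - expected) <= threshold
    return np.where(keep, 1.0, 0.1)

# Single-bin demo at 6 kHz: a frontal target vs. a broadside interferer.
f = np.array([6000.0])
tau = D / C
front_target = np.array([1.0 + 0j])
rear_target = front_target * np.exp(-2j * np.pi * f * tau)  # delayed at the rear mic
g_target = spatial_filter_gains(front_target, rear_target, f)
g_side = spatial_filter_gains(np.array([1.0 + 0j]), np.array([1.0 + 0j]), f)
```

    A broadside interferer arrives at both microphones simultaneously, so its measured phase difference is near zero and its bins are attenuated, while frontal target bins pass unchanged.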

  7. Slow Temporal Integration Enables Robust Neural Coding and Perception of a Cue to Sound Source Location.

    PubMed

    Brown, Andrew D; Tollin, Daniel J

    2016-09-21

    In mammals, localization of sound sources in azimuth depends on sensitivity to interaural differences in sound timing (ITD) and level (ILD). Paradoxically, while typical ILD-sensitive neurons of the auditory brainstem require millisecond synchrony of excitatory and inhibitory inputs for the encoding of ILDs, human and animal behavioral ILD sensitivity is robust to temporal stimulus degradations (e.g., interaural decorrelation due to reverberation), or, in humans, bilateral clinical device processing. Here we demonstrate that behavioral ILD sensitivity is only modestly degraded with even complete decorrelation of left- and right-ear signals, suggesting the existence of a highly integrative ILD-coding mechanism. Correspondingly, we find that a majority of auditory midbrain neurons in the central nucleus of the inferior colliculus (of chinchilla) effectively encode ILDs despite complete decorrelation of left- and right-ear signals. We show that such responses can be accounted for by relatively long windows of bilateral excitatory-inhibitory interaction, which we explicitly measure using trains of narrowband clicks. Neural and behavioral data are compared with the outputs of a simple model of ILD processing with a single free parameter, the duration of excitatory-inhibitory interaction. Behavioral, neural, and modeling data collectively suggest that ILD sensitivity depends on binaural integration of excitation and inhibition within a ≳3 ms temporal window, significantly longer than observed in lower brainstem neurons. This relatively slow integration potentiates a unique role for the ILD system in spatial hearing that may be of particular importance when informative ITD cues are unavailable. In mammalian hearing, interaural differences in the timing (ITD) and level (ILD) of impinging sounds carry critical information about source location. 
However, natural sounds are often decorrelated between the ears by reverberation and background noise, degrading the fidelity of both ITD and ILD cues. Here we demonstrate that behavioral ILD sensitivity (in humans) and neural ILD sensitivity (in single neurons of the chinchilla auditory midbrain) remain robust under stimulus conditions that render ITD cues undetectable. This result can be explained by "slow" temporal integration arising from several-millisecond-long windows of excitatory-inhibitory interaction evident in midbrain, but not brainstem, neurons. Such integrative coding can account for the preservation of ILD sensitivity despite even extreme temporal degradations in ecological acoustic stimuli. Copyright © 2016 the authors.

  8. Large Eddy Simulation of Sound Generation by Turbulent Reacting and Nonreacting Shear Flows

    NASA Astrophysics Data System (ADS)

    Najafi-Yazdi, Alireza

    The objective of the present study was to investigate the mechanisms of sound generation by subsonic jets. Large eddy simulations were performed along with bandpass filtering of the flow and sound in order to gain further insight into the role of coherent structures in subsonic jet noise generation. A sixth-order compact scheme was used for spatial discretization of the fully compressible Navier-Stokes equations. Time integration was performed through the use of the standard fourth-order, explicit Runge-Kutta scheme. An implicit low dispersion, low dissipation Runge-Kutta (ILDDRK) method was developed and implemented for simulations involving sources of stiffness such as flows near solid boundaries, or combustion. A surface integral acoustic analogy formulation, called Formulation 1C, was developed for farfield sound pressure calculations. Formulation 1C was derived based on the convective wave equation in order to take into account the presence of a mean flow. The formulation was designed to be easy to implement as a numerical post-processing tool for CFD codes. Sound radiation from an unheated, Mach 0.9 jet at Reynolds number 400,000 was considered. The effect of mesh size on the accuracy of the nearfield flow and farfield sound results was studied. It was observed that insufficient grid resolution in the shear layer results in unphysical laminar vortex pairing, and increased sound pressure levels in the farfield. Careful examination of the bandpass filtered pressure field suggested that there are two mechanisms of sound radiation in unheated subsonic jets that can occur in all scales of turbulence. The first mechanism is the stretching and the distortion of coherent vortical structures, especially close to the termination of the potential core. As eddies are bent or stretched, a portion of their kinetic energy is radiated. This mechanism is quadrupolar in nature, and is responsible for strong sound radiation at aft angles. 
The second sound generation mechanism appears to be associated with the transverse vibration of the shear-layer interface within the ambient quiescent flow, and has dipolar characteristics. This mechanism is believed to be responsible for sound radiation along the sideline directions. Jet noise suppression through the use of microjets was studied. The microjet injection induced secondary instabilities in the shear layer which triggered the transition to turbulence, and suppressed laminar vortex pairing. This in turn resulted in a reduction of OASPL at almost all observer locations. In all cases, the bandpass filtering of the nearfield flow and the associated sound provides revealing details of the sound radiation process. The results suggest that circumferential modes are significant and need to be included in future wavepacket models for jet noise prediction. Numerical simulations of sound radiation from nonpremixed flames were also performed. The simulations featured the solution of the fully compressible Navier-Stokes equations. Therefore, sound generation and radiation were directly captured in the simulations. A thickened flamelet model was proposed for nonpremixed flames. The model yields artificially thickened flames which can be better resolved on the computational grid, while retaining the physically correct values of the total heat released into the flow. Combustion noise has monopolar characteristics for low frequencies. For high frequencies, the sound field is no longer omni-directional. Major sources of sound appear to be located in the jet shear layer within one potential core length from the jet nozzle.

  9. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning microphones orthogonal to the sound source and by capturing the possible sound sources by vision. However, such an active head movement inevitably creates motor noises. The system adaptively cancels motor noises using motor control signals and the cover acoustics. The experimental results demonstrate that active audition by integration of audition, vision, and motor control attains sound source tracking in a variety of conditions.

  10. The influence of acoustic emissions for underwater data transmission on the behaviour of harbour porpoises (Phocoena phocoena) in a floating pen.

    PubMed

    Kastelein, R A; Verboom, W C; Muijsers, M; Jennings, N V; van der Heul, S

    2005-05-01

    To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network is currently under development: Acoustic Communication network for Monitoring of underwater Environment in coastal areas (ACME). Marine mammals might be affected by ACME sounds since they use sounds of similar frequencies (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour porpoise. Therefore, as part of an environmental impact assessment program, two captive harbour porpoises were subjected to four sounds, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' positions and respiration rates during a test period with those during a baseline period. Each of the four sounds could be made a deterrent by increasing the amplitude of the sound. The porpoises reacted by swimming away from the sounds and by slightly, but significantly, increasing their respiration rate. From the sound pressure level distribution in the pen, and the distribution of the animals during test sessions, discomfort sound level thresholds were determined for each sound. In combination with information on sound propagation in the areas where the communication system may be deployed, the extent of the 'discomfort zone' can be estimated for several source levels (SLs). The discomfort zone is defined as the area around a sound source that harbour porpoises are expected to avoid. Based on these results, SLs can be selected that have an acceptable effect on harbour porpoises in particular areas. 
The discomfort zone of a communication sound depends on the selected sound, the selected SL, and the propagation characteristics of the area in which the sound system is operational. In shallow, winding coastal water courses, with sandbanks, etc., the type of habitat in which the ACME sounds will be produced, propagation loss cannot be accurately estimated by using a simple propagation model, but should be measured on site. The SL of the communication system should be adapted to each area (taking into account bounding conditions created by narrow channels, sound propagation variability due to environmental factors, and the importance of an area to the affected species). The discomfort zone should not prevent harbour porpoises from spending sufficient time in ecologically important areas (for instance feeding areas), or routes towards these areas.
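The final step the abstract describes, estimating the extent of the discomfort zone from a source level (SL) and a propagation model, can be sketched numerically. This is a minimal illustration only: it assumes simple geometric spreading plus an optional absorption term, whereas the authors stress that shallow-water propagation loss must be measured on site, and the SL and threshold values below are placeholders, not the study's measurements.

```python
import math

def discomfort_radius(source_level_db, discomfort_threshold_db,
                      spreading_coeff=20.0, absorption_db_per_km=0.0):
    """Solve RL(r) = SL - N*log10(r) - a*r/1000 = threshold for r (metres)
    by bisection. Illustrative only: real shallow-water propagation loss
    should be measured on site, as the abstract notes."""
    def received_level(r):
        return (source_level_db - spreading_coeff * math.log10(r)
                - absorption_db_per_km * r / 1000.0)
    lo, hi = 1.0, 1e6
    if received_level(hi) > discomfort_threshold_db:
        return hi  # threshold never reached within the search range
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if received_level(mid) > discomfort_threshold_db:
            lo = mid  # still above threshold: discomfort zone extends further
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical numbers: SL 160 dB re 1 uPa @ 1 m, discomfort threshold 100 dB.
r = discomfort_radius(160.0, 100.0)
```

With pure spherical spreading the zone boundary falls where 20·log10(r) equals the SL-to-threshold difference, so these placeholder values give r of about 1 km.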

  11. Ray-based acoustic localization of cavitation in a highly reverberant environment.

    PubMed

    Chang, Natasha A; Dowling, David R

    2009-05-01

    Acoustic detection and localization of cavitation have inherent advantages over optical techniques because cavitation bubbles are natural sound sources, and acoustic transduction of cavitation sounds does not require optical access to the region of cavitating flow. In particular, near cavitation inception, cavitation bubbles may be visually small and occur infrequently, but may still emit audible sound pulses. In this investigation, direct-path acoustic recordings of cavitation events are made with 16 hydrophones mounted on the periphery of a water tunnel test section containing a low-cavitation-event-rate vortical flow. These recordings are used to localize the events in three dimensions via cross correlations to obtain arrival time differences. Here, bubble localization is hindered by reverberation, background noise, and the fact that both the pulse emission time and waveform are unknown. These hindrances are partially mitigated by a signal-processing scheme that incorporates straight-ray acoustic propagation and Monte-Carlo techniques for compensating ray-path, sound-speed, and hydrophone-location uncertainties. The acoustic localization results are compared to simultaneous optical localization results from dual-camera high-speed digital-video recordings. For 53 bubbles and a peak signal-to-noise-ratio frequency of 6.7 kHz, the root-mean-square spatial difference between optical and acoustic bubble location results was 1.94 cm. Parametric dependences in acoustic localization performance are also presented.
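The core measurement step, cross-correlating hydrophone pairs to obtain arrival-time differences, can be sketched for a single pair on a synthetic pulse. This is a generic illustration (the 6.7 kHz centre frequency echoes the abstract, but the sampling rate and delay are assumptions), not the paper's Monte-Carlo compensation scheme.

```python
import numpy as np

def tdoa_crosscorr(sig_a, sig_b, fs):
    """Arrival-time difference of sig_b relative to sig_a (seconds),
    taken as the peak lag of the full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

fs = 100_000                       # assumed 100 kHz sampling rate
t = np.arange(0, 0.01, 1 / fs)
# Synthetic cavitation-like pulse: Gaussian-windowed 6.7 kHz tone burst.
pulse = np.exp(-((t - 0.002) / 1e-4) ** 2) * np.sin(2 * np.pi * 6700 * t)
delay_samples = 25                 # assumed true inter-hydrophone delay
sig_a = pulse
sig_b = np.roll(pulse, delay_samples)
tau = tdoa_crosscorr(sig_a, sig_b, fs)
```

With arrival-time differences from several hydrophone pairs, the event position follows from solving the resulting hyperbolic equations, which is where the paper's ray-path and sound-speed uncertainty compensation enters.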

  12. Spatial release from masking based on binaural processing for up to six maskers

    PubMed Central

    Yost, William A.

    2017-01-01

    Spatial Release from Masking (SRM) was measured for identification of a female target word spoken in the presence of male masker words. Target words from a single loudspeaker located at midline were presented when two, four, or six masker words were presented either from the same source as the target or from spatially separated masker sources. All masker words were presented from loudspeakers located symmetrically around the centered target source in the front azimuth hemifield. Three masking conditions were employed: speech-in-speech masking (involving both informational and energetic masking), speech-in-noise masking (involving energetic masking), and filtered speech-in-filtered speech masking (involving informational masking). Psychophysical results were summarized as three-point psychometric functions relating proportion of correct word identification to target-to-masker ratio (in decibels) for both the co-located and spatially separated target and masker sources cases. SRM was then calculated by comparing the slopes and intercepts of these functions. SRM decreased as the number of symmetrically placed masker sources increased from two to six. This decrease was independent of the type of masking, with almost no SRM measured for six masker sources. These results suggest that when SRM is dependent primarily on binaural processing, SRM is effectively limited to fewer than six sound sources. PMID:28372135
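The analysis step described above, deriving SRM from the slopes and intercepts of three-point psychometric functions, can be illustrated with a linear fit: the threshold is the target-to-masker ratio giving 50% correct, and SRM is the threshold difference between co-located and separated configurations. The data points below are invented for illustration and are not the study's values.

```python
import numpy as np

def threshold_at(p_target, tmr_db, p_correct):
    """Fit a linear psychometric function p = a*TMR + b and return the
    TMR (dB) at which the target proportion correct is reached."""
    a, b = np.polyfit(tmr_db, p_correct, 1)   # slope, intercept
    return (p_target - b) / a

# Hypothetical three-point psychometric data (proportion correct vs. TMR).
tmr = np.array([-12.0, -6.0, 0.0])
colocated = np.array([0.20, 0.50, 0.80])   # 50% point at -6 dB
separated = np.array([0.50, 0.80, 0.95])   # better performance when separated

# Positive SRM: spatial separation lowers the threshold.
srm_db = threshold_at(0.5, tmr, colocated) - threshold_at(0.5, tmr, separated)
```

For these toy numbers the separated threshold is about -12.7 dB against -6 dB co-located, i.e. roughly 6.7 dB of release.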

  13. Acoustic deterrence of bighead carp (Hypophthalmichthys nobilis) to a broadband sound stimulus

    USGS Publications Warehouse

    Vetter, Brooke J.; Murchy, Kelsie; Cupp, Aaron R.; Amberg, Jon J.; Gaikowski, Mark P.; Mensinger, Allen F.

    2017-01-01

    Recent studies have shown the potential of acoustic deterrents against invasive silver carp (Hypophthalmichthys molitrix). This study examined the phonotaxic response of the bighead carp (H. nobilis) to pure tones (500–2000 Hz) and playbacks of broadband sound from an underwater recording of a 100 hp outboard motor (0.06–10 kHz) in an outdoor concrete pond (10 × 5 × 1.2 m) at the U.S. Geological Survey Upper Midwest Environmental Science Center in La Crosse, WI. The number of consecutive times the fish reacted to sound from alternating locations at each end of the pond was assessed. Bighead carp were relatively indifferent to the pure tones with median consecutive responses ranging from 0 to 2 reactions away from the sound source. However, fish consistently exhibited significantly (P < 0.001) greater negative phonotaxis to the broadband sound (outboard motor recording) with an overall median response of 20 consecutive reactions during the 10 min trials. In over 50% of broadband sound tests, carp were still reacting to the stimulus at the end of the trial, implying that fish were not habituating to the sound. This study suggests that broadband sound may be an effective deterrent to bighead carp and provides a basis for conducting studies with wild fish.

  14. Method for chemically analyzing a solution by acoustic means

    DOEpatents

    Beller, Laurence S.

    1997-01-01

    A method and apparatus for determining a type of solution and the concentration of that solution by acoustic means. Generally stated, the method consists of: immersing a sound focusing transducer within a first liquid filled container; locating a separately contained specimen solution at a sound focal point within the first container; locating a sound probe adjacent to the specimen; generating a variable intensity sound signal from the transducer; measuring fundamental and multiple harmonic sound signal amplitudes; and then comparing a plot of a specimen sound response with a known solution sound response, thereby determining the solution type and concentration.
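The step of measuring fundamental and multiple harmonic sound signal amplitudes can be sketched with a DFT on a synthetic probe signal. The drive frequency, record length, and harmonic amplitudes below are hypothetical, not values from the patent's apparatus.

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f0, n_harmonics=3):
    """Amplitudes of the fundamental f0 and its first harmonics,
    read off the nearest DFT bins of a real-valued record."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n   # one-sided amplitude
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return [spectrum[np.argmin(np.abs(freqs - k * f0))]
            for k in range(1, n_harmonics + 1)]

fs, f0 = 50_000, 1_000            # hypothetical sampling and drive frequency
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s record -> f0 lands on an exact bin
# Synthetic nonlinear response: fundamental plus a second harmonic.
probe = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
amps = harmonic_amplitudes(probe, fs, f0)
```

Comparing such fundamental/harmonic amplitude plots for a specimen against curves for known solutions is the comparison step the patent describes.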

  15. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze the sound source in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
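IMACS itself is not reproduced here, but the underlying array idea, steering a microphone array over candidate directions and picking the power maximum, can be sketched with a conventional frequency-domain delay-and-sum beamformer on a simulated far-field source. The array geometry, sampling rate, and source angle are all assumptions, and no flow-path (Amiet) correction is applied in this sketch.

```python
import numpy as np

def delay_and_sum_doa(signals, mic_x, fs, c=343.0):
    """Steer a linear array over candidate angles and return the angle
    (degrees from broadside) with maximum delay-and-sum output power."""
    angles_deg = np.arange(-90, 91, 1)
    n = signals.shape[1]
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    powers = []
    for ang in angles_deg:
        delays = mic_x * np.sin(np.radians(ang)) / c       # far-field delays
        steer = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        powers.append(np.sum(np.abs(np.sum(spectra * steer, axis=0)) ** 2))
    return angles_deg[int(np.argmax(powers))]

rng = np.random.default_rng(0)
fs, n, c = 16_000, 1024, 343.0
mic_x = np.arange(8) * 0.05            # hypothetical 8-mic array, 5 cm pitch
src = rng.standard_normal(n)           # broadband source signal
S = np.fft.rfft(src)
freqs = np.fft.rfftfreq(n, 1 / fs)
true_angle = 20.0
taus = mic_x * np.sin(np.radians(true_angle)) / c
# Apply the per-microphone fractional delays exactly in the frequency domain.
signals = np.vstack([np.fft.irfft(S * np.exp(-2j * np.pi * freqs * tau), n)
                     for tau in taus])
est = delay_and_sum_doa(signals, mic_x, fs)
```

Methods such as MACS/IMACS replace this simple power map with an iterative, constrained deconvolution, which is what gives them higher resolution than plain delay-and-sum.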

  16. J-85 jet engine noise measured in the ONERA S1 wind tunnel and extrapolated to far field

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Julienne, Alain; Atencio, Adolph, Jr.

    1991-01-01

    Noise from a J-85 turbojet with a conical, convergent nozzle was measured in simulated flight in the ONERA S1 Wind Tunnel. Data are presented for several flight speeds up to 130 m/sec and for radiation angles of 40 to 160 degrees relative to the upstream direction. The jet was operated with subsonic and sonic exhaust speeds. A moving microphone on a 2 m sideline was used to survey the radiated sound field in the acoustically treated, closed test section. The data were extrapolated to a 122 m sideline by means of a multiple-sideline source-location method, which was used to identify the acoustic source regions, directivity patterns, and near field effects. The source-location method is described along with its advantages and disadvantages. Results indicate that the effects of simulated flight on J-85 noise are significant. At the maximum forward speed of 130 m/sec, the peak overall sound levels in the aft quadrant were attenuated approximately 10 dB relative to sound levels of the engine operated statically. As expected, the simulated flight and static data tended to merge in the forward quadrant as the radiation angle approached 40 degrees. There is evidence that internal engine or shock noise was important in the forward quadrant. The data are compared with published predictions for flight effects on pure jet noise and internal engine noise. A new empirical prediction is presented that relates the variation of internally generated engine noise or broadband shock noise to forward speed. Measured near field noise extrapolated to far field agrees reasonably well with data from similar engines tested statically outdoors, in flyover, in a wind tunnel, and on the Bertin Aerotrain. Anomalies in the results for the forward quadrant and for angles above 140 degrees are discussed. The multiple-sideline method proved to be cumbersome in this application, and it did not resolve all of the uncertainties associated with measurements of jet noise close to the jet. The simulation was complicated by wind-tunnel background noise and the propagation of low frequency sound around the circuit.
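For scale, the simplest distance correction between the 2 m measurement sideline and the 122 m reporting sideline is spherical spreading. The study's actual multiple-sideline source-location method also accounts for distributed source positions and near-field effects, and the 130 dB input level here is a hypothetical example, not a measured value.

```python
import math

def extrapolate_spl(spl_near_db, r_near_m, r_far_m):
    """Distance-correct a sound pressure level assuming spherical
    spreading (6 dB per doubling of distance). Illustrative only:
    the study used a multiple-sideline source-location method."""
    return spl_near_db - 20.0 * math.log10(r_far_m / r_near_m)

# 2 m sideline -> 122 m sideline, hypothetical 130 dB near-field level.
far_level = extrapolate_spl(130.0, 2.0, 122.0)
```

The 61:1 distance ratio alone accounts for about 35.7 dB of level reduction, which shows why near-field effects and source-region extent matter so much at the 2 m sideline.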

  17. Active room compensation for sound reinforcement using sound field separation techniques.

    PubMed

    Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena

    2018-03-01

    This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
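The separation of the field into incident and reflected components can be illustrated in one dimension: given complex pressures at two microphone positions, the amplitudes of the +x and -x travelling plane waves follow from a 2x2 linear solve. This is a minimal single-frequency sketch, not the paper's array-based decomposition techniques; the frequency, geometry, and reflection coefficient are assumed.

```python
import numpy as np

def separate_plane_waves(p1, p2, x1, x2, k):
    """Split a 1-D harmonic field p(x) = A*exp(-jkx) + B*exp(+jkx) into
    incident (A) and reflected (B) amplitudes from two pressure probes."""
    m = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                  [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
    a, b = np.linalg.solve(m, np.array([p1, p2]))
    return a, b

# Synthetic field: unit incident wave, assumed wall reflection of 0.6.
k = 2 * np.pi * 50.0 / 343.0          # wavenumber at 50 Hz
x1, x2 = 0.0, 0.3                     # two microphones near the wall (m)
A_true, B_true = 1.0, 0.6
p = lambda x: A_true * np.exp(-1j * k * x) + B_true * np.exp(1j * k * x)
A_est, B_est = separate_plane_waves(p(x1), p(x2), x1, x2, k)
```

Once the reflected component B is known, the secondary-source signals can be optimized to cancel it, which is the feed-forward control idea the abstract describes.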

  18. Implementing and testing a panel-based method for modeling acoustic scattering from CFD input

    NASA Astrophysics Data System (ADS)

    Swift, S. Hales

    Exposure of sailors to high levels of noise in the aircraft carrier deck environment is a problem that has serious human and economic consequences. A variety of approaches to quieting exhaust jets from high-performance aircraft are undergoing development. However, testing of noise abatement solutions at full scale may be prohibitively costly when many possible nozzle treatments are under consideration. A relatively efficient and accurate means of predicting the noise levels resulting from engine-quieting technologies at personnel locations is needed. This is complicated by the need to model both the direct and the scattered sound field in order to determine the resultant spectrum and levels. While the direct sound field may be obtained using CFD plus surface integral methods such as the Ffowcs Williams-Hawkings method, the scattered sound field is complicated by its dependence on the geometry of the scattering surface: the aircraft carrier deck, aircraft control surfaces, and other nearby structures. In this work, a time-domain boundary element method, or TD-BEM (sometimes described in terms of source panels), is proposed and developed that exploits the substantially planar components of the aircraft carrier deck environment and uses pressure gradients as its input. This method is applied to and compared with analytical results for planar surfaces, corners, and spherical surfaces using an analytic point source as input. The method can also accept input from CFD data on an acoustic data surface by using the G1A pressure gradient formulation to obtain pressure gradients on the surface from the flow variables contained on the acoustic data surface. The method is also applied to a planar scattering surface characteristic of an aircraft carrier flight deck with an acoustic data surface from a supersonic jet large eddy simulation, or LES, as input to the scattering model.
In this way, the process for modeling the complete sound field (assuming the availability of an acoustic data surface from a time-realized numerical simulation of the jet flow field) is outlined for a realistic group of source location, scattering surface location and observer locations. The method was able to successfully model planar cases, corners and spheres with a level of error that is low enough for some engineering purposes. Significant benefits were realized for fully planar surfaces including high parallelizability and avoidance of interaction between portions of the paneled boundary. When the jet large eddy simulation case was considered the method was able to capture a substantial portion of the spectrum including the peak frequency region and a majority of the spectral energy with good fidelity.

  19. Auditory and visual localization accuracy in young children and adults.

    PubMed

    Martin, Karen; Johnstone, Patti; Hedrick, Mark

    2015-06-01

    This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affects sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults. A mixed experimental design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound on localization accuracy in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year olds. Adults performed better on the sound localization task when the light localization task occurred first. 
Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Control of Toxic Chemicals in Puget Sound, Phase 3: Study of Atmospheric Deposition of Air Toxics to the Surface of Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandenberger, Jill M.; Louchouarn, Patrick; Kuo, Li-Jung

    2010-07-05

    The results of the Phase 1 Toxics Loading study suggested that runoff from the land surface and atmospheric deposition directly to marine waters have resulted in considerable loads of contaminants to Puget Sound (Hart Crowser et al. 2007). The limited data available for atmospheric deposition fluxes throughout Puget Sound was recognized as a significant data gap. Therefore, this study provided more recent or first reported atmospheric deposition fluxes of PAHs, PBDEs, and select trace elements for Puget Sound. Samples representing bulk atmospheric deposition were collected during 2008 and 2009 at seven stations around Puget Sound spanning from Padilla Bay south to Nisqually River including Hood Canal and the Straits of Juan de Fuca. Revised annual loading estimates for atmospheric deposition to the waters of Puget Sound were calculated for each of the toxics and demonstrated an overall decrease in the atmospheric loading estimates except for polybrominated diphenyl ethers (PBDEs) and total mercury (THg). The median atmospheric deposition flux of total PBDE (7.0 ng/m2/d) was higher than that of the Hart Crowser (2007) Phase 1 estimate (2.0 ng/m2/d). The THg was not significantly different from the original estimates. The median atmospheric deposition flux for pyrogenic PAHs (34.2 ng/m2/d; without TCB) shows a relatively narrow range across all stations (interquartile range: 21.2-61.1 ng/m2/d) and shows no influence of season. The highest median fluxes for all parameters were measured at the industrial location in Tacoma and the lowest were recorded at the rural sites in Hood Canal and Sequim Bay. Finally, a semi-quantitative apportionment study permitted a first-order characterization of source inputs to the atmosphere of the Puget Sound. Both biomarker ratios and a principal component analysis confirmed regional data from the Puget Sound and Straits of Georgia region and pointed to the predominance of biomass and fossil fuel (mostly liquid petroleum products such as gasoline and/or diesel) combustion as source inputs of combustion by-products to the atmosphere of the region and subsequently to the waters of Puget Sound.
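Converting a daily areal flux into an annual loading estimate, the calculation behind the revised loading estimates mentioned above, is a units exercise. The sketch below uses the reported median total-PBDE flux of 7.0 ng/m2/d but a hypothetical receiving-water area; the actual study used the real surface areas of the Puget Sound basins.

```python
def annual_load_kg(flux_ng_m2_d, area_km2):
    """Annual deposition load (kg/yr) from a daily areal flux (ng/m2/d)."""
    ng_per_year = flux_ng_m2_d * area_km2 * 1e6 * 365.0   # km2 -> m2, d -> yr
    return ng_per_year * 1e-12                            # ng -> kg

# Median total-PBDE flux from the study over a hypothetical 2500 km2 area.
load = annual_load_kg(7.0, 2500.0)
```

For these inputs the load works out to roughly 6.4 kg/yr, which illustrates how a seemingly small areal flux integrates to a tangible annual mass over a large water surface.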

  1. Developmental Changes in Locating Voice and Sound in Space

    PubMed Central

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  2. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech for a hands-free speech interface with high quality. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be identified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
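The CSP (cross-power spectrum phase) coefficient underlying the DOA step is the whitened cross-correlation often called GCC-PHAT. A minimal two-microphone sketch on synthetic noise follows; the sampling rate and delay are assumptions, and the multi-source coefficient-addition and GMM identification stages of the paper are not reproduced.

```python
import numpy as np

def csp_delay(x, y, fs):
    """CSP / GCC-PHAT: whiten the cross-power spectrum, inverse-transform,
    and take the peak lag as the delay of y relative to x (seconds)."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    cross = np.conj(X) * Y
    cross /= np.maximum(np.abs(cross), 1e-12)      # phase transform (PHAT)
    cc = np.fft.irfft(cross, n)
    max_lag = len(x) - 1
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))  # lags -max..+max
    return (np.argmax(cc) - max_lag) / fs

rng = np.random.default_rng(1)
fs, delay = 16_000, 12
s = rng.standard_normal(2048)
x = s
y = np.concatenate((np.zeros(delay), s[:-delay]))   # y lags x by 12 samples
tau = csp_delay(x, y, fs)
```

The phase transform discards magnitude information, which is what makes this estimator comparatively robust in the noisy reverberant conditions the paper targets; the delay maps to a DOA through the microphone spacing and the speed of sound.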

  3. Application of acoustic radiosity methods to noise propagation within buildings

    NASA Astrophysics Data System (ADS)

    Muehleisen, Ralph T.; Beamer, C. Walter

    2005-09-01

    The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.
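The energy-exchange idea behind acoustic radiosity can be sketched as a patch system B = E + (1 - alpha) F B solved by iteration, where B is patch radiosity, E the direct source injection, F the patch-to-patch form factors, and alpha the absorption coefficient. The three-patch form-factor matrix and absorption value below are toy assumptions, not from the paper.

```python
import numpy as np

def solve_radiosity(emission, form_factors, absorption):
    """Jacobi iteration of B = E + (1 - alpha) * F @ B to convergence."""
    b = emission.copy()
    for _ in range(500):
        b_next = emission + (1.0 - absorption) * form_factors @ b
        if np.max(np.abs(b_next - b)) < 1e-12:
            return b_next
        b = b_next
    return b

# Toy 3-patch enclosure; each row of F sums to 1 (all energy lands somewhere).
F = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
E = np.array([1.0, 0.0, 0.0])   # only patch 0 receives direct source energy
alpha = 0.3                     # assumed uniform absorption coefficient
B = solve_radiosity(E, F, alpha)
```

The iteration converges because the reflection operator (1 - alpha)F has spectral radius below one whenever alpha > 0; the converged B gives the diffusely reflected energy leaving each patch, from which interior levels are assembled.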

  4. Acoustic ground impedance meter

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J.

    1981-01-01

    A compact, portable instrument was developed to measure the acoustic impedance of the ground, or other surfaces, by direct pressure-volume velocity measurement. A Helmholtz resonator, constructed of heavy-walled stainless steel but open at the bottom, is positioned over the surface having the unknown impedance. The sound source, a cam-driven piston of known stroke and thus known volume velocity, is located in the neck of the resonator. The cam speed is variable up to a maximum of 3600 rpm. The sound pressure at the test surface is measured by means of a microphone flush-mounted in the wall of the chamber. An optical monitor of the piston displacement permits measurement of the phase angle between the volume velocity and the sound pressure, from which the real and imaginary parts of the impedance can be evaluated. Measurements using a 5-lobed cam can be made up to 300 Hz. Detailed design criteria and results on a soil sample are presented.
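The final evaluation step, forming the complex impedance from the measured amplitudes and the phase angle between pressure and volume velocity, is a one-line calculation. The readings below are hypothetical examples, not values from the instrument.

```python
import cmath, math

def acoustic_impedance(p_amp, u_amp, phase_rad):
    """Complex acoustic impedance Z = p/U from pressure amplitude,
    volume-velocity amplitude, and the phase of pressure relative
    to volume velocity; real part = resistance, imaginary = reactance."""
    return (p_amp / u_amp) * cmath.exp(1j * phase_rad)

# Hypothetical readings: 2.0 Pa, 1e-3 m^3/s, pressure leading by 30 degrees.
Z = acoustic_impedance(2.0, 1e-3, math.radians(30.0))
resistance, reactance = Z.real, Z.imag
```

A positive reactance (pressure leading volume velocity) indicates mass-like behaviour of the surface at that frequency, while the resistance quantifies absorption.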

  5. Experiments on the applicability of MAE techniques for predicting sound diffraction by irregular terrains. [Matched Asymptotic Expansion

    NASA Technical Reports Server (NTRS)

    Berthelot, Yves H.; Pierce, Allan D.; Kearns, James A.

    1987-01-01

    The sound field diffracted by a single smooth hill of finite impedance is studied both analytically, within the context of the theory of Matched Asymptotic Expansions (MAE), and experimentally, under laboratory scale modeling conditions. Special attention is given to the sound field on the diffracting surface and throughout the transition region between the illuminated and the shadow zones. The MAE theory yields integral equations that are amenable to numerical computations. Experimental results are obtained with a spark source producing a pulse of 42 microsec duration and about 130 Pa at 1 m. The insertion loss of the hill is inferred from measurements of the acoustic signals at two locations in the field, with subsequent Fourier analysis on an IBM PC/AT. In general, experimental results support the predictions of the MAE theory, and provide a basis for the analysis of more complicated geometries.

  6. Analysis of temporal decay of diffuse broadband sound fields in enclosures by decomposition in powers of an absorption parameter

    NASA Astrophysics Data System (ADS)

    Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben

    2005-09-01

    An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.

  7. Method for chemically analyzing a solution by acoustic means

    DOEpatents

    Beller, L.S.

    1997-04-22

    A method and apparatus are disclosed for determining a type of solution and the concentration of that solution by acoustic means. Generally stated, the method consists of: immersing a sound focusing transducer within a first liquid filled container; locating a separately contained specimen solution at a sound focal point within the first container; locating a sound probe adjacent to the specimen; generating a variable intensity sound signal from the transducer; measuring fundamental and multiple harmonic sound signal amplitudes; and then comparing a plot of a specimen sound response with a known solution sound response, thereby determining the solution type and concentration. 10 figs.

  8. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization.

    PubMed

    Bednar, Adam; Boland, Francis M; Lalor, Edmund C

    2017-03-01

    The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  10. An experimental study of transmission, reflection and scattering of sound in a free jet flight simulation facility and comparison with theory

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.; Tanna, H. K.; Tester, B. J.

    1981-01-01

    When a free jet (or open jet) is used as a wind tunnel to simulate the effects of flight on model noise sources, it is necessary to calibrate out the effects of the free jet shear layer on the transmitted sound, since the shear layer is absent in the real flight case. In this paper, a theoretical calibration procedure for this purpose is first summarized; following this, the results of an experimental program, designed to test the validity of the various components of the calibration procedure, are described. The experiments are conducted by using a point sound source located at various axial positions within the free jet potential core. By using broadband excitation and cross-correlation methods, the angle changes associated with ray paths across the shear layer are first established. Measurements are then made simultaneously inside and outside the free jet along the proper ray paths to determine the amplitude changes across the shear layer. It is shown that both the angle and amplitude changes can be predicted accurately by theory. It is also found that internal reflection at the shear layer is significant only for large ray angles in the forward quadrant where total internal reflection occurs. Finally, the effects of sound absorption and scattering by the shear layer turbulence are also examined experimentally.

  11. Statistical signal processing technique for identification of different infected sites of the diseased lungs.

    PubMed

    Abbas, Ali

    2012-06-01

Accurate diagnosis of lung disease depends on understanding the sounds emanating from the lungs and their locations. Lung sounds are significant because they supply precise and important information on the health of the respiratory system. In addition, correct interpretation of breath sounds depends on a systematic approach to auscultation; it also requires the ability to describe the location of an abnormal finding in relation to bony structures and anatomic landmark lines. The lungs consist of a number of lobes; each lobe is further subdivided into smaller segments, which are attached to each other. Knowledge of the position of the lung segments is useful and important during auscultation and diagnosis of lung diseases. Medical doctors usually describe the location of an infection with a segmental position reference. Breath sounds are auscultated over the anterior, lateral, and posterior chest wall surfaces, and adventitious sounds from different locations can be detected. It is common to seek confirmation of a detected sound and its location using invasive and potentially harmful imaging techniques such as x-rays. To overcome this limitation, and to provide fast, reliable, accurate, and inexpensive diagnosis, this research develops a technique for identifying the location of infection through a computerized auscultation system.

  12. Numerical simulation of the SOFIA flow field

    NASA Technical Reports Server (NTRS)

    Klotz, Stephen P.

    1995-01-01

This report provides a concise summary of the contribution of computational fluid dynamics (CFD) to the SOFIA (Stratospheric Observatory for Infrared Astronomy) project at NASA Ames and presents results obtained from closed- and open-cavity SOFIA simulations. The aircraft platform is a Boeing 747SP, and these are the first SOFIA simulations run with the aircraft empennage included in the geometry database. In the open-cavity runs the telescope is mounted behind the wings. Results suggest that the cavity markedly influences the mean pressure distribution on empennage surfaces and that 110-140 decibel (dB) sound pressure levels are typical in the cavity and on the horizontal and vertical stabilizers. A strong source of sound was found to exist on the rim of the open telescope cavity. The presence of this source suggests that additional design work needs to be performed in order to minimize the sound emanating from that location. A fluid dynamic analysis of the engine plumes is also contained in this report. The analysis was part of an effort to quantify the degradation of telescope performance resulting from the proximity of the port engine exhaust plumes to the open telescope bay.

  13. Application of the aeroacoustic analogy to a shrouded, subsonic, radial fan

    NASA Astrophysics Data System (ADS)

    Buccieri, Bryan M.; Richards, Christopher M.

    2016-12-01

A study was conducted to investigate the predictive capability of computational aeroacoustics with respect to a shrouded, subsonic, radial fan. A three-dimensional unsteady fluid dynamics simulation was conducted to produce aerodynamic data used as the acoustic source for an aeroacoustics simulation. Two acoustic models were developed: one modeling the forces on the rotating fan blades as a set of rotating dipoles located at the center of mass of each fan blade, and one modeling the forces on the stationary fan shroud as a field of distributed stationary dipoles. The predicted acoustic response was compared to experimental data measured at two operating speeds using three different outlet restrictions. The blade source model predicted overall far-field sound power levels within 5 dB averaged over the six different operating conditions, while the shroud model predicted overall far-field sound power levels within 7 dB averaged over the same conditions. Doubling the density of the computational fluid mesh and using a scale-adaptive simulation turbulence model increased broadband noise accuracy. However, computation time doubled and the accuracy of the overall sound power level prediction improved by only 1 dB.

  14. Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).

    PubMed

    Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M

    2013-07-01

    The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide baseline five hydrophone array and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels compared to other group compositions.
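The relation behind these measurements is the passive sonar equation: source level equals received level plus transmission loss. A minimal sketch, assuming textbook spherical spreading plus linear absorption rather than the site-measured transmission loss used in the paper:

```python
import math

def source_level_db(received_level_db, distance_m, alpha_db_per_km=0.0):
    """Estimate source level (dB re 1 uPa @ 1 m) from a received level,
    assuming spherical spreading (TL = 20*log10(r)) plus linear absorption.
    The paper measured transmission loss at the site; this is a stand-in model."""
    tl = 20.0 * math.log10(distance_m) + alpha_db_per_km * distance_m / 1000.0
    return received_level_db + tl

# A received level of 98 dB at 1 km under spherical spreading alone implies
# SL = 98 + 20*log10(1000) = 158 dB re 1 uPa @ 1 m, the median reported above.
print(source_level_db(98.0, 1000.0))  # → 158.0
```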

  15. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. Human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an extended Kalman filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
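The recursive fusion of binaural cues with head-motion data can be illustrated with a one-state extended Kalman filter. This is a minimal sketch of the idea, not the study's implementation: the cue model z = sin(azimuth − head angle), the noise levels, and the rotation schedule are all illustrative assumptions.

```python
import math
import random

def ekf_azimuth(measurements, head_angles, r=0.05, q=1e-6):
    """One-state extended Kalman filter: estimate a static source's
    world-frame azimuth (rad) from a head-centric binaural cue
    z = sin(azimuth - head_angle) collected while the head rotates."""
    theta = 0.0              # azimuth estimate, rad
    p = math.pi ** 2         # estimate variance (initially very uncertain)
    for z, phi in zip(measurements, head_angles):
        p += q                            # predict: the source is static
        h = math.sin(theta - phi)         # predicted cue
        H = math.cos(theta - phi)         # Jacobian dh/dtheta
        k = p * H / (H * H * p + r)       # Kalman gain
        theta += k * (z - h)              # measurement update
        p *= 1.0 - k * H
    return theta

random.seed(0)
true_az = math.radians(60.0)
phis = [math.radians(a) for a in range(0, 360, 5)]           # chair rotation
zs = [math.sin(true_az - phi) + random.gauss(0, 0.05) for phi in phis]
print(round(math.degrees(ekf_azimuth(zs, phis)), 1))         # near the true 60°
```

Rotation is what makes this work: a sine cue alone is front-back ambiguous, but sweeping the head angle through the measurements resolves the ambiguity, which is the point the manikin experiments exploit.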

  16. Wave field synthesis of moving virtual sound sources with complex radiation properties.

    PubMed

    Ahrens, Jens; Spors, Sascha

    2011-11-01

    An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.

  17. Acoustical deterrence of Silver Carp (Hypophthalmichthys molitrix)

    USGS Publications Warehouse

    Brooke J. Vetter,; Cupp, Aaron R.; Fredricks, Kim T.; Gaikowski, Mark P.; Allen F. Mensinger,

    2015-01-01

The invasive Silver Carp (Hypophthalmichthys molitrix) dominate large regions of the Mississippi River drainage and continue to expand their range northward, threatening the Laurentian Great Lakes. This study found that complex broadband sound (0–10 kHz) is effective in altering the behavior of Silver Carp, with implications for deterrent barriers or potential control measures (e.g., herding fish into nets). The phonotaxic response of Silver Carp was investigated using controlled experiments in outdoor concrete ponds (10 × 4.9 × 1.2 m). Pure tones (500–2000 Hz) and complex sound (underwater field recordings of outboard motors) were broadcast using underwater speakers. Silver Carp always reacted to the complex sounds with negative phonotaxis away from the sound source, and by alternating the speaker location the fish could be directed consistently, up to 37 consecutive times, to opposite ends of the large outdoor pond. However, fish habituated quickly to pure tones, reacting to only approximately 5% of these presentations and never showing more than two consecutive responses. Previous studies have demonstrated the success of sound barriers in preventing Silver Carp movement using pure tones, and this research suggests that a complex sound stimulus would be an even more effective deterrent.

  18. A critical review of the potential impacts of marine seismic surveys on fish & invertebrates.

    PubMed

    Carroll, A G; Przeslawski, R; Duncan, A; Gunning, M; Bruce, B

    2017-01-15

Marine seismic surveys produce high intensity, low-frequency impulsive sounds at regular intervals, with most sound produced between 10 and 300 Hz. Offshore seismic surveys have long been considered to be disruptive to fisheries, but there are few ecological studies that target commercially important species, particularly invertebrates. This review aims to summarise scientific studies investigating the impacts of low-frequency sound on marine fish and invertebrates, as well as to critically evaluate how such studies may apply to field populations exposed to seismic operations. We focus on marine seismic surveys due to their associated unique sound properties (i.e. acute, low-frequency, mobile source locations), as well as fish and invertebrates due to the commercial value of many species in these groups. The main challenges of seismic impact research are the translation of laboratory results to field populations over a range of sound exposure scenarios and the lack of sound exposure standardisation, which hinders the identification of response thresholds. An integrated multidisciplinary approach to manipulative and in situ studies is the most effective way to establish impact thresholds in the context of realistic exposure levels, but if that is not practical the limitations of each approach must be carefully considered. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  19. What's what in auditory cortices?

    PubMed

    Retsa, Chrysa; Matusz, Pawel J; Schnupp, Jan W H; Murray, Micah M

    2018-08-01

Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (i.e. who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to their pitch, speaker identity, uttered syllable ('what' dimensions) or their location ('where'). Sound acoustics were held constant across blocks; the only manipulation involved the sound dimension that participants had to attend to. The task-relevant dimension was varied across blocks. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography; the latter forcibly follow from changes in the configuration of underlying sources. There were no behavioural differences in discrimination of sounds across the four feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Snapshot recordings provide a first description of the acoustic signatures of deeper habitats adjacent to coral reefs of Moorea.

    PubMed

    Bertucci, Frédéric; Parmentier, Eric; Berthe, Cécile; Besson, Marc; Hawkins, Anthony D; Aubin, Thierry; Lecchini, David

    2017-01-01

Acoustic recording has been recognized as a valuable tool for non-intrusive monitoring of the marine environment, complementing traditional visual surveys. Acoustic surveys conducted on coral ecosystems have so far been restricted to barrier reefs and to shallow depths (10-30 m). Since they may provide refuge for coral reef organisms, monitoring outer reef slopes and describing the soundscapes of deeper environments could provide insights into the characteristics of different biotopes of coral ecosystems. In this study, the acoustic features of four different habitats, with different topographies and substrates, located at depths from 10 to 100 m, were recorded during daytime on the outer reef slope of the north coast of Moorea Island (French Polynesia). Barrier reefs appeared to be the noisiest habitats, whereas the average sound levels at the other habitats decreased with their distance from the reef and with increasing depth. However, sound levels were higher than expected from propagation models, indicating that these habitats possess their own sound sources. While reef sounds are known to attract marine larvae, sounds from deeper habitats may therefore also have a non-negligible attractive potential, coming into play before the reef itself.

  1. Further Progress in Noise Source Identification in High Speed Jets via Causality Principle

    NASA Technical Reports Server (NTRS)

    Panda, J.; Seasholtz, R. G.; Elam, K. A.

    2004-01-01

To locate noise sources in high-speed jets, the sound pressure fluctuations p′ measured at far-field locations were correlated with each of the density ρ, axial velocity u, radial velocity v, ρuu, and ρvv fluctuations measured at various points in fully expanded, unheated plumes of Mach number 0.95, 1.4, and 1.8. The velocity and density fluctuations were measured simultaneously using a recently developed, non-intrusive, point measurement technique based on molecular Rayleigh scattering (Seasholtz, Panda, and Elam, AIAA Paper 2002-0827). The technique uses a continuous-wave, narrow-linewidth laser, a Fabry-Perot interferometer, and photon-counting electronics. The far-field sound pressure fluctuations at 30° to the jet axis provided the highest correlation coefficients with all flow variables. The correlation coefficients decreased sharply with increasing microphone polar angle, and beyond about 60° most correlations fell below the experimental noise floor. Among all correlations, ⟨ρuu; p′⟩ showed the highest values, and the ⟨ρvv; p′⟩ correlations were, in all respects, very similar to ⟨ρuu; p′⟩. The remaining correlations with the 90° microphone fell below the noise floor. By moving the laser probe to various locations in the jet, it was found that the strongest noise source lies downstream of the end of the potential core and extends many diameters beyond it. Correlation measurements from the lip shear layer showed a Mach number dependency: while significant correlations were measured in the Mach 1.8 jet, values were mostly below the noise floor for the subsonic Mach 0.95 jet. Various additional analyses showed that fluctuations from large organized structures contributed most of the measured correlation, while those from small-scale structures fell below the noise floor.
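The causality-principle diagnostic is a normalized cross-correlation between a flow quantity measured in the plume and the delayed far-field pressure. A minimal sketch with synthetic signals (the signals and the lag are illustrative, not the experiment's data):

```python
import math

def correlation_coefficient(q, p, lag):
    """Normalized cross-correlation between a flow quantity q(t) and the
    far-field pressure p'(t) at a given lag in samples: a high value at the
    propagation delay marks that flow point as a noise source."""
    n = len(q) - lag
    qm = sum(q[:n]) / n
    pm = sum(p[lag:]) / n
    cov = sum((q[i] - qm) * (p[i + lag] - pm) for i in range(n)) / n
    sq = math.sqrt(sum((x - qm) ** 2 for x in q[:n]) / n)
    sp = math.sqrt(sum((x - pm) ** 2 for x in p[lag:]) / n)
    return cov / (sq * sp)

# A far-field signal that is a delayed copy of the flow quantity
# correlates perfectly at the propagation lag.
sig = [math.sin(0.1 * t) for t in range(500)]
p_far = [0.0] * 20 + sig[:-20]
print(round(correlation_coefficient(sig, p_far, 20), 3))  # → 1.0
```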

  2. Characterizing the Seismic Ocean Bottom Environment of the Bransfield Strait

    NASA Astrophysics Data System (ADS)

    Washington, B.; Lekic, V.; Schmerr, N. C.

    2017-12-01

Ocean bottom seismometers record ground motions that result from earthquakes, anthropogenic sound sources (e.g. propellers and air gun sources), ocean waves and currents, biological activity, as well as surface processes on the sea and coastal land. Over a two-week span in April 2001 (the austral late fall), ten stations arranged in eleven lines were deployed beneath the Bransfield Strait along the Antarctic Peninsula to passively record data before and after an active-source seismic survey. The goal of this study is to understand ocean bottom seismicity, identify centers of seismic activity, and characterize possible glaciological mechanisms of icequakes and tremors. The instruments were sampled at 200 Hz, allowing signals of icequakes, small earthquakes, and other high-frequency sources to be detected and located. By visualizing the data as spectrograms, we identify and document ground vibrations excited by local earthquakes, whale songs, and surface processes such as the cracking and movement of icebergs or ice shelves, including possible harmonic tremors from the ice or the nearby volcanic arc. Using the relative timing of P-wave arrivals, we locate the hypocenters of nearby earthquakes and icequakes, and present frequency-dependent polarization analysis of their waveforms. Marine mammal sounds were a substantial part of the overall acoustic environment; late March and early April are the best months to hear whales such as humpbacks, sperm whales, and orcas communicating with each other, because they are drawn to the cold, nutrient-rich Antarctic waters. We detect whales communicating for several hours in the dataset. Other extensively recorded sources resemble harmonic tremors, and we also identify signals possibly associated with waves set up on the notoriously stormy seas.

  3. Doppler effect for sound emitted by a moving airborne source and received by acoustic sensors located above and below the sea surface.

    PubMed

    Ferguson, B G

    1993-12-01

    The acoustic emissions from a propeller-driven aircraft are received by a microphone mounted just above ground level and then by a hydrophone located below the sea surface. The dominant feature in the output spectrum of each acoustic sensor is the spectral line corresponding to the propeller blade rate. A frequency estimation technique is applied to the acoustic data from each sensor so that the Doppler shift in the blade rate can be observed at short time intervals during the aircraft's transit overhead. For each acoustic sensor, the observed variation with time of the Doppler-shifted blade rate is compared with the variation predicted by a simple ray-theory model that assumes the atmosphere and the sea are distinct isospeed sound propagation media separated by a plane boundary. The results of the comparison are shown for an aircraft flying with a speed of about 250 kn at altitudes of 500, 700, and 1000 ft.
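The Doppler shift of the blade rate observed at each sensor follows the stationary-receiver formula f = f0 / (1 − (v/c) cos θ). A sketch with illustrative numbers (the blade rate and sound speed below are assumptions, not values from the paper):

```python
import math

def doppler_shifted(f0, speed, angle_deg, c=340.0):
    """Observed frequency of a tone f0 from a source moving at `speed` (m/s),
    where angle_deg is the angle between the source velocity and the line
    to the receiver (0° = directly approaching). Stationary receiver."""
    return f0 / (1.0 - (speed / c) * math.cos(math.radians(angle_deg)))

f_blade = 68.0                  # hypothetical propeller blade rate, Hz
v = 250 * 0.514444              # 250 kn converted to m/s
print(doppler_shifted(f_blade, v, 0.0))    # approaching: shifted up
print(doppler_shifted(f_blade, v, 180.0))  # receding: shifted down
```

Tracking this shift over the transit, with a second sound speed for the underwater path, is what the ray-theory model in the abstract predicts for each sensor.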

  4. Near-field noise of a single-rotation propfan at an angle of attack

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Envia, E.; Clark, B. J.; Groeneweg, J. F.

    1990-01-01

    The near field noise characteristics of a propfan operating at an angle of attack are examined utilizing the unsteady pressure field obtained from a 3-D Euler simulation of the propfan flowfield. The near field noise is calculated employing three different procedures: a direct computation method in which the noise field is extracted directly from the Euler solution, and two acoustic-analogy-based frequency domain methods which utilize the computed unsteady pressure distribution on the propfan blades as the source term. The inflow angles considered are -0.4, 1.6, and 4.6 degrees. The results of the direct computation method and one of the frequency domain methods show qualitative agreement with measurements. They show that an increase in the inflow angle is accompanied by an increase in the sound pressure level at the outboard wing boom locations and a decrease in the sound pressure level at the (inboard) fuselage locations. The trends in the computed azimuthal directivities of the noise field also conform to the measured and expected results.

  5. An Amplitude-Based Estimation Method for International Space Station (ISS) Leak Detection and Localization Using Acoustic Sensor Networks

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Madaras, Eric I.

    2009-01-01

    The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.
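The amplitude-based idea can be sketched as matching amplitude *ratios* under a 1/r decay assumption, so the unknown source strength cancels. The sensor layout and numbers below are hypothetical, not the ISS configuration:

```python
import itertools
import math

SENSORS = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (4.0, 3.0)]  # sensor (x, y), m

def locate_by_amplitude(amps):
    """Grid-search the position whose predicted 1/r amplitude ratios best
    match the measured ratios a_i/a_0 = r_0/r_i; the unknown source
    strength cancels in each ratio."""
    best, best_err = None, float("inf")
    for xi, yi in itertools.product(range(41), range(31)):
        x, y = xi / 10.0, yi / 10.0
        r = [math.hypot(x - sx, y - sy) or 1e-9 for sx, sy in SENSORS]
        err = sum((amps[i] / amps[0] - r[0] / r[i]) ** 2 for i in range(1, 4))
        if err < best_err:
            best, best_err = (x, y), err
    return best

# Synthesize 1/r amplitudes for a leak at (1.0, 2.0) and recover its position.
src = (1.0, 2.0)
amps = [1.0 / math.hypot(src[0] - sx, src[1] - sy) for sx, sy in SENSORS]
print(locate_by_amplitude(amps))  # → (1.0, 2.0)
```

In the real structure the 1/r assumption breaks down because of the baffling and near-field effects the abstract describes, which is exactly why a more robust amplitude-based estimator is needed.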

  6. Hydroacoustic Signals Recorded by the International Monitoring System

    NASA Astrophysics Data System (ADS)

    Blackman, D.; de Groot-Hedlin, C.; Orcutt, J.; Harben, P.

    2002-12-01

Networks of hydrophones, such as the hydroacoustic part of the International Monitoring System (IMS), and hydrophone arrays, such as the U.S. Navy operates, record many types of signals, some of which travel thousands of kilometers in the oceanic sound channel. Abyssal earthquakes generate many such individual events and occasionally occur in swarms. Here we focus on signals generated by other types of sources, illustrating their character with recent data, mostly from the Indian Ocean. Shipping generates signals in the 5-40 Hz band. Large airgun arrays can generate T-waves that travel across an ocean basin if the near-source seafloor has appropriate depth/slope. Airgun array shots from our 2001 experiment were located with an accuracy of 25-40 km at 700-1000 km ranges, using data from a Diego Garcia tripartite sensor station. Shots at greater range (up to 4800 km) were recorded at multiple stations, but their higher background noise levels in the 5-30 Hz band resulted in location errors of ~100 km. Imploding glass spheres shattered within the sound channel produce a very impulsive arrival, even after propagating 4400 km. Recordings of the sphere signal have energy concentrated in the band above 40 Hz. Natural sources such as undersea volcanic eruptions and marine mammals also produce signals that are clearly evident in hydrophone recordings. For whales, the frequency range is 20-120 Hz, and specific patterns of vocalization characterize different species. Volcanic eruptions typically produce intense swarms of acoustic activity that last days to weeks, and the source area can migrate tens of kilometers during the period. The utility of these types of hydroacoustic sources for research and/or monitoring purposes depends on the accuracy with which recordings can be used to locate and quantitatively characterize the source. Oceanic weather, both local and regional, affects background noise levels in key frequency bands at the recording stations. Databases used in forward modeling of propagation and acoustic losses can be sparse in remote regions. Our Indian Ocean results suggest that when bathymetric coverage is poor, predictions for 8 Hz propagation/loss match observations better than those for propagation of 30 Hz signals over 1000-km distances.
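Locating an event such as an airgun shot from the relative arrival times at several stations is a time-difference-of-arrival problem. A minimal grid-search sketch (the 2-D sensor layout, sound speed, and source position are illustrative assumptions, not the IMS geometry):

```python
import itertools
import math

SENSORS = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0)]  # hypothetical positions, km
C = 1.48  # nominal sound-channel speed, km/s

def locate(tdoas, grid=range(-100, 101)):
    """Grid-search a 2-D source position from time differences of arrival
    (seconds, relative to sensor 0), minimizing the squared TDOA misfit."""
    best, best_err = None, float("inf")
    for x, y in itertools.product(grid, grid):
        t = [math.hypot(x - sx, y - sy) / C for sx, sy in SENSORS]
        err = sum((t[i] - t[0] - tdoas[i - 1]) ** 2 for i in (1, 2))
        if err < best_err:
            best, best_err = (x, y), err
    return best

# Synthesize TDOAs for a source at (30, 40) km and recover it.
src = (30.0, 40.0)
times = [math.hypot(src[0] - sx, src[1] - sy) / C for sx, sy in SENSORS]
print(locate([times[1] - times[0], times[2] - times[0]]))  # → (30, 40)
```

The ~100 km errors quoted in the abstract come from noisy arrival picks and imperfect propagation models, not from the geometry itself, which is why background noise in the 5-30 Hz band matters so much.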

  7. Detection and modeling of the acoustic perturbation produced by the launch of the Space Shuttle using the Global Positioning System

    NASA Astrophysics Data System (ADS)

    Bowling, T. J.; Calais, E.; Dautermann, T.

    2010-12-01

Rocket launches are known to produce infrasonic pressure waves that propagate into the ionosphere, where coupling between electrons and neutral particles induces fluctuations in ionospheric electron density observable in GPS measurements. We have detected ionospheric perturbations following the launch of space shuttle Atlantis on 11 May 2009 using an array of continuously operating GPS stations across the southeastern coast of the United States and in the Caribbean. Detections are prominent to the south of the westward shuttle trajectory in the area of maximum coupling between the acoustic wave and Earth's magnetic field, move at speeds consistent with the speed of sound, and show coherency between stations covering a large geographic range. We model the perturbation as an explosive source located at the point of closest approach between the shuttle path and each sub-ionospheric point. The neutral pressure wave is propagated using ray tracing, resultant changes in electron density are calculated at points of intersection between rays and satellite-to-receiver lines of sight, and synthetic integrated electron content values are derived. Arrival times of the observed and synthesized waveforms match closely, with discrepancies related to errors in the a priori sound speed model used for ray tracing. Current work includes the estimation of source location and energy.

  8. Localization of virtual sound at 4 Gz.

    PubMed

    Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L

    2005-02-01

    Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.

  9. Rocket noise - A review

    NASA Astrophysics Data System (ADS)

    McInerny, S. A.

    1990-10-01

This paper reviews what is known about far-field rocket noise from the controlled studies of the late 1950s and 1960s and from launch data. The peak dimensionless frequency, the dependence of overall sound power on exhaust parameters, and the directivity of the overall sound power of rockets are compared to those of subsonic jets and turbojets. The location of the dominant sound source in the rocket exhaust plume and the mean flow velocity in this region are discussed and shown to provide a qualitative explanation for the low peak Strouhal number, fDe/Ve, and the large angle of maximum directivity. Lastly, two empirical prediction methods are compared with data from launches of a Titan family vehicle (two solid rocket motors of 5.7 x 10^6 N thrust each) and the Saturn V (five liquid oxygen/rocket propellant engines of 6.7 x 10^6 N thrust each). The agreement is favorable. In contrast, these methods appear to overpredict the far-field sound pressure levels generated by the Space Shuttle.
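The Strouhal scaling above fixes the spectral peak once the exhaust velocity and nozzle diameter are known: f = St·Ve/De. A sketch with illustrative numbers (the values below are assumptions, not figures from the review):

```python
def peak_frequency(strouhal, v_exhaust, d_exit):
    """Peak radiated frequency implied by the dimensionless Strouhal
    number St = f*De/Ve: f = St * Ve / De."""
    return strouhal * v_exhaust / d_exit

# Illustrative numbers: a low rocket-like Strouhal number with a fast,
# large plume puts the spectral peak at tens of hertz.
print(peak_frequency(0.02, 2500.0, 2.0))  # → 25.0 Hz
```

This is why rocket noise peaks far lower in frequency than subsonic jet noise: the Strouhal number is small and the exhaust diameter large, even though the exhaust velocity is high.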

  10. A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)

    1996-01-01

The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source, while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic-analogy-based calculation which regards the surface shear as an acoustically compact dipole source of sound.

  11. Sound pressure distribution within natural and artificial human ear canals: forward stimulation.

    PubMed

    Ravicz, Michael E; Tao Cheng, Jeffrey; Rosowski, John J

    2014-12-01

    This work is part of a study of the interaction of sound pressure in the ear canal (EC) with tympanic membrane (TM) surface displacement. Sound pressures were measured with 0.5-2 mm spacing at three locations within the shortened natural EC or an artificial EC in human temporal bones: near the TM surface, within the tympanic ring plane, and in a plane transverse to the long axis of the EC. Sound pressure was also measured at 2-mm intervals along the long EC axis. The sound field is described well by the size and direction of planar sound pressure gradients, the location and orientation of standing-wave nodal lines, and the location of longitudinal standing waves along the EC axis. Standing-wave nodal lines perpendicular to the long EC axis are present on the TM surface above 11-16 kHz in the natural or artificial EC. The range of sound pressures was larger in the tympanic ring plane than at the TM surface or in the transverse EC plane. Longitudinal standing-wave patterns were stretched. The tympanic-ring sound field is a useful approximation of the TM sound field, and the artificial EC approximates the natural EC.

  12. CLIVAR Mode Water Dynamics Experiment (CLIMODE), Fall 2006 R/V Oceanus Voyage 434, November 16, 2006-December 3, 2006

    DTIC Science & Technology

    2007-12-01

    except for the dive zero time which needed to be programmed during the cruise when the deployment schedule dates were confirmed. ... ACM - Aanderaa ACM...guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally...battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November

  13. Statistics of natural reverberation enable perceptual separation of sound and space

    PubMed Central

    Traer, James; McDermott, Josh H.

    2016-01-01

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
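
    The regularity reported above, exponential decay at frequency-dependent rates with mid frequencies reverberating longest, can be sketched as a toy impulse response synthesizer. The band centers and decay times (RT60 values) below are assumed for illustration, not taken from the measured IRs.

```python
import numpy as np

# Minimal sketch: a toy impulse response built from narrow-band noise with
# frequency-dependent exponential decay. Band centers and RT60s are assumed.

def band_ir(fc, rt60, fs=16000, dur=1.0, seed=0):
    """Decaying band component: a noise-modulated carrier at fc (Hz) whose
    envelope falls by 60 dB at t = rt60 (s)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    envelope = 10.0 ** (-3.0 * t / rt60)        # -60 dB at t = rt60
    carrier = np.sin(2 * np.pi * fc * t)
    return envelope * carrier * rng.standard_normal(t.size)

# Mid frequencies reverberate longest; highs and lows die out faster.
bands = {125: 0.4, 1000: 1.0, 8000: 0.3}        # Hz -> assumed RT60 (s)
ir = sum(band_ir(fc, rt, seed=i) for i, (fc, rt) in enumerate(bands.items()))
```

    Convolving dry audio with `ir` simulates reverberation; manipulating the RT60 values away from real-world-like profiles mimics the "atypical IR" conditions used in the listening tests.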

  14. Statistics of natural reverberation enable perceptual separation of sound and space.

    PubMed

    Traer, James; McDermott, Josh H

    2016-11-29

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.

  15. Effect of Free Jet on Refraction and Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Georgiadis, Nicholas J.; Bridges, James E.; Dippold, Vance F., III

    2005-01-01

    This article investigates the effect of a free jet on the sound radiated from a jet. In particular, the role of an infinite wind tunnel, which simulates the forward flight condition, is compared to that of a finite wind tunnel. The latter configuration is the one usually used in experiments, where the microphones are located in a static ambient medium far outside the free jet. To study the effect of the free jet on noise, both propagation and source strength need to be addressed. In this work, the exact Green's function in a locally parallel flow is derived for a simulated flight case. Numerical examples are presented that show a reduction in the magnitude of the Green's function in the aft arc and an increase in the forward arc for the simulated flight condition. The effect of the finite wind tunnel on refraction is sensitive to the source location and is most pronounced in the aft arc. A Reynolds-averaged Navier-Stokes (RANS) solution yields the required mean flow and turbulence scales that are used in the jet mixing noise spectrum calculations. In addition to the sound/flow interaction, the separate effects of source strength and of elongation of the noise-generating region of the jet in forward flight are studied. Comparisons are made with experiments for the static and finite-tunnel cases. Finally, the standard free-jet shear corrections that convert the finite wind tunnel measurements to an ideal wind tunnel arrangement are evaluated.

  16. Late-Quaternary glaciation and postglacial emergence, southern Eureka Sound, high-Arctic Canada

    NASA Astrophysics Data System (ADS)

    O Cofaigh, Colm Seamus

    Eureka Sound is the inter-island channel separating Ellesmere and Axel Heiberg islands, High Arctic Canada. This thesis reconstructs the glacial and sea level history of southern Eureka Sound through surficial geological mapping, studies of glacial sedimentology and geomorphology, surveying of raised marine shorelines, radiocarbon dating of marine shells and driftwood and surface exposure dating of erratics and bedrock. Granite dispersal trains, shelly till and ice-moulded bedrock record westerly flow of warm-based, regional ice into Eureka Sound from a source on southeastern Ellesmere Island during the late Wisconsinan. Regional ice was coalescent with local ice domes over Raanes and northern Svendsen peninsulas. Marine limit (dating ≤9.2 ka BP; ≤9.9 ka cal BP) is inset into the dispersal trains and records early Holocene deglaciation of regional ice. Collectively these data indicate an extensive ice cover in southern Eureka Sound during the Last Glacial Maximum. Ice-divides were located along the highlands of central Ellesmere and Axel Heiberg islands, from which ice converged on Eureka Sound, and subsequently flowed north and south along the channel. Deglaciation was characterised by a two-step retreat pattern, likely triggered by eustatic sea level rise and abrupt early Holocene warming. Initial break-up and radial retreat of ice in Eureka Sound and the larger fiords preceded terrestrial stabilisation along coastlines and inner fiords. The location of deglacial depocentres was predominantly controlled by fiord bathymetry. Regionally, two-step deglaciation is reflected by prominent contrasts in glacial geomorphology between the inner and outer parts of many fiords. Glacial sedimentological and geomorphological evidence indicates spatial variation in basal thermal regime between retreating trunk glaciers. Holocene emergence of up to 150 m asl along southern Eureka Sound is recorded by raised marine deltas, beaches and washing limits. 
Emergence curves exhibit marked contrasts in the form and rate of initial unloading. Isobases drawn on the 8.5 ka shoreline for greater Eureka Sound demonstrate that a cell of highest emergence extends along the length of the channel, and closes in the vicinity of the entrance to Norwegian Bay. The isobase pattern indicates a distinct loading centre over the sound, and in conjunction with glacial geological evidence, suggests that the thickest late Wisconsinan ice lay over the channel.

  17. Detection, Source Location, and Analysis of Volcano Infrasound

    NASA Astrophysics Data System (ADS)

    McKee, Kathleen F.

    The study of volcano infrasound focuses on low frequency sound from volcanoes, how volcanic processes produce it, and the path it travels from the source to our receivers. In this dissertation we focus on detecting, locating, and analyzing infrasound from a number of different volcanoes using a variety of analysis techniques. These works will help inform future volcano monitoring using infrasound with respect to infrasonic source location, signal characterization, volatile flux estimation, and back-azimuth to source determination. Source location is an important component of the study of volcano infrasound and in its application to volcano monitoring. Semblance is a forward grid search technique and a common source location method in infrasound studies as well as seismology. We evaluated the effectiveness of semblance in the presence of significant topographic features for explosions of Sakurajima Volcano, Japan, while taking into account temperature and wind variations. We show that topographic obstacles at Sakurajima cause a semblance source location offset of 360-420 m to the northeast of the actual source location. In addition, we found that, despite the consistent offset in source location, semblance can still be a useful tool for determining periods of volcanic activity. Infrasonic signal characterization follows signal detection and source location in volcano monitoring in that it informs us of the type of volcanic activity detected. In large volcanic eruptions the lowermost portion of the eruption column is momentum-driven and termed the volcanic jet or gas-thrust zone. This turbulent fluid-flow perturbs the atmosphere and produces a sound similar to that of jet and rocket engines, known as jet noise. We deployed an array of infrasound sensors near an accessible, less hazardous fumarolic jet at Aso Volcano, Japan as an analogue to large, violent volcanic eruption jets. 
We recorded volcanic jet noise at 57.6° from vertical, a recording angle not normally feasible in volcanic environments. The fumarolic jet noise was found to have a sustained, low amplitude signal with a spectral peak between 7 and 10 Hz. From thermal imagery we measure the jet temperature (~260 °C) and estimate the jet diameter (~2.5 m). From the estimated jet diameter, an assumed Strouhal number of 0.19, and the jet noise peak frequency, we estimated the jet velocity to be 79-132 m/s. We used published gas data to then estimate the volatile flux at 160-270 kg/s (14,000-23,000 t/d). These estimates are typically difficult to obtain in volcanic environments, but provide valuable information on the eruption. At regional and global length scales we use infrasound arrays to detect signals and determine their source back-azimuths. A ground coupled airwave (GCA) occurs when an incident acoustic pressure wave encounters the Earth's surface and part of the energy of the wave is transferred to the ground. GCAs are commonly observed from sources such as volcanic eruptions, bolides, meteors, and explosions. They have been observed to have retrograde particle motion. When recorded on collocated seismo-acoustic sensors, the phase between the infrasound and seismic signals is 90°. If the sensors are separated, wind noise is usually incoherent and an additional phase is added due to the sensor separation. We utilized the additional phase and the characteristic particle motion to determine a unique back-azimuth solution to an acoustic source. The additional phase will be different depending on the direction from which a wave arrives. Our technique was tested using synthetic seismo-acoustic data from a coupled Earth-atmosphere 3D finite difference code and then applied to two well-constrained datasets: Mount St. Helens, USA, and Mount Pagan, Commonwealth of the Northern Mariana Islands. 
The results from our method are within <1° to 5° of the actual back-azimuths and of those determined by traditional infrasound array processing. Ours is a new method to detect and determine the back-azimuth to infrasonic signals, which will be useful when financial and spatial resources are limited.
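
    The jet-velocity estimate described in this record follows from the jet-noise Strouhal relation St = f·D/U, rearranged for U. The sketch below uses the record's reported peak frequency, diameter, and Strouhal number; the spread of the published 79-132 m/s range presumably also reflects uncertainty in the frequency and diameter estimates.

```python
# Sketch of the velocity estimate above: for jet noise, St = f * D / U,
# so U = f * D / St. Inputs are the record's reported values.

def jet_velocity(freq_hz: float, diameter_m: float, strouhal: float) -> float:
    """Jet velocity (m/s) implied by peak frequency, diameter, Strouhal number."""
    return freq_hz * diameter_m / strouhal

# Upper spectral peak (10 Hz), jet diameter ~2.5 m, assumed St = 0.19:
u_high = jet_velocity(10.0, 2.5, 0.19)
print(round(u_high, 1))  # -> 131.6, matching the upper end of 79-132 m/s
```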

  18. Development of the sound localization cues in cats

    NASA Astrophysics Data System (ADS)

    Tollin, Daniel J.

    2004-05-01

    Cats are a common model for developmental studies of the psychophysical and physiological mechanisms of sound localization. Yet, there are few studies on the development of the acoustical cues to location in cats. The magnitudes of the three main cues, interaural differences in time (ITDs) and level (ILDs), and monaural spectral shape cues, vary with location in adults. However, the increasing interaural distance associated with a growing head and pinnae during development will result in cues that change continuously until maturation is complete. Here, we report measurements, in cats aged 1 week to adulthood, of the physical dimensions of the head and pinnae and of the localization cues, computed from measurements of directional transfer functions. At 1 week, ILD depended little on azimuth for frequencies <6-7 kHz, maximum ITD was 175 μs, and for sources varying in elevation, a prominent spectral notch was located at higher frequencies than in the older cats. As cats develop, the spectral cues and the frequencies at which ILDs become substantial (>10 dB) shift to lower frequencies, and the maximum ITD increases to nearly 370 μs. Changes in the cues are correlated with the increasing size of the head and pinnae. [Work supported by NIDCD DC05122.]
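
    The link between head size and maximum ITD reported above can be sketched with the spherical-head (Woodworth) model, ITD_max = (r/c)(1 + π/2) for a source at 90° azimuth. The model choice and the speed of sound are assumptions for illustration, not from the record.

```python
import math

# Sketch: spherical-head (Woodworth) model relating head radius to the
# maximum interaural time difference. Model and c = 343 m/s are assumed.

def max_itd(radius_m: float, c: float = 343.0) -> float:
    """ITD_max (s) for a source at 90 degrees: (r/c) * (1 + pi/2)."""
    return (radius_m / c) * (1.0 + math.pi / 2.0)

def radius_for_itd(itd_s: float, c: float = 343.0) -> float:
    """Effective head radius (m) implied by a given maximum ITD."""
    return itd_s * c / (1.0 + math.pi / 2.0)

# Effective radii implied by the record's ITDs:
r_week1 = radius_for_itd(175e-6)   # 1-week kitten, ~2.3 cm
r_adult = radius_for_itd(370e-6)   # adult cat, ~4.9 cm
```

    The roughly twofold growth in effective radius mirrors the roughly twofold growth in maximum ITD, consistent with the correlation with head size noted in the abstract.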

  19. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

    A fundamental function of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, true sound localization by fish remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization to sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed, as well as mean path angle to the speaker, will also be presented. Results suggest a strong localization of the round goby to a sound source, with some differential sound specificity.

  20. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, the need for invisible sound sources and a very specific acoustical environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) less than ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.

  1. Student's Second-Language Grade May Depend on Classroom Listening Position.

    PubMed

    Hurtig, Anders; Sörqvist, Patrik; Ljung, Robert; Hygge, Staffan; Rönnberg, Jerker

    2016-01-01

    The purpose of this experiment was to explore whether listening positions (close or distant location from the sound source) in the classroom, and classroom reverberation, influence students' score on a test for second-language (L2) listening comprehension (i.e., comprehension of English in Swedish speaking participants). The listening comprehension test administered was part of a standardized national test of English used in the Swedish school system. A total of 125 high school pupils, 15 years old, participated. Listening position was manipulated within subjects, classroom reverberation between subjects. The results showed that L2 listening comprehension decreased as distance from the sound source increased. The effect of reverberation was qualified by the participants' baseline L2 proficiency. A shorter reverberation was beneficial to participants with high L2 proficiency, while the opposite pattern was found among the participants with low L2 proficiency. The results indicate that listening comprehension scores, and hence students' grade in English, may depend on students' classroom listening position.

  2. Student’s Second-Language Grade May Depend on Classroom Listening Position

    PubMed Central

    Sörqvist, Patrik; Ljung, Robert; Hygge, Staffan; Rönnberg, Jerker

    2016-01-01

    The purpose of this experiment was to explore whether listening positions (close or distant location from the sound source) in the classroom, and classroom reverberation, influence students’ score on a test for second-language (L2) listening comprehension (i.e., comprehension of English in Swedish speaking participants). The listening comprehension test administered was part of a standardized national test of English used in the Swedish school system. A total of 125 high school pupils, 15 years old, participated. Listening position was manipulated within subjects, classroom reverberation between subjects. The results showed that L2 listening comprehension decreased as distance from the sound source increased. The effect of reverberation was qualified by the participants’ baseline L2 proficiency. A shorter reverberation was beneficial to participants with high L2 proficiency, while the opposite pattern was found among the participants with low L2 proficiency. The results indicate that listening comprehension scores—and hence students’ grade in English—may depend on students’ classroom listening position. PMID:27304980

  3. Temporal and Spatial Comparisons of Underwater Sound Signatures of Different Reef Habitats in Moorea Island, French Polynesia.

    PubMed

    Bertucci, Frédéric; Parmentier, Eric; Berten, Laëtitia; Brooker, Rohan M; Lecchini, David

    2015-01-01

    As environmental sounds are used by larval fish and crustaceans to locate and orientate towards habitat during settlement, variations in the acoustic signature produced by habitats could provide valuable information about habitat quality, helping larvae to differentiate between potential settlement sites. However, very little is known about how acoustic signatures differ between proximate habitats. This study described within- and between-site differences in the sound spectra of five contiguous habitats at Moorea Island, French Polynesia: the inner reef crest, the barrier reef, the fringing reef, a pass and a coastal mangrove forest. Habitats with coral (inner, barrier and fringing reefs) were characterized by a similar sound spectrum with average intensities ranging from 70 to 78 dB re 1 μPa Hz⁻¹. The mangrove forest had a lower sound intensity of 70 dB re 1 μPa Hz⁻¹ while the pass was characterized by a higher sound level with an average intensity of 91 dB re 1 μPa Hz⁻¹. Habitats showed significantly different intensities for most frequencies, and a decreasing intensity gradient was observed from the reef to the shore. While habitats close to the shore showed no significant diel variation in sound intensities, sound levels increased at the pass during the night and at the barrier reef during the day. These two habitats also appeared to be louder in the north than in the west. These findings suggest that daily variations in sound intensity and across-reef sound gradients could be a valuable source of information for settling larvae. They also provide further evidence that closely related habitats, separated by less than 1 km, can differ significantly in their spectral composition and that these signatures might be typical and conserved along the coast of Moorea.

  4. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
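
    Both the amplitude-extraction step of the SFA phase and the pressure matching step of the SFS phase reduce to a Tikhonov-regularized least-squares inversion, min ||Gq − p||² + λ||q||². A minimal sketch, with a toy transfer matrix and amplitudes standing in for the array geometries described above:

```python
import numpy as np

# Sketch of Tikhonov-regularized inversion: solve q = (G^H G + lam I)^-1 G^H p.
# G, p, and lam below are toy values, not the paper's measured room responses.

def tikhonov(G: np.ndarray, p: np.ndarray, lam: float) -> np.ndarray:
    """Regularized least-squares solution of G q = p."""
    GhG = G.conj().T @ G
    return np.linalg.solve(GhG + lam * np.eye(GhG.shape[0]), G.conj().T @ p)

rng = np.random.default_rng(1)
G = rng.standard_normal((24, 8))   # e.g. 24 mics x 8 plane-wave components
q_true = rng.standard_normal(8)
p = G @ q_true                     # noiseless "measured" pressures
q_hat = tikhonov(G, p, lam=1e-6)   # small lam: near-exact recovery here
```

    In practice λ trades off fidelity against noise amplification, which is why the abstract stresses that the choice of regularization parameters is vital.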

  5. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

    To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180° arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
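
    The root mean square error metric used here to quantify localization is simply the RMS of the angular difference between response and target azimuths across trials. A minimal sketch; the trial data below are invented for illustration.

```python
import math

# Sketch of the RMS localization-error metric: RMS of
# (response azimuth - target azimuth) over trials, in degrees.
# The trial data are hypothetical, not from the study.

def rms_error(responses, targets):
    """Root mean square of per-trial azimuth errors (degrees)."""
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(responses, targets))
                     / len(targets))

targets = [-60, -30, 0, 30, 60]        # loudspeaker azimuths (degrees)
responses = [-45, -30, 15, 30, 75]     # hypothetical listener responses
print(round(rms_error(responses, targets), 1))  # -> 11.6
```

    Lower RMS error means better localization; a listener responding at chance across a 180° arc yields a much larger value, which is how the bimodal SSD-CI distribution above separates from NH performance.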

  6. Computational study of the interaction between a shock and a near-wall vortex using a weighted compact nonlinear scheme

    NASA Astrophysics Data System (ADS)

    Zuo, Zhifeng; Maekawa, Hiroshi

    2014-02-01

    The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The generation mechanism of the newly discovered third sound, due to the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.

  7. Detection of a Novel Mechanism of Acousto-Optic Modulation of Incoherent Light

    PubMed Central

    Jarrett, Christopher W.; Caskey, Charles F.; Gore, John C.

    2014-01-01

    A novel form of acoustic modulation of light from an incoherent source has been detected in water as well as in turbid media. We demonstrate that patterns of modulated light intensity appear to propagate as the optical shadow of the density variations caused by ultrasound within an illuminated ultrasonic focal zone. This pattern differs from previous reports of acousto-optical interactions that produce diffraction effects that rely on phase shifts and changes in light directions caused by the acoustic modulation. Moreover, previous studies of acousto-optic interactions have mainly reported the effects of sound on coherent light sources via photon tagging, and/or the production of diffraction phenomena from phase effects that give rise to discrete sidebands. We aimed to assess whether the effects of ultrasound modulation of the intensity of light from an incoherent light source could be detected directly, and how the acoustically modulated (AOM) light signal depended on experimental parameters. Our observations suggest that ultrasound at moderate intensities can induce sufficiently large density variations within a uniform medium to cause measurable modulation of the intensity of an incoherent light source by absorption. Light passing through a region of high intensity ultrasound then produces a pattern that is the projection of the density variations within the region of their interaction. The patterns exhibit distinct maxima and minima that are observed at locations much different from those predicted by Raman-Nath, Bragg, or other diffraction theory. The observed patterns scaled appropriately with the geometrical magnification and sound wavelength. We conclude that these observed patterns are simple projections of the ultrasound induced density changes which cause spatial and temporal variations of the optical absorption within the illuminated sound field. 
These effects potentially provide a novel method for visualizing sound fields and may assist the interpretation of other hybrid imaging methods. PMID:25105880

  8. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, obtaining the corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way in 3D space to reconstruct the signal of a sound source in the environment with airflow instead of the numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  9. Efficiency of vibrational sounding in parasitoid host location depends on substrate density.

    PubMed

    Fischer, S; Samietz, J; Dorn, S

    2003-10-01

Parasitoids of concealed hosts have to drill through a substrate with their ovipositor for successful parasitization. Hymenopteran species in this drill-and-sting guild locate immobile pupal hosts by vibrational sounding, i.e., echolocation on solid substrate. Although this host location strategy is assumed to be common among the Orussidae and Ichneumonidae, there is as yet no information on whether it is adapted to characteristics of the host microhabitat. This study examined the effect of substrate density on responsiveness and host location efficiency in two pupal parasitoids, Pimpla turionellae and Xanthopimpla stemmator (Hymenoptera: Ichneumonidae), with different host-niche specialization and corresponding ovipositor morphology. Location and frequency of ovipositor insertions were scored on cylindrical plant stem models of various densities. Substrate density had a significant negative effect on responsiveness, number of ovipositor insertions, and host location precision in both species. The more niche-specific species, X. stemmator, showed higher host location precision and insertion activity. These results show that vibrational sounding is adapted to the host microhabitat of the parasitoid species using this host location strategy. We suggest that the attenuation of pulses during vibrational sounding is the energetically costly limiting factor driving this adaptation.

  10. Experimental Simulation of Active Control With On-line System Identification on Sound Transmission Through an Elastic Plate

    NASA Technical Reports Server (NTRS)

    1998-01-01

An adaptive control algorithm with on-line system identification capability has been developed. A great advantage of this scheme is that it requires no additional system identification mechanism, such as an uncorrelated random signal generator serving as the system identification source. A time-varying plate-cavity system is used to demonstrate the control performance of the algorithm. The time-varying system consists of a stainless-steel plate bolted down over the opening of a rigid cavity whose effective depth is changed over time by varying the water level inside. For a given externally located harmonic sound excitation, the system identification and the control are executed simultaneously to minimize the transmitted sound in the cavity. The control performance of the algorithm is examined for two cases. In the first case, the cavity contains no water and the external disturbance frequency is swept at 1 Hz/s. The result shows excellent frequency tracking with 40 dB of cavity internal sound suppression. In the second case, the cavity is initially empty and the water level is then raised to 3/20 of the cavity depth over 60 seconds while the external sound excitation is held at a fixed frequency. The cavity resonant frequency therefore decreases and passes through the external excitation frequency. The algorithm achieves 40 dB of transmitted noise suppression without compromising its system identification tracking capability.

  11. Sound pressure distribution within natural and artificial human ear canals: Forward stimulation

    PubMed Central

    Ravicz, Michael E.; Tao Cheng, Jeffrey; Rosowski, John J.

    2014-01-01

This work is part of a study of the interaction of sound pressure in the ear canal (EC) with tympanic membrane (TM) surface displacement. Sound pressures were measured with 0.5–2 mm spacing at three locations within the shortened natural EC or an artificial EC in human temporal bones: near the TM surface, within the tympanic ring plane, and in a plane transverse to the long axis of the EC. Sound pressure was also measured at 2-mm intervals along the long EC axis. The sound field is described well by the size and direction of planar sound pressure gradients, the location and orientation of standing-wave nodal lines, and the location of longitudinal standing waves along the EC axis. Standing-wave nodal lines perpendicular to the long EC axis are present on the TM surface at frequencies above 11–16 kHz in the natural or artificial EC. The range of sound pressures was larger in the tympanic ring plane than at the TM surface or in the transverse EC plane. Longitudinal standing-wave patterns were stretched. The tympanic-ring sound field is a useful approximation of the TM sound field, and the artificial EC approximates the natural EC. PMID:25480061

  12. Leak locating microphone, method and system for locating fluid leaks in pipes

    DOEpatents

    Kupperman, David S.; Spevak, Lev

    1994-01-01

A leak detecting microphone inserted directly into fluid within a pipe includes a housing having a first end being inserted within the pipe and a second opposed end extending outside the pipe. A diaphragm is mounted within the first housing end and an acoustic transducer is coupled to the diaphragm for converting acoustical signals to electrical signals. A plurality of apertures are provided in the housing first end, the apertures located both above and below the diaphragm, thereby equalizing fluid pressure on either side of the diaphragm. A leak locating system and method are provided for locating fluid leaks within a pipe. A first microphone is installed within fluid in the pipe at a first selected location and sound is detected at the first location. A second microphone is installed within fluid in the pipe at a second selected location and sound is detected at the second location. A cross-correlation between the sound detected at the first and second locations is used to identify the leak location.
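The cross-correlation step can be illustrated with a minimal sketch: synthetic broadband leak noise is delayed to two sensors, the lag of the correlation peak gives the arrival-time difference tau = t1 - t2, and the leak position follows from x = (D + c*tau)/2. The sample rate, sensor spacing, and assumed in-fluid sound speed are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000.0   # sample rate, Hz
c = 1400.0      # assumed sound speed in the fluid, m/s
D = 70.0        # sensor spacing along the pipe, m
x_true = 25.0   # leak position measured from sensor 1, m

# Leak noise arrives at the sensors after travelling x and D - x.
n1 = int(round(x_true / c * fs))
n2 = int(round((D - x_true) / c * fs))
s = rng.standard_normal(4096)
sig1 = np.concatenate([np.zeros(n1), s])
sig2 = np.concatenate([np.zeros(n2), s])
N = min(len(sig1), len(sig2))
sig1, sig2 = sig1[:N], sig2[:N]

# Cross-correlate; positive lag means sig1 arrives later than sig2.
lag = np.argmax(np.correlate(sig1, sig2, "full")) - (N - 1)
tau = lag / fs               # t1 - t2
x_est = (D + c * tau) / 2.0  # from t1 - t2 = (2x - D) / c
```

The residual error here comes only from rounding the delays to whole samples; in practice dispersion and band-limited leak noise broaden the correlation peak.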

  13. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630

  14. Aeroacoustic Characterization of the NASA Ames Experimental Aero-Physics Branch 32- by 48-Inch Subsonic Wind Tunnel with a 24-Element Phased Microphone Array

    NASA Technical Reports Server (NTRS)

    Costanza, Bryan T.; Horne, William C.; Schery, S. D.; Babb, Alex T.

    2011-01-01

    The Aero-Physics Branch at NASA Ames Research Center utilizes a 32- by 48-inch subsonic wind tunnel for aerodynamics research. The feasibility of acquiring acoustic measurements with a phased microphone array was recently explored. Acoustic characterization of the wind tunnel was carried out with a floor-mounted 24-element array and two ceiling-mounted speakers. The minimum speaker level for accurate level measurement was evaluated for various tunnel speeds up to a Mach number of 0.15 and streamwise speaker locations. A variety of post-processing procedures, including conventional beamforming and deconvolutional processing such as TIDY, were used. The speaker measurements, with and without flow, were used to compare actual versus simulated in-flow speaker calibrations. Data for wind-off speaker sound and wind-on tunnel background noise were found valuable for predicting sound levels for which the speakers were detectable when the wind was on. Speaker sources were detectable 2 - 10 dB below the peak background noise level with conventional data processing. The effectiveness of background noise cross-spectral matrix subtraction was assessed and found to improve the detectability of test sound sources by approximately 10 dB over a wide frequency range.
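The background-noise cross-spectral matrix (CSM) subtraction can be sketched at a single frequency as follows, assuming the tunnel background noise is statistically independent of the test source and stationary between the wind-on runs with and without the speaker. The steering vector, noise level, and snapshot counts are invented for illustration; this is not the actual array geometry or the TIDY processing used in the test.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 24, 5000  # microphones, snapshots

def csm(X):
    """Cross-spectral matrix estimate from snapshots X (mics x snapshots)."""
    return X @ X.conj().T / X.shape[1]

# Hypothetical single-frequency snapshots: a coherent source seen through a
# steering vector a, buried in much louder tunnel background noise.
a = np.exp(1j * 2 * np.pi * rng.random(M))[:, None]
src = a * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
noise = 4.0 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))

csm_total = csm(src + noise)  # wind on, speaker on
csm_bg = csm(4.0 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))))  # wind on, speaker off
csm_src = csm(src)            # what beamforming would ideally see

err_sub = np.linalg.norm(csm_total - csm_bg - csm_src)  # after subtraction
err_raw = np.linalg.norm(csm_total - csm_src)           # without subtraction
```

Because the independent cross terms average toward zero with many snapshots, the subtracted CSM is far closer to the source-only CSM, which is the mechanism behind the roughly 10 dB detectability improvement reported above.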

  15. Converting a Monopole Emission into a Dipole Using a Subwavelength Structure

    NASA Astrophysics Data System (ADS)

    Fan, Xu-Dong; Zhu, Yi-Fan; Liang, Bin; Cheng, Jian-chun; Zhang, Likun

    2018-03-01

High-efficiency emission of multipoles is unachievable by a source much smaller than the wavelength, preventing compact acoustic devices for generating directional sound beams. Here, we present a primary scheme towards solving this problem by numerically and experimentally enclosing a monopole sound source in a structure with a dimension of around 1/10 of the sound wavelength to emit a dipolar field. The radiated sound power is found to be more than twice that of a bare dipole. Our study of efficient emission of directional low-frequency sound from a monopole source in a subwavelength space may have applications such as focused ultrasound for imaging, directional underwater sound beams, miniaturized sonar, etc.
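The dipolar field the structure emulates can be seen in the textbook limit of two out-of-phase monopoles separated by a distance far below the wavelength; the frequency and separation below are arbitrary illustration values, not the device's parameters.

```python
import numpy as np

f = 500.0  # Hz (illustrative)
c = 343.0  # m/s
k = 2 * np.pi * f / c
d = 0.02   # monopole separation, far below the 0.686 m wavelength

theta = np.linspace(0.0, np.pi, 181)  # angle from the dipole axis
# Far-field pressure magnitude of two opposite-sign monopoles of unit strength:
p = np.abs(2.0 * np.sin(0.5 * k * d * np.cos(theta)))
```

The pattern has its maximum on the axis, a null broadside, and for k*d << 1 the on-axis level approaches k*d, the familiar inefficiency of a bare compact dipole that the subwavelength structure is designed to overcome.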

  16. Evolution of directional hearing in moths via conversion of bat detection devices to asymmetric pressure gradient receivers

    PubMed Central

    Reid, Andrew; Marin-Cudraz, Thibaut

    2016-01-01

    Small animals typically localize sound sources by means of complex internal connections and baffles that effectively increase time or intensity differences between the two ears. However, some miniature acoustic species achieve directional hearing without such devices, indicating that other mechanisms have evolved. Using 3D laser vibrometry to measure tympanum deflection, we show that female lesser waxmoths (Achroia grisella) can orient toward the 100-kHz male song, because each ear functions independently as an asymmetric pressure gradient receiver that responds sharply to high-frequency sound arriving from an azimuth angle 30° contralateral to the animal's midline. We found that females presented with a song stimulus while running on a locomotion compensation sphere follow a trajectory 20°–40° to the left or right of the stimulus heading but not directly toward it, movement consistent with the tympanum deflections and suggestive of a monaural mechanism of auditory tracking. Moreover, females losing their track typically regain it by auditory scanning—sudden, wide deviations in their heading—and females initially facing away from the stimulus quickly change their general heading toward it, orientation indicating superior ability to resolve the front–rear ambiguity in source location. X-ray computer-aided tomography (CT) scans of the moths did not reveal any internal coupling between the two ears, confirming that an acoustic insect can localize a sound source based solely on the distinct features of each ear. PMID:27849607

  17. Numerical and Experimental Determination of the Geometric Far Field for Round Jets

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James; Brown, Cliff; Khavaran, Abbas

    2003-01-01

To reduce ambiguity in the reporting of far field jet noise, three round jets operating at subsonic conditions have recently been studied at the NASA Glenn Research Center. The goal of the investigation was to determine the location of the geometric far field both numerically and experimentally. The combination of the WIND Reynolds-Averaged Navier-Stokes solver and the MGBK jet noise prediction code was used for the computations, and the experimental data were collected in the Aeroacoustic Propulsion Laboratory. While noise sources are distributed throughout the jet plume, at great distances from the nozzle the noise appears to emanate from a point source and the assumption of linear propagation is valid. Closer to the jet, nonlinear propagation may be a problem, along with the known geometric issues. By comparing sound spectra at different distances from the jet, both from computational methods that assume linear propagation and from experiments, the contributions of geometry and nonlinearity can be separately ascertained and the required measurement distance for valid experiments can be established. It is found that while the shortest arc considered here (approx. 8D) was already in the geometric far field for the high frequency sound (St greater than 2.0), the low frequency noise, due to its extended source distribution, reached the geometric far field at about 50D. It is also found that sound spectra at far downstream angles do not strictly scale on Strouhal number, an observation that current modeling does not capture.

  18. Effect of the spectrum of a high-intensity sound source on the sound-absorbing properties of a resonance-type acoustic lining

    NASA Astrophysics Data System (ADS)

    Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.

    2012-07-01

Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from interferometric measurements, the total sound pressure level of the white noise, or the maximal sound pressure level of a pure tone (at every oscillation frequency), needs to be identical to the total sound pressure level of the actual source at the site of the acoustic lining on the channel wall.

  19. Another ``new'' metric for outdoor amphitheater criteria

    NASA Astrophysics Data System (ADS)

    Berens, Robert S.

    2005-09-01

Since the late 1960s, when amplified musical performances began being held there, Atlanta's open-air Chastain Park Amphitheater has been a source of enormous friction between the City (the venue's owner) and the wealthy, politically-connected residential community abutting the park. To identify the characteristics of concert event sound to which neighbors are particularly sensitive, sound levels were monitored during 17 concerts, ranging from quiet jazz and classical performances to rock-and-roll and hip-hop. Community sound levels were monitored at 25 locations, including nine where measurements were made simultaneously inside and outside homes. The study team confirmed that low-frequency sound was the feature of concert-related sound that community residents identified as most problematic, but that only a relatively small proportion of the Chastain concerts resulted in any significant community annoyance. After assessing the spectral characteristics of the most troublesome concerts, a new compliance metric has been proposed to address low-frequency annoyance issues: a two-tiered exceedance threshold, based on 1-minute LEQ levels in the 63 Hz octave band measured at the rear of the amphitheater, with a concert-event ``exceedance'' defined to be either a 1-minute LEQ(63 Hz) level greater than 95 dB or more than ten 1-minute LEQ(63 Hz) levels greater than 90 dB.
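The proposed two-tier metric is easy to state in code. This is a minimal sketch, assuming the 1-minute Leq(63 Hz) values for a concert are already computed; the thresholds follow the text, while the function names and the sample concerts are illustrative.

```python
import math

def leq(levels_db):
    """Energy-average (Leq) of a sequence of dB levels."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db) / len(levels_db))

def exceeds(minute_leq_63hz, single=95.0, repeated=90.0, count=10):
    """Two-tier test on 1-minute Leq(63 Hz) values for one concert event:
    any minute above `single` dB, or more than `count` minutes above `repeated` dB."""
    over_single = any(l > single for l in minute_leq_63hz)
    over_repeat = sum(1 for l in minute_leq_63hz if l > repeated) > count
    return over_single or over_repeat

quiet = [84.0] * 120                       # two-hour jazz set, no exceedance
loud_peak = [88.0] * 119 + [96.0]          # one very loud minute trips tier 1
loud_sustained = [91.0] * 12 + [85.0] * 108  # twelve loud minutes trip tier 2
```

Energy averaging matters here: a single loud minute dominates an Leq, which is why the metric is evaluated minute by minute rather than over the whole event.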

  20. Mechanisms underlying the temporal precision of sound coding at the inner hair cell ribbon synapse

    PubMed Central

    Moser, Tobias; Neef, Andreas; Khimich, Darina

    2006-01-01

    Our auditory system is capable of perceiving the azimuthal location of a low frequency sound source with a precision of a few degrees. This requires the auditory system to detect time differences in sound arrival between the two ears down to tens of microseconds. The detection of these interaural time differences relies on network computation by auditory brainstem neurons sharpening the temporal precision of the afferent signals. Nevertheless, the system requires the hair cell synapse to encode sound with the highest possible temporal acuity. In mammals, each auditory nerve fibre receives input from only one inner hair cell (IHC) synapse. Hence, this single synapse determines the temporal precision of the fibre. As if this was not enough of a challenge, the auditory system is also capable of maintaining such high temporal fidelity with acoustic signals that vary greatly in their intensity. Recent research has started to uncover the cellular basis of sound coding. Functional and structural descriptions of synaptic vesicle pools and estimates for the number of Ca2+ channels at the ribbon synapse have been obtained, as have insights into how the receptor potential couples to the release of synaptic vesicles. Here, we review current concepts about the mechanisms that control the timing of transmitter release in inner hair cells of the cochlea. PMID:16901948
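The microsecond scale of the interaural cues can be seen from a simple straight-path model, ITD = d*sin(theta)/c, which ignores head diffraction (a Woodworth-type model would add a wrapped-path term); the 0.21 m ear spacing is a nominal value, not a figure from the review.

```python
import math

def itd_seconds(azimuth_deg, ear_distance=0.21, c=343.0):
    """Interaural time difference under a straight-path model:
    ITD = d * sin(theta) / c, no head-diffraction correction."""
    return ear_distance * math.sin(math.radians(azimuth_deg)) / c

itd_1deg = itd_seconds(1.0)    # about 10 microseconds
itd_90deg = itd_seconds(90.0)  # maximum ITD, about 610 microseconds
```

A one-degree change in azimuth shifts the ITD by roughly ten microseconds, which is why sub-millisecond precision at the hair cell synapse is a prerequisite for the brainstem computation.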

  1. Snapshot recordings provide a first description of the acoustic signatures of deeper habitats adjacent to coral reefs of Moorea

    PubMed Central

    Parmentier, Eric; Berthe, Cécile; Besson, Marc; Hawkins, Anthony D.; Aubin, Thierry; Lecchini, David

    2017-01-01

Acoustic recording has been recognized as a valuable tool for non-intrusive monitoring of the marine environment, complementing traditional visual surveys. Acoustic surveys conducted on coral ecosystems have so far been restricted to barrier reefs and to shallow depths (10–30 m). Since outer reef slopes may provide refuge for coral reef organisms, monitoring them and describing the soundscapes of deeper environments could provide insights into the characteristics of different biotopes of coral ecosystems. In this study, the acoustic features of four different habitats, with different topographies and substrates, located at depths from 10 to 100 m, were recorded during daytime on the outer reef slope of the north coast of Moorea Island (French Polynesia). Barrier reefs appeared to be the noisiest habitats, whereas the average sound levels at the other habitats decreased with their distance from the reef and with increasing depth. However, sound levels were higher than predicted by propagation models, suggesting that these habitats possess their own sound sources. While reef sounds are known to attract marine larvae, sounds from deeper habitats may thus also have a non-negligible attractive potential, coming into play before the reef itself. PMID:29158970

2. An Analysis of the Low Frequency Sound Field in Non-Rectangular Enclosures Using the Finite Element Method.

    NASA Astrophysics Data System (ADS)

    Geddes, Earl Russell

The details of the low frequency sound field for a rectangular room can be studied by use of an established analytic technique, separation of variables. The solution is straightforward and the results are well-known. A non-rectangular room has boundary conditions which are not separable, and therefore other solution techniques must be used. This study shows that the finite element method can be adapted for use in the study of sound fields in arbitrarily shaped enclosures. The finite element acoustics problem is formulated, and the modification of a standard program necessary for solving acoustic field problems is examined. The solution of the semi-non-rectangular room problem (one where the floor and ceiling remain parallel) is carried out by a combined finite element/separation of variables approach. The solution results are used to construct the Green's function for the low frequency sound field in five rooms (or data cases): (1) a rectangular (Louden) room; (2) the smallest wall of the Louden room canted 20 degrees from normal; (3) the largest wall of the Louden room canted 20 degrees from normal; (4) both the largest and the smallest walls canted 20 degrees; and (5) a five-sided room variation of Case 4. Case 1, the rectangular room, was calculated using both the finite element method and the separation of variables technique, and the results for the two methods are compared in order to assess the accuracy of the finite element models. The modal damping coefficients are calculated and the results examined. The statistics of the source- and receiver-averaged normalized mean-square pressure (P²) responses in the 80 Hz, 100 Hz, and 125 Hz one-third octave bands are developed. The receiver-averaged pressure response is developed to determine the effect of source location on the response. Twelve source locations are examined and the results tabulated for comparison. The effect of a finite sized source is considered briefly.
Finally, the standard deviation of the spatial pressure response is studied. The results show that it is not significantly different in any of the rooms. The conclusions of the study are that only the frequency variations of the pressure response are affected by a room's shape, and that, in general, the simplest modification of a rectangular room (i.e., canting only one of the smallest walls) produces the most pronounced decrease in pressure response variation in the low frequency region.
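The finite element approach can be illustrated in one dimension, where the axial modes of a rigid-rigid duct are known analytically (f_n = n*c/2L) and the FEM eigenfrequencies can be checked against them. This is a 1-D analogue under invented parameters, not the study's 3-D room model.

```python
import numpy as np
from scipy.linalg import eigh

c = 343.0  # sound speed, m/s
L = 1.0    # duct length, m
n = 50     # number of linear elements
h = L / n

# Assemble global stiffness (K) and mass (M) matrices from standard
# linear-element matrices; rigid (Neumann) ends are the natural BC.
K = np.zeros((n + 1, n + 1))
Mm = np.zeros((n + 1, n + 1))
ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
me = np.array([[2.0, 1.0], [1.0, 2.0]]) * h / 6.0
for e in range(n):
    K[e : e + 2, e : e + 2] += ke
    Mm[e : e + 2, e : e + 2] += me

# Generalized eigenproblem K*phi = (omega/c)^2 * M*phi.
vals, vecs = eigh(K, Mm)
freqs = c * np.sqrt(np.abs(vals)) / (2.0 * np.pi)
# Analytic axial modes of a rigid-rigid duct: f_n = n*c/(2L), n = 0, 1, 2, ...
```

The same assembly generalizes to 2-D and 3-D elements, which is what makes the method applicable to the canted-wall rooms above where separation of variables fails.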

  3. Sources of Nutrients to Nearshore Areas of a Eutrophic Estuary: Implications for Nutrient-Enhanced Acidification in Puget Sound

    NASA Astrophysics Data System (ADS)

    Pacella, S. R.

    2016-02-01

Ocean acidification has recently been highlighted as a major stressor for coastal organisms. Further work is needed to assess the role of anthropogenic nutrient additions to eutrophied systems in local biological processes, and how this interacts with CO2 emission-driven acidification. This study sought to distinguish changes in pH caused by natural versus anthropogenically affected processes. We quantified the variability in water column pH attributable to primary production and respiration fueled by anthropogenically derived nitrogen in a shallow nearshore area. Two study sites were located in shallow subtidal areas of the Snohomish River estuary, a eutrophic system in central Puget Sound, Washington. These sites were chosen due to the presence of heavy agricultural activity and urbanized areas with associated wastewater treatment, as well as influence from deep, high-CO2 marine waters transported through the Strait of Juan de Fuca and upwelled into the area during spring and summer. Data were collected from July-December 2015 utilizing continuous moorings and discrete water column sampling. Analysis of stable isotopes (δ15N, δ18O-NO3, δ15N-NH4) was used to estimate the relative contributions of anthropogenic versus upwelled marine nitrogen sources. Continuous monitoring of pH, dissolved oxygen, temperature, and salinity was conducted at both study sites to link changes in nutrient source and availability with changes in pH. We predicted that the isotope data would indicate greater contributions of nitrogen from agriculture and wastewater than from upwelling at the shallow, nearshore study sites. By distinguishing the relative magnitude of pH change stimulated by anthropogenic versus natural sources of nitrogen, this study aims to inform public policy decisions in critically important nearshore ecosystems.

  4. Identification of impact force acting on composite laminated plates using the radiated sound measured with microphones

    NASA Astrophysics Data System (ADS)

    Atobe, Satoshi; Nonami, Shunsuke; Hu, Ning; Fukunaga, Hisao

    2017-09-01

    Foreign object impact events are serious threats to composite laminates because impact damage leads to significant degradation of the mechanical properties of the structure. Identification of the location and force history of the impact that was applied to the structure can provide useful information for assessing the structural integrity. This study proposes a method for identifying impact forces acting on CFRP (carbon fiber reinforced plastic) laminated plates on the basis of the sound radiated from the impacted structure. Identification of the impact location and force history is performed using the sound pressure measured with microphones. To devise a method for identifying the impact location from the difference in the arrival times of the sound wave detected with the microphones, the propagation path of the sound wave from the impacted point to the sensor is examined. For the identification of the force history, an experimentally constructed transfer matrix is employed to relate the force history to the corresponding sound pressure. To verify the validity of the proposed method, impact tests are conducted by using a CFRP cross-ply laminate as the specimen, and an impulse hammer as the impactor. The experimental results confirm the validity of the present method for identifying the impact location from the arrival time of the sound wave detected with the microphones. Moreover, the results of force history identification show the feasibility of identifying the force history accurately from the measured sound pressure using the experimental transfer matrix.
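Locating the impact from the arrival-time differences can be sketched as a grid search, here under the simplifying assumption of a single non-dispersive wave speed (a real CFRP plate is dispersive, which is why the paper examines the actual propagation path of the sound wave). The geometry, speed, and grid below are invented for illustration.

```python
import numpy as np

c = 500.0  # assumed (non-dispersive) wave speed, m/s
mics = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.4], [0.5, 0.4]])  # mic x, y positions, m
impact = np.array([0.31, 0.22])  # true impact point (to be recovered)

# Arrival times relative to the first mic (the origin time is unknown).
t = np.linalg.norm(mics - impact, axis=1) / c
dt_meas = t - t[0]

# Grid search: choose the point whose predicted arrival-time differences
# best match the measured ones.
xs = np.linspace(0.0, 0.5, 101)
ys = np.linspace(0.0, 0.4, 81)
best, best_err = None, np.inf
for x in xs:
    for y in ys:
        tp = np.linalg.norm(mics - np.array([x, y]), axis=1) / c
        err = np.sum((tp - tp[0] - dt_meas) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
```

Using only differences of arrival times removes the unknown impact instant, which is also why at least three sensors are needed for a unique 2-D location.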

  5. Techniques and instrumentation for the measurement of transient sound energy flux

    NASA Astrophysics Data System (ADS)

    Watkinson, P. S.; Fahy, F. J.

    1983-12-01

The evaluation of sound intensity distributions, and sound powers, of essentially continuous sources such as automotive engines, electric motors, production line machinery, furnaces, earth moving machinery and various types of process plant was studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because such machines normally cannot be installed in quiet, anechoic environments. Sound intensity measurement offers a means of overcoming these difficulties and has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.

  6. Perceptual constancy in auditory perception of distance to railway tracks.

    PubMed

    De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L

    2013-07-01

Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train passing by at a relatively large distance, the most important auditory information for estimating its distance consists of the intensity of the sound, spectral changes caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative, because prior information or experience of the sound source (its power, its spectrum, and the typical speed at which it moves) is required for such distance estimates. This paper describes two listening experiments that investigate further prior contextual information taken into account by listeners, viz., whether they are indoors or outdoors. Asked to estimate the distance to the track of a railway, listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.
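The role of prior knowledge in the intensity cue can be made concrete with the free-field point-source law L = Lw - 20*log10(r) - 11: inverting a received level into a distance requires the source power level, which is exactly the kind of prior the listeners must supply. The values and function names are illustrative.

```python
import math

def level_at(lw_db, r):
    """Free-field SPL of a point source with power level lw_db at distance r (m)."""
    return lw_db - 20.0 * math.log10(r) - 11.0

def distance_from_level(lw_db, l_db):
    """Invert the free-field law; without the prior lw_db the received
    level alone cannot fix the distance."""
    return 10.0 ** ((lw_db - l_db - 11.0) / 20.0)

lw = 120.0  # assumed (prior) source power level, dB
r_est = distance_from_level(lw, level_at(lw, 100.0))
```

The same inversion with a wrong prior scales every distance estimate by a constant factor, which is consistent with the indoor listeners anchoring on the expected outdoor level rather than the attenuated indoor one.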

  7. Recent paleoseismicity record in Prince William Sound, Alaska, USA

    NASA Astrophysics Data System (ADS)

    Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.

    2017-12-01

Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, are used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily the Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance: Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year-1), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinarily high-resolution record of paleoseismicity in the region.

  8. Explosion localization and characterization via infrasound using numerical modeling

    NASA Astrophysics Data System (ADS)

    Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.

    2017-12-01

Numerous methods have been applied to detect, locate, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and in regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu, and Etna Volcano, Italy, both of which present complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred, as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic and other anthropogenic sources at a wide variety of ranges and scenarios. Reference: Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
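The back-project-and-stack idea can be sketched in 2-D with straight-ray travel times standing in for the 3-D FDTD Green's functions (and ignoring transmission loss); the station layout, grid, and pulse shape are invented for illustration.

```python
import numpy as np

c = 343.0   # sound speed, m/s
fs = 200.0  # sample rate, Hz
stations = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 800.0], [800.0, 800.0]])
src = np.array([500.0, 300.0])  # true source (to be recovered)

# Synthetic waveforms: a Gaussian pulse arriving at each station after the
# straight-ray travel time (origin time taken as 0 for simplicity).
t = np.arange(0.0, 5.0, 1.0 / fs)
arr = np.linalg.norm(stations - src, axis=1) / c
w = np.exp(-0.5 * ((t[None, :] - arr[:, None]) / 0.05) ** 2)

# Back-project: sample each trace at the travel time predicted from every
# candidate grid node and stack; the maximum marks the likely source.
xs = np.arange(0.0, 801.0, 20.0)
ys = np.arange(0.0, 801.0, 20.0)
stack = np.zeros((len(xs), len(ys)))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        tt = np.linalg.norm(stations - np.array([x, y]), axis=1) / c
        idx = np.clip((tt * fs).astype(int), 0, len(t) - 1)
        stack[i, j] = w[np.arange(len(stations)), idx].sum()

i_best, j_best = np.unravel_index(np.argmax(stack), stack.shape)
x_best, y_best = xs[i_best], ys[j_best]
```

Replacing the straight-ray travel times with FDTD-derived ones is what lets the full method handle topography that blocks the direct path.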

  9. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    In order to research multiple sound source localization with room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of the ordinary auditory-filtering-based broadband MUSIC method, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering, frequency-component selection control, and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filter stage. Detection of the direct-sound component of the source is also proposed to suppress room-reverberation interference; its merits are fast calculation and avoidance of more complex de-reverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple-sound-source localization experiments indicate that the average absolute azimuth error of the proposed algorithm is smaller and that the histogram result has higher angular resolution.
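    The band-weighted pseudo-spectrum idea described above can be illustrated with a minimal sketch. The array geometry and names are hypothetical; a real implementation would derive its per-band snapshots from a gammatone filterbank rather than from arbitrary narrowband signals.

```python
import numpy as np

def narrowband_music(X, freq, mic_pos, angles, c=343.0, n_src=1):
    """MUSIC pseudo-spectrum for one frequency band on a linear mic array.
    X: (n_mic, n_snapshots) complex snapshots centred at `freq` Hz."""
    R = X @ X.conj().T / X.shape[1]              # spatial covariance
    _, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :-n_src]                           # noise subspace
    p = np.empty(len(angles))
    for i, a in enumerate(angles):
        tau = mic_pos * np.sin(np.deg2rad(a)) / c
        sv = np.exp(-2j * np.pi * freq * tau)    # steering vector
        # small projection onto the noise subspace -> large pseudo-spectrum
        p[i] = 1.0 / np.abs(sv.conj() @ En @ En.conj().T @ sv)
    return p

def weighted_broadband_music(band_snapshots, freqs, mic_pos, angles):
    """Combine per-band pseudo-spectra, weighting each band by its
    maximum amplitude, as the abstract describes for gammatone channels."""
    total = np.zeros(len(angles))
    for X, f in zip(band_snapshots, freqs):
        total += np.abs(X).max() * narrowband_music(X, f, mic_pos, angles)
    return total
```

    For each gammatone channel one would form complex snapshots around the channel's centre frequency, compute its pseudo-spectrum, and let the louder channels dominate the sum.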

  10. Interior sound field control using generalized singular value decomposition in the frequency domain.

    PubMed

    Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane

    2017-01-01

    The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays allows such control and, by approximating the sources as monopole and radial dipole transducers, avoids modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors, along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effects of the control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.

  11. Series expansions of rotating two and three dimensional sound fields.

    PubMed

    Poletti, M A

    2010-12-01

    The cylindrical and spherical harmonic expansions of oscillating sound fields rotating at a constant rate are derived. These expansions are a generalized form of the stationary sound field expansions. The derivations are based on the representation of interior and exterior sound fields using the simple source approach and determination of the simple source solutions with uniform rotation. Numerical simulations of rotating sound fields are presented to verify the theory.

  12. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines.

    PubMed

    Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static-water measurements were taken in a lake with minimal background noise. Flowing-water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sound created by the flowing water contributes to all measurements, of both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with a 60 hp outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances, using both spherical and cylindrical sound attenuation functions, suggests that the spherical model more closely approximates the observed sound attenuation.
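    The two spreading models compared in the abstract differ only in their distance exponent. A minimal sketch (function name invented):

```python
import numpy as np

def spreading_loss_db(r, r0=1.0, model="spherical"):
    """Geometric transmission loss relative to the level at reference range r0.

    Spherical spreading (point source radiating freely): 20*log10(r/r0),
    i.e. -6 dB per doubling of distance.  Cylindrical spreading (sound
    trapped between surface and bottom): 10*log10(r/r0), -3 dB per doubling.
    """
    factor = 20.0 if model == "spherical" else 10.0
    return factor * np.log10(np.asarray(r, dtype=float) / r0)
```

    Fitting both curves to levels measured from the same source at several distances, as the authors did, shows which geometry better matches a given site.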

  13. Seasonal variations of infrasonic arrivals from long-term ground truth observations in Nevada and implication for event location

    NASA Astrophysics Data System (ADS)

    Negraru, Petru; Golden, Paul

    2017-04-01

    Long-term ground truth observations were collected at two infrasound arrays in Nevada to investigate how seasonal atmospheric variations affect the detection, traveltime and signal characteristics (azimuth, trace velocity, frequency content and amplitudes) of infrasonic arrivals at regional distances. The arrays were located in different azimuthal directions from a munition disposal facility in Nevada. FNIAR, located 154 km north of the source, has a high detection rate throughout the year. Over 90 per cent of the detonations have traveltimes indicative of stratospheric arrivals, while tropospheric waveguides are observed from only 27 per cent of the detonations. The second array, DNIAR, located 293 km southeast of the source, exhibits strong seasonal variations, with high stratospheric detection rates in winter and the virtual absence of stratospheric arrivals in summer. Tropospheric waveguides and thermospheric arrivals are also observed at DNIAR. Modeling using the Naval Research Laboratory Ground-to-Space atmospheric sound speeds yields mixed results: FNIAR arrivals are usually not predicted to be present at all (either stratospheric or tropospheric), while DNIAR arrivals are usually correctly predicted, but summer arrivals show a consistent traveltime bias. Finally, we show the possible improvement in location accuracy using empirically calibrated traveltime and azimuth observations. Using the Bayesian Infrasound Source Localization method, we show that we can decrease the area enclosed by the 90 per cent credibility contours by a factor of 2.5.
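    The benefit of empirically calibrated traveltimes can be seen in a toy grid-search locator (all names and values hypothetical; the study's Bayesian method additionally propagates the calibration uncertainty into the 90 per cent credibility contours):

```python
import numpy as np

def locate_on_grid(obs_times, sta_xy, grid_xy, celerity=0.30, corrections=None):
    """Least-squares grid search over candidate source locations.

    obs_times   : (n_sta,) observed arrival times [s]
    sta_xy      : (n_sta, 2) station coordinates [km]
    grid_xy     : (n_grid, 2) candidate source locations [km]
    celerity    : nominal horizontal group velocity [km/s]
    corrections : (n_sta,) empirical traveltime corrections [s], the kind
                  of calibration derived from repeating ground-truth events
    """
    if corrections is None:
        corrections = np.zeros(len(obs_times))
    costs = np.empty(len(grid_xy))
    for g, (gx, gy) in enumerate(grid_xy):
        dist = np.hypot(sta_xy[:, 0] - gx, sta_xy[:, 1] - gy)
        resid = obs_times - (dist / celerity + corrections)
        resid -= resid.mean()          # absorb the unknown origin time
        costs[g] = np.sum(resid ** 2)
    return int(np.argmin(costs)), costs
```

    Without the per-station corrections, a seasonal traveltime bias like the one reported for summer arrivals maps directly into a mislocated minimum.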

  14. Modeling of Turbulence Generated Noise in Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2004-01-01

    A numerically calculated Green's function is used to predict the jet noise spectrum and its far-field directivity. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right-hand side identified as the source. In this paper, contributions from the so-called self- and shear-noise source terms are discussed. A Reynolds-averaged Navier-Stokes solution yields the required mean flow as well as the time and length scales of a noise-generating turbulent eddy. A non-compact source, with exponential temporal and spatial functions, is used to describe the turbulence velocity correlation tensors. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle, as well as the angularity of sound at moderate Mach numbers, at high subsonic and supersonic acoustic Mach numbers the polar directivity of radiated sound is not entirely captured by this Green's function. Results presented for Mach 0.5 and 0.9 isothermal jets, as well as a Mach 0.8 hot jet, indicate that near the peak radiation angle a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise.

  15. Effects of sound source location and direction on acoustic parameters in Japanese churches.

    PubMed

    Soeta, Yoshiharu; Ito, Ken; Shimokura, Ryota; Sato, Shin-ichi; Ohsawa, Tomohiro; Ando, Yoichi

    2012-02-01

    In 1965, the Catholic Church liturgy changed to allow priests to face the congregation. Whereas Church tradition, teaching, and participation have been much discussed with respect to priest orientation at Mass, the acoustical changes in this regard have not yet been examined scientifically. To discuss the acoustics desired within churches, it is necessary to know the acoustical characteristics appropriate for each phase of the liturgy. In this study, acoustic measurements were taken at various source locations and directions corresponding to both the old and new liturgies performed in Japanese churches. A directional loudspeaker was used as the source to provide vocal and organ acoustic fields, and impulse responses were measured. Various acoustical parameters such as reverberation time and early decay time were analyzed. The speech transmission index was higher for the new Catholic liturgy, suggesting that the change in liturgy has improved speech intelligibility. Moreover, the interaural cross-correlation coefficient and early lateral energy fraction were higher and lower, respectively, suggesting that the change in liturgy has made the apparent source width smaller. © 2012 Acoustical Society of America
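    Reverberation time, one of the parameters analyzed, is conventionally estimated from a measured impulse response via Schroeder backward integration. A minimal sketch of the T20 variant (function names invented):

```python
import numpy as np

def schroeder_decay_db(ir):
    """Backward-integrated energy decay curve (Schroeder integration),
    normalized to 0 dB at time zero."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def rt60_from_ir(ir, fs, lo=-5.0, hi=-25.0):
    """Estimate RT60 from a room impulse response: fit the decay curve
    between `lo` and `hi` dB and extrapolate the slope to -60 dB."""
    edc = schroeder_decay_db(ir)
    t = np.arange(len(ir)) / fs
    mask = (edc <= lo) & (edc >= hi)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)   # dB per second
    return -60.0 / slope
```

    Early decay time is obtained the same way but from the initial 0 to -10 dB portion of the curve, which is why the two parameters can diverge in coupled spaces like churches.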

  16. Sound reduction by metamaterial-based acoustic enclosure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Shanshan; Li, Pei; Zhou, Xiaoming

    In many practical systems, acoustic radiation control of noise sources contained within a finite volume by an acoustic enclosure is of great importance, but difficult to accomplish at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties of either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for blockage of sound at low frequencies.

  17. A visual stethoscope to detect the position of the tracheal tube.

    PubMed

    Kato, Hiromi; Suzuki, Akira; Nakajima, Yoshiki; Makino, Hiroshi; Sanjo, Yoshimitsu; Nakai, Takayoshi; Shiraishi, Yoshito; Katoh, Takasumi; Sato, Shigehito

    2009-12-01

    Advancing a tracheal tube into the bronchus produces unilateral breath sounds. We created a Visual Stethoscope that allows real-time fast Fourier transformation of the sound signal and 3-dimensional (frequency-amplitude-time) color rendering of the results on a personal computer with simultaneous processing of 2 individual sound signals. The aim of this study was to evaluate whether the Visual Stethoscope can detect bronchial intubation in comparison with auscultation. After induction of general anesthesia, the trachea was intubated with a tracheal tube. The distance from the incisors to the carina was measured using a fiberoptic bronchoscope. While the anesthesiologist advanced the tracheal tube from the trachea to the bronchus, another anesthesiologist auscultated breath sounds to detect changes of the breath sounds and/or disappearance of bilateral breath sounds for every 1 cm that the tracheal tube was advanced. Two precordial stethoscopes placed at the left and right sides of the chest were used to record breath sounds simultaneously. Subsequently, at a later date, we randomly entered the recorded breath sounds into the Visual Stethoscope. The same anesthesiologist observed the visualized breath sounds on the personal computer screen processed by the Visual Stethoscope to examine changes of breath sounds and/or disappearance of bilateral breath sound. We compared the decision made based on auscultation with that made based on the results of the visualized breath sounds using the Visual Stethoscope. Thirty patients were enrolled in the study. When irregular breath sounds were auscultated, the tip of the tracheal tube was located at 0.6 +/- 1.2 cm on the bronchial side of the carina. Using the Visual Stethoscope, when there were any changes of the shape of the visualized breath sound, the tube was located at 0.4 +/- 0.8 cm on the tracheal side of the carina (P < 0.01). 
When unilateral breath sounds were auscultated, the tube was located at 2.6 +/- 1.2 cm on the bronchial side of the carina. The tube was also located at 2.3 +/- 1.0 cm on the bronchial side of the carina when a unilateral shape of visualized breath sounds was obtained using the Visual Stethoscope (not significant). During advancement of the tracheal tube, alterations of the shape of the visualized breath sounds using the Visual Stethoscope appeared before the changes of the breath sounds were detected by auscultation. Bilateral breath sounds disappeared when the tip of the tracheal tube was advanced beyond the carina in both groups.
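    The core of such a device is a frame-wise FFT that turns each channel's breath sound into a frequency-amplitude-time surface. A minimal single-channel sketch (window and hop sizes are illustrative, not those of the authors' system):

```python
import numpy as np

def spectrogram(x, fs, n_fft=256, hop=128):
    """Frame-wise FFT magnitude: the frequency-amplitude-time data that a
    'visual stethoscope' would render as a 3-D colour surface."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, n_fft//2 + 1)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    times = np.arange(len(frames)) * hop / fs
    return times, freqs, spec
```

    Running this on the left-chest and right-chest recordings side by side gives the paired displays on which changes in, or disappearance of, one side's breath sounds can be read off visually.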

  18. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    PubMed

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  19. Automatic adventitious respiratory sound analysis: A systematic review

    PubMed Central

    Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Background Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. Objective To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. Data sources A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained from references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Study selection Only articles that focused on adventitious sound detection or classification based on respiratory sounds, reported performance, and provided sufficient information for the work to be approximately repeated were included. Data extraction Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. Data synthesis A total of 77 reports from the literature were included in this review. 
55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub and squawk, as well as on pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Limitations Direct comparison of the performance of the surveyed works cannot be performed, as the input data used by each was different. A standard validation method has not been established, resulting in different works using different methods and performance-measure definitions. Conclusion A review of the literature was performed to summarise the different analysis approaches, features, and methods used. The performance of recent studies showed high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases. PMID:28552969

  20. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing

    PubMed Central

    Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088

  1. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing.

    PubMed

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.

  2. Tonal response on the stairway of the main pyramid at La Ciudadela, Teotihuacan archaeological site

    NASA Astrophysics Data System (ADS)

    Beristain, Sergio; Coss, Cecilia; Aquino, Gabriela; Negrete, Jose; Lizana, Pablo

    2002-11-01

    This paper presents new research on the very interesting audible effects produced by the stairways of many archaeological sites in Mexico. This investigation was made at the main stairway of the pyramid at La Ciudadela, Teotihuacan archaeological site. The effect previously studied was a chirped echo reflected from the stairway at normal incidence, which resembles the singing of the Quetzal. Here, the impulsive sound source and the listeners were located at different angles, where, apart from the characteristic chirped sound, several musical notes could be obtained and identified, covering a range of at least half an octave. This evaluation was made at the site, where the effect is clearly audible, and it is supported with simple mathematics.

  3. Investigation of the effects of a moving acoustic medium on jet noise measurements

    NASA Technical Reports Server (NTRS)

    Cole, J. E., III; Palmer, D. W.

    1976-01-01

    Noise from an unheated sonic jet in the presence of an external flow is measured in a free-jet wind tunnel using microphones located both inside and outside the flow. Comparison of the data is made with results of similar studies. The results are also compared with theoretical predictions of the source strength for jet noise in the presence of flow and of the effects of sound propagation through a shear layer.

  4. Phonotactic flight of the parasitoid fly Emblemasoma auditrix (Diptera: Sarcophagidae).

    PubMed

    Tron, Nanina; Lakes-Harlan, Reinhard

    2017-01-01

    The parasitoid fly Emblemasoma auditrix locates its hosts using acoustic cues from sound-producing males of the cicada Okanagana rimosa. Here, we experimentally analysed the phonotactic flight path from a landmark to the target, a hidden loudspeaker in the field. During flight, the fly showed only small lateral deviations. The vertical flight direction angles were initially negative (directed downwards relative to starting position), grew positive (directed upwards) in the second half of the flight, and finally flattened (directed horizontally or slightly upwards), typically resulting in a landing above the loudspeaker. This phonotactic flight pattern was largely independent of sound pressure level and target distance, but depended on the elevation of the sound source. The flight velocity was partially influenced by sound pressure level and distance, but also by elevation: the more elevated the target, the lower the speed. Both the accuracy of flight and the landing precision increased with the elevation of the target. The minimal vertical angle difference eliciting differences in behaviour was 10°. By changing the elevation of the acoustic target after take-off, we showed that the fly is able to orientate acoustically while flying.

  5. Intelligibility of speech in a virtual 3-D environment.

    PubMed

    MacDonald, Justin A; Balakrishnan, J D; Orosz, Michael D; Karplus, Walter J

    2002-01-01

    In a simulated air traffic control task, improvement in the detection of auditory warnings when using virtual 3-D audio depended on the spatial configuration of the sounds. Performance improved substantially when two of four sources were placed to the left and the remaining two were placed to the right of the participant. Surprisingly, little or no benefits were observed for configurations involving the elevation or transverse (front/back) dimensions of virtual space, suggesting that position on the interaural (left/right) axis is the crucial factor to consider in auditory display design. The relative importance of interaural spacing effects was corroborated in a second, free-field (real space) experiment. Two additional experiments showed that (a) positioning signals to the side of the listener is superior to placing them in front even when two sounds are presented in the same location, and (b) the optimal distance on the interaural axis varies with the amplitude of the sounds. These results are well predicted by the behavior of an ideal observer under the different display conditions. This suggests that guidelines for auditory display design that allow for effective perception of speech information can be developed from an analysis of the physical sound patterns.

  6. A geospatial model of ambient sound pressure levels in the contiguous United States.

    PubMed

    Mennitt, Daniel; Sherrill, Kirk; Fristrup, Kurt

    2014-05-01

    This paper presents a model that predicts measured sound pressure levels using geospatial features such as topography, climate, hydrology, and anthropogenic activity. The model utilizes random forest, a tree-based machine learning algorithm, which does not incorporate a priori knowledge of source characteristics or propagation mechanics. The response data encompasses 270 000 h of acoustical measurements from 190 sites located in National Parks across the contiguous United States. The explanatory variables were derived from national geospatial data layers and cross validation procedures were used to evaluate model performance and identify variables with predictive power. Using the model, the effects of individual explanatory variables on sound pressure level were isolated and quantified to reveal systematic trends across environmental gradients. Model performance varies by the acoustical metric of interest; the seasonal L50 can be predicted with a median absolute deviation of approximately 3 dB. The primary application for this model is to generalize point measurements to maps expressing spatial variation in ambient sound levels. An example of this mapping capability is presented for Zion National Park and Cedar Breaks National Monument in southwestern Utah.
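    The modeling approach can be sketched with scikit-learn's RandomForestRegressor. The features and response below are synthetic stand-ins (names and values invented); the paper's model is trained on national geospatial data layers and 270 000 h of acoustical measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-site geospatial features; the real model uses layers
# describing topography, climate, hydrology, and anthropogenic activity.
rng = np.random.default_rng(0)
n_sites = 200
X = rng.uniform(size=(n_sites, 4))   # e.g. elevation, precipitation,
                                     # distance to road, population density
# Synthetic "measured seasonal L50" (dB) with nonlinear structure + noise
y = (35 + 8 * X[:, 3] - 5 * X[:, 0] + 3 * np.sin(6 * X[:, 1])
     + rng.normal(scale=1.0, size=n_sites))

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# The model's main use: generalize point measurements to a map by
# predicting sound levels at unsampled locations from their features.
X_map = rng.uniform(size=(10, 4))
pred_L50 = model.predict(X_map)
```

    Because a tree ensemble encodes no propagation physics, everything it learns about sources and attenuation comes from the feature-response associations, which is exactly the design choice the abstract highlights.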

  7. Physiological and Psychophysical Modeling of the Precedence Effect

    PubMed Central

    Xia, Jing; Brughera, Andrew; Colburn, H. Steven

    2010-01-01

    Many past studies of sound localization explored the precedence effect (PE), in which a pair of brief, temporally close sounds from different directions is perceived as coming from a location near that of the first-arriving sound. Here, a computational model of low-frequency inferior colliculus (IC) neurons accounts for both physiological and psychophysical responses to PE click stimuli. In the model, IC neurons have physiologically plausible inputs, receiving excitation from the ipsilateral medial superior olive (MSO) and long-lasting inhibition from both ipsilateral and contralateral MSOs, relayed through the dorsal nucleus of the lateral lemniscus. In this model, physiological suppression of the lagging response depends on the inter-stimulus delay (ISD) between the lead and lag as well as their relative locations. Psychophysical predictions are generated from a population of model neurons. At all ISDs, predicted lead localization is good. At short ISDs, the estimated location of the lag is near that of the lead, consistent with subjects perceiving both lead and lag from the lead location. As ISD increases, the estimated lag location moves closer to the true lag location, consistent with listeners’ perception of two sounds from separate locations. Together, these simulations suggest that location-dependent suppression in IC neurons can explain the behavioral phenomenon known as the precedence effect. PMID:20358242

  8. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    PubMed

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to distinguish old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object-location binding.

  9. The Importance of "What": Infants Use Featural Information to Index Events

    ERIC Educational Resources Information Center

    Kirkham, Natasha Z.; Richardson, Daniel C.; Wu, Rachel; Johnson, Scott P.

    2012-01-01

    Dynamic spatial indexing is the ability to encode, remember, and track the location of complex events. For example, in a previous study, 6-month-old infants were familiarized to a toy making a particular sound in a particular location, and later they fixated that empty location when they heard the sound presented alone ("Journal of Experimental…

  10. Joint seismic-infrasonic processing of recordings from a repeating source of atmospheric explosions.

    PubMed

    Gibbons, Steven J; Ringdal, Frode; Kvaerna, Tormod

    2007-11-01

    A database has been established of seismic and infrasonic recordings from more than 100 well-constrained surface explosions conducted by the Finnish military to destroy old ammunition. The recorded seismic signals are essentially identical and indicate that the variation in source location and magnitude is negligible. In contrast, the infrasonic arrivals on both seismic and infrasound sensors exhibit significant variation in the number of detected phases, phase travel times, and phase amplitudes, which is attributable to atmospheric factors. This data set therefore provides an excellent basis for studies of sound propagation, infrasound array detection, and direction estimation.

  11. Salient, Irrelevant Sounds Reflexively Induce Alpha Rhythm Desynchronization in Parallel with Slow Potential Shifts in Visual Cortex.

    PubMed

    Störmer, Viola; Feng, Wenfeng; Martinez, Antigona; McDonald, John; Hillyard, Steven

    2016-03-01

    Recent findings suggest that a salient, irrelevant sound attracts attention to its location involuntarily and facilitates processing of a colocalized visual event [McDonald, J. J., Störmer, V. S., Martinez, A., Feng, W. F., & Hillyard, S. A. Salient sounds activate human visual cortex automatically. Journal of Neuroscience, 33, 9194-9201, 2013]. Associated with this cross-modal facilitation is a sound-evoked slow potential over the contralateral visual cortex termed the auditory-evoked contralateral occipital positivity (ACOP). Here, we further tested the hypothesis that a salient sound captures visual attention involuntarily by examining sound-evoked modulations of the occipital alpha rhythm, which has been strongly associated with visual attention. In two purely auditory experiments, lateralized irrelevant sounds triggered a bilateral desynchronization of occipital alpha-band activity (10-14 Hz) that was more pronounced in the hemisphere contralateral to the sound's location. The timing of the contralateral alpha-band desynchronization overlapped with that of the ACOP (∼240-400 msec), and both measures of neural activity were estimated to arise from neural generators in the ventral-occipital cortex. The magnitude of the lateralized alpha desynchronization was correlated with ACOP amplitude on a trial-by-trial basis and between participants, suggesting that they arise from or are dependent on a common neural mechanism. These results support the hypothesis that the sound-induced alpha desynchronization and ACOP both reflect the involuntary cross-modal orienting of spatial attention to the sound's location.

  12. Electrical Conductivity Imaging Using Controlled Source Electromagnetics for Subsurface Characterization

    NASA Astrophysics Data System (ADS)

    Miller, C. R.; Routh, P. S.; Donaldson, P. R.

    2004-05-01

    Controlled Source Audio-Frequency Magnetotellurics (CSAMT) is a frequency-domain electromagnetic (EM) sounding technique. CSAMT typically uses a grounded horizontal electric dipole, approximately one to two kilometers in length, as a source. Measurements of electric and magnetic field components are made at stations located ideally at least four skin depths away from the transmitter, so that the field there approximates the plane-wave characteristics of a natural source. Data are acquired in a broad frequency band, sampled logarithmically from 0.1 Hz to 10 kHz. CSAMT soundings are used to detect and map resistivity contrasts in the top two to three km of the Earth's crust. Practical applications include mapping ground water resources; mineral and precious-metals exploration; geothermal reservoir mapping and monitoring; petroleum exploration; and geotechnical investigations. Higher-frequency data can be used to image shallow features, while lower-frequency data are sensitive to deeper structures. We have a 3D CSAMT data set consisting of phase and amplitude measurements of the Ex and Hy components of the electric and magnetic fields, respectively. The survey area is approximately 3 × 5 km. Receiver stations are situated 50 meters apart along a total of 13 lines, with 8 lines bearing approximately N60E and the remainder oriented orthogonal to them. We use an unconstrained Gauss-Newton method with positivity to invert the data. Inversion results will consist of conductivity-versus-depth profiles beneath each receiver station. These 1D profiles will be combined into a 3D subsurface conductivity image. We will include our interpretation of the subsurface conductivity structure and quantify the uncertainties associated with this interpretation.
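
    The four-skin-depth plane-wave criterion mentioned above can be checked numerically with the standard plane-wave skin-depth formula δ = √(2ρ/(ωμ0)). A minimal sketch; the 100 ohm-m ground resistivity is an assumed illustrative value, not one from this survey:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def skin_depth_m(resistivity_ohm_m: float, freq_hz: float) -> float:
    """Plane-wave skin depth: delta = sqrt(2*rho / (omega*mu0)), in meters."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity_ohm_m / (omega * MU0))

# Far-field rule of thumb from the abstract: receivers at least
# four skin depths from the transmitter.
rho = 100.0  # assumed ground resistivity (ohm-m)
for f in (0.1, 10.0, 10_000.0):
    d = skin_depth_m(rho, f)
    print(f"f = {f:8.1f} Hz -> skin depth ~ {d/1000:7.2f} km, "
          f"4-delta offset ~ {4*d/1000:7.2f} km")
```

    At the 0.1 Hz end of the band the four-skin-depth offset runs to tens of kilometers for resistive ground, which is why plane-wave behavior is only approximated in practice.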

  13. Static tests of excess ground attenuation at Wallops Flight Center

    NASA Astrophysics Data System (ADS)

    Sutherland, L. C.; Brown, R.

    1981-06-01

    An extensive experimental measurement program which evaluated the attenuation of sound for close to horizontal propagation over the ground was designed to replicate, under static conditions, results of the flight measurements carried out earlier by NASA at the same site (Wallops Flight Center). The program consisted of a total of 41 measurement runs of attenuation, in excess of spreading and air absorption losses, for one third octave bands over a frequency range of 50 to 4000 Hz. Each run consisted of measurements at 10 locations up to 675 m, from a source located at nominal elevations of 2.5, or 10 m over either a grassy surface or an adjacent asphalt concrete runway surface. The tests provided a total of over 8100 measurements of attenuation under conditions of low wind speed averaging about 1 m/s and, for most of the tests, a slightly positive temperature gradient, averaging about 0.3 C/m from 1.2 to 7 m. The results of the measurements are expected to provide useful experimental background for the further development of prediction models of near grazing incidence sound propagation losses.

  14. Static tests of excess ground attenuation at Wallops Flight Center

    NASA Technical Reports Server (NTRS)

    Sutherland, L. C.; Brown, R.

    1981-01-01

    An extensive experimental measurement program which evaluated the attenuation of sound for close to horizontal propagation over the ground was designed to replicate, under static conditions, results of the flight measurements carried out earlier by NASA at the same site (Wallops Flight Center). The program consisted of a total of 41 measurement runs of attenuation, in excess of spreading and air absorption losses, for one third octave bands over a frequency range of 50 to 4000 Hz. Each run consisted of measurements at 10 locations up to 675 m, from a source located at nominal elevations of 2.5, or 10 m over either a grassy surface or an adjacent asphalt concrete runway surface. The tests provided a total of over 8100 measurements of attenuation under conditions of low wind speed averaging about 1 m/s and, for most of the tests, a slightly positive temperature gradient, averaging about 0.3 C/m from 1.2 to 7 m. The results of the measurements are expected to provide useful experimental background for the further development of prediction models of near grazing incidence sound propagation losses.
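
    The reported quantity, attenuation in excess of spreading and air absorption losses, can be sketched as follows. This is an illustrative calculation only: the levels, the 0.003 dB/m absorption coefficient, and the 1 m reference distance are assumed values, not measurements from the Wallops program:

```python
import math

def spreading_loss_db(r_m: float, r_ref_m: float = 1.0) -> float:
    """Spherical-spreading loss relative to the reference distance (dB)."""
    return 20.0 * math.log10(r_m / r_ref_m)

def excess_attenuation_db(level_ref_db: float, level_r_db: float,
                          r_m: float, alpha_db_per_m: float,
                          r_ref_m: float = 1.0) -> float:
    """Attenuation in excess of spreading and air absorption (dB)."""
    total_loss = level_ref_db - level_r_db
    return (total_loss
            - spreading_loss_db(r_m, r_ref_m)
            - alpha_db_per_m * (r_m - r_ref_m))

# Hypothetical one-third-octave band levels at 1 m and at 675 m:
ea = excess_attenuation_db(level_ref_db=100.0, level_r_db=35.0,
                           r_m=675.0, alpha_db_per_m=0.003)
print(f"excess ground attenuation ~ {ea:.1f} dB")
```

    Whatever loss remains after the geometric and atmospheric terms are removed is attributed to the ground (and, in the field, to wind and temperature-gradient effects).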

  15. Two-microphone spatial filtering provides speech reception benefits for cochlear implant users in difficult acoustic environments

    PubMed Central

    Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.

    2014-01-01

    This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120
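
    A toy calculation (not the authors' algorithm) shows why the fixed dipole-directional response is a weak baseline for this interferer layout: a first-order dipole steered ahead nulls sources at 90° and 270° but passes a source at 180° at full gain, whereas the evaluated spatial-filtering algorithm enhances only sounds from straight ahead:

```python
import math

def dipole_gain_db(theta_deg: float) -> float:
    """First-order dipole directivity |cos(theta)|, expressed in dB."""
    g = abs(math.cos(math.radians(theta_deg)))
    if g < 1e-12:
        return float("-inf")  # perfect null
    return 20.0 * math.log10(g)

for theta in (0, 90, 180, 270):
    print(f"{theta:3d} deg: dipole gain {dipole_gain_db(theta):6.1f} dB")
```

    The infinitely deep nulls at 90° and 270° are the ideal free-field pattern; reverberation fills in real nulls, which is part of why adaptive processing helps in a reverberant room.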

  16. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines

    DOE PAGES

    Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin C.

    2016-01-06

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with a 60 hp outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances, using both spherical and cylindrical sound attenuation functions, suggests that the spherical model more closely approximates the observed sound attenuation.
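
    The spherical-versus-cylindrical comparison in the last sentence comes down to a 20 log10(r) versus 10 log10(r) transmission-loss slope. A minimal sketch; the 170 dB re 1 μPa source level is a hypothetical number chosen for illustration:

```python
import math

def spherical_tl_db(r_m: float, r_ref_m: float = 1.0) -> float:
    """Spherical spreading: 20*log10(r/r_ref) dB."""
    return 20.0 * math.log10(r_m / r_ref_m)

def cylindrical_tl_db(r_m: float, r_ref_m: float = 1.0) -> float:
    """Cylindrical spreading: 10*log10(r/r_ref) dB."""
    return 10.0 * math.log10(r_m / r_ref_m)

SL = 170.0  # assumed source level, dB re 1 uPa at 1 m
for r in (10.0, 100.0, 1000.0):
    print(f"r = {r:6.0f} m: spherical {SL - spherical_tl_db(r):6.1f} dB, "
          f"cylindrical {SL - cylindrical_tl_db(r):6.1f} dB")
```

    Each decade of distance costs 20 dB under spherical spreading but only 10 dB under cylindrical spreading, so levels measured at several ranges discriminate the two models.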

  17. Underwater auditory localization by a swimming harbor seal (Phoca vitulina).

    PubMed

    Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido

    2006-09-01

    The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound the seal localized the sound sources with a mean deviation of 2.8 degrees and in trials with the single sound with a mean deviation of 4.5 degrees. In a second experiment minimum audible angles of the stationary animal were found to be 9.8 degrees in front and 9.7 degrees in the back of the seal's head.
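
    The mean deviations reported (2.8° and 4.5°) are averages of per-trial angular errors between the indicated and true source bearings. A minimal sketch of that statistic with made-up trial angles; the wrap step just folds differences into the ±180° range:

```python
def mean_abs_deviation_deg(chosen_deg, actual_deg):
    """Mean absolute angular error between indicated and true bearings."""
    assert len(chosen_deg) == len(actual_deg)

    def wrap(d):  # fold a difference into [-180, 180)
        return (d + 180.0) % 360.0 - 180.0

    errs = [abs(wrap(c - a)) for c, a in zip(chosen_deg, actual_deg)]
    return sum(errs) / len(errs)

# Hypothetical trials, true source at 0 degrees in each:
chosen = [2.0, -3.5, 1.0, 358.0]
actual = [0.0, 0.0, 0.0, 0.0]
print(f"mean deviation: {mean_abs_deviation_deg(chosen, actual):.1f} deg")
```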

  18. Personal sound zone reproduction with room reflections

    NASA Astrophysics Data System (ADS)

    Olik, Marek

    Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. 
Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
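
    Acoustic contrast, the performance metric used throughout this work, is conventionally the ratio of spatially averaged squared pressure in the target ("bright") zone to that in the dark zone. A minimal sketch with hypothetical complex pressures at a few control points in each zone:

```python
import math

def acoustic_contrast_db(p_bright, p_dark):
    """Acoustic contrast: bright-zone mean squared pressure over
    dark-zone mean squared pressure, in dB."""
    def mean_sq(ps):
        return sum(abs(p) ** 2 for p in ps) / len(ps)
    return 10.0 * math.log10(mean_sq(p_bright) / mean_sq(p_dark))

# Hypothetical complex pressures (Pa) at control points in each zone:
bright = [1.0 + 0.2j, 0.9 - 0.1j, 1.1 + 0.0j]
dark = [0.05 + 0.02j, 0.04 - 0.03j, 0.06 + 0.01j]
print(f"contrast: {acoustic_contrast_db(bright, dark):.1f} dB")
```

    Source-position optimization of the kind studied here seeks loudspeaker arrangements that raise this ratio without inflating the array effort (total squared source strength).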

  19. Marine mammal audibility of selected shallow-water survey sources.

    PubMed

    MacGillivray, Alexander O; Racca, Roberto; Li, Zizheng

    2014-01-01

    Most attention about the acoustic effects of marine survey sound sources on marine mammals has focused on airgun arrays, with other common sources receiving less scrutiny. Sound levels above hearing threshold (sensation levels) were modeled for six marine mammal species and seven different survey sources in shallow water. The model indicated that odontocetes were most likely to hear sounds from mid-frequency sources (fishery, communication, and hydrographic systems), mysticetes from low-frequency sources (sub-bottom profiler and airguns), and pinnipeds from both mid- and low-frequency sources. High-frequency sources (side-scan and multibeam) generated the lowest estimated sensation levels for all marine mammal species groups.

  20. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    PubMed

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.

  1. Using Sediment Records to Reconstruct Historical Inputs of Combustion-Derived Contaminants to Urban Airsheds/Watersheds: A Case Study From the Puget Sound

    NASA Astrophysics Data System (ADS)

    Louchouarn, P. P.; Kuo, L.; Brandenberger, J.; Marcantonio, F.; Wade, T. L.; Crecelius, E.; Gobeil, C.

    2008-12-01

    Urban centers are major sources of combustion-derived particulate matter (e.g. black carbon (BC), polycyclic aromatic hydrocarbons (PAH), anhydrosugars) and volatile organic compounds to the atmosphere. Evidence is mounting that atmospheric emissions from combustion sources remain major contributors to air pollution of urban systems. For example, recent historical reconstructions of depositional fluxes for pyrogenic PAHs close to urban systems have shown an unanticipated reversal in the trends of decreasing emissions initiated during the mid-20th Century. Here we compare a series of historical reconstructions of combustion emissions in urban and rural airsheds over the last century using sedimentary records. A complex suite of combustion proxies (BC, PAHs, anhydrosugars, stable lead concentrations and isotope signatures) assisted in elucidating major changes in the type of atmospheric aerosols originating from specific processes (i.e. biomass burning vs. fossil fuel combustion) or fuel sources (wood vs. coal vs. oil). In all studied locations, coal has remained a major source of combustion-derived aerosols since the early 20th Century. Recently, however, oil and biomass combustion have become substantial additional sources of atmospheric contamination. In the Puget Sound basin, along the Pacific Northwest region of the U.S., rural locations not impacted by direct point sources of contamination have helped assess the influence of catalytic converters on concentrations of oil-derived PAH and lead inputs since the early 1970s. Although atmospheric deposition of lead has continued to drop since the introduction of catalytic converters and the ban on leaded gasoline, PAH inputs have "rebounded" in the last decade. A similar steady and recent rise in PAH accumulations in urban systems has been ascribed to continued urban sprawl and increasing vehicular traffic. 
In the U.S., automotive emissions, whether from gasoline or diesel combustion, are becoming a major source of combustion-derived PM and BC to the atmosphere and have started to replace coal as the major source in some surficial reservoirs. This increased urban influence of gasoline and diesel combustion on BC emissions was also observed in Europe both from model estimates as well as from measured fluxes in recent lake sediments.

  2. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A.

    2012-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. For this purpose the acoustic analogy of Morris and Miller is examined. To isolate the relevant physics, the scaling of BBSAN at the peak intensity level at the sideline (psi = 90 degrees) observer location is examined. Scaling terms are isolated from the acoustic analogy and the result is compared using a convergent nozzle with the experiments of Bridges and Brown and using a convergent-divergent nozzle with the experiments of Kuo, McLaughlin, and Morris at four nozzle pressure ratios in increments of total temperature ratios from one to four. The equivalent source within the framework of the acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source combined with accurate calculations of the propagation of sound through the jet shear layer, using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for the accurate saturation of BBSAN with increasing stagnation temperature. This is a minor change to the source model relative to the previously developed models. The full development of the scaling term is shown. The sources and vector Green's function solver are informed by steady Reynolds-Averaged Navier-Stokes solutions. These solutions are examined as a function of stagnation temperature at the first shock wave shear layer interaction. 
It is discovered that saturation of BBSAN with increasing jet stagnation temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.

  3. Electromagnetic sounding of the Earth's crust in the region of superdeep boreholes of Yamal-Nenets autonomous district using the fields of natural and controlled sources

    NASA Astrophysics Data System (ADS)

    Zhamaletdinov, A. A.; Petrishchev, M. S.; Shevtsov, A. N.; Kolobov, V. V.; Selivanov, V. N.; Barannik, M. B.; Tereshchenko, E. D.; Grigoriev, V. F.; Sergushin, P. A.; Kopytenko, E. A.; Biryulya, M. A.; Skorokhodov, A. A.; Esipko, O. A.; Damaskin, R. V.

    2013-11-01

    Electromagnetic soundings with the fields of natural (magnetotelluric (MT) and audio-magnetotelluric (AMT)) and high-power controlled sources have been carried out in the region of the SG-6 (Tyumen) and SG-7 (En-Yakhin) superdeep boreholes in the Yamal-Nenets autonomous district (YaNAD). In the controlled-source soundings, the electromagnetic field was generated by the VL Urengoi-Pangody 220-kV industrial power transmission line (PTL), which has a length of 114 km, and the ultralow-frequency (ULF) Zevs radiating antenna located at a distance of 2000 km from the signal recording sites. In the soundings with the Urengoi-Pangody PTL, the Energiya-2 generator, capable of supplying up to 200 kW of power, and the Energiya-3 portable generator, with a power of 2 kW, were used as the sources. These generators were designed and manufactured at the Kola Science Center of the Russian Academy of Sciences. The soundings with the Energiya-2 generator were conducted in the frequency range from 0.38 to 175 Hz. The external generator was connected to the PTL upon agreement with the Yamal-Nenets Enterprise of Main Electric Networks, a branch of OAO FSK ES of Western Siberia. The connection was carried out by the wire-ground scheme during routine maintenance of the PTL in the nighttime. The highest-quality signals were recorded in the region of the SG-7 (En-Yakhin) superdeep borehole, where the industrial noise is lowest. The results of the inversion of the soundings with the PTL and the Zevs ULF transmitter completely agree with each other and with the data of electric logging. The MT-AMT data provide additional information about the deep structure of the region in the low-frequency range (below 1 Hz). It is established that the section of the SG-6 and SG-7 boreholes contains conductive layers in the depth intervals from 0.15 to 0.3 km and from 1 to 1.5 km. These layers are associated with variations in the lithological composition, porosity, and fluid saturation of the rocks. 
The top of the poorly conductive Permian-Triassic complex is identified at a depth of about 7 km. On the basis of the MT data in the lowest frequency band (hourly and longer periods) with the observations at the Novosibirsk observatory taken into account, the distribution of electric resistivity up to a depth of 800 km is reconstructed. This distribution can be used as additional information when calculating the temperature and rheology of the lithosphere and upper mantle in West Siberia. The results of our studies demonstrate the high potential of the complex electromagnetic soundings with natural and controlled sources in the study of deep structure of the lithosphere and tracing deep oil-and-gas-bearing horizons in the sedimentary cover of the West Siberian Platform within the Yamal-Nenets autonomous district.

  4. Active control of turbulent boundary layer-induced sound transmission through the cavity-backed double panels

    NASA Astrophysics Data System (ADS)

    Caiazzo, A.; Alujević, N.; Pluymers, B.; Desmet, W.

    2018-05-01

    This paper presents a theoretical study of active control of turbulent boundary layer (TBL) induced sound transmission through the cavity-backed double panels. The aerodynamic model used is based on the Corcos wall pressure distribution. The structural-acoustic model encompasses a source panel (skin panel), coupled through an acoustic cavity to the radiating panel (trim panel). The radiating panel is backed by a larger acoustic enclosure (the back cavity). A feedback control unit is located inside the acoustic cavity between the two panels. It consists of a control force actuator and a sensor mounted at the actuator footprint on the radiating panel. The control actuator can react off the source panel. It is driven by an amplified velocity signal measured by the sensor. A fully coupled analytical structural-acoustic model is developed to study the effects of the active control on the sound transmission into the back cavity. The stability and performance of the active control system are firstly studied on a reduced order model. In the reduced order model only two fundamental modes of the fully coupled system are assumed. Secondly, a full order model is considered with a number of modes large enough to yield accurate simulation results up to 1000 Hz. It is shown that convincing reductions of the TBL-induced vibrations of the radiating panel and the sound pressure inside the back cavity can be expected. The reductions are more pronounced for a certain class of systems, which is characterised by the fundamental natural frequency of the skin panel larger than the fundamental natural frequency of the trim panel.
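
    The control law described, a force actuator driven by an amplified velocity signal measured at its own footprint, is direct velocity feedback, which acts as added viscous damping. A single-mode sketch with hypothetical modal parameters (not those of the paper's panels):

```python
import math

# Hypothetical modal parameters for one structural mode of the radiating panel.
m, k, c = 1.0, 1.0e4, 2.0  # modal mass (kg), stiffness (N/m), damping (N s/m)

def damping_ratio(gain: float) -> float:
    """Closed-loop damping ratio of m*x'' + (c + gain)*x' + k*x = 0,
    where the control force u = -gain * x' simply adds viscous damping."""
    return (c + gain) / (2.0 * math.sqrt(k * m))

for g in (0.0, 50.0, 200.0):
    print(f"feedback gain {g:6.1f} N s/m -> damping ratio {damping_ratio(g):.3f}")
```

    In this idealized single-mode case a larger gain always means more damping; in the actual double-panel system, where the actuator reacts off the source panel, stability is conditional and must be checked, as the paper does on its reduced-order model.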

  5. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl

    PubMed Central

    Baxter, Caitlin S.; Takahashi, Terry T.

    2013-01-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801
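
    The discharge rule summarized above, respond when the in-field stimulus is louder and the averaged amplitude of both sounds is rising, can be illustrated with a toy calculation (a drastic simplification of the Nelson and Takahashi model, not their implementation):

```python
def map_strength(env_a, env_b):
    """Toy discharge rule: a map site representing source A is driven at
    samples where A is louder than B while the average of the two
    envelopes is rising (and vice versa for B)."""
    strength_a = strength_b = 0
    for t in range(1, len(env_a)):
        avg_now = 0.5 * (env_a[t] + env_b[t])
        avg_prev = 0.5 * (env_a[t - 1] + env_b[t - 1])
        if avg_now > avg_prev:  # rising averaged amplitude
            if env_a[t] > env_b[t]:
                strength_a += 1
            elif env_b[t] > env_a[t]:
                strength_b += 1
    return strength_a, strength_b

# Identical envelope, one-sample "echo" delay: the lead masks the lag.
lead = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
lag = [0.0] + lead[:-1]  # delayed copy
print(map_strength(lead, lag))
```

    With similar envelopes and a short delay, the lead's peaks coincide with the rising portions of the averaged amplitude while the lag's do not, so the lead dominates the map representation, mirroring the precedence effect.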

  6. Acoustic leak-detection system for railroad transportation security

    NASA Astrophysics Data System (ADS)

    Womble, P. C.; Spadaro, J.; Harrison, M. A.; Barzilov, A.; Harper, D.; Hopper, L.; Houchins, E.; Lemoff, B.; Martin, R.; McGrath, C.; Moore, R.; Novikov, I.; Paschal, J.; Rogers, S.

    2007-04-01

    Pressurized rail tank cars transport large volumes of volatile liquids and gases throughout the country, much of which is hazardous and/or flammable. These gases, once released in the atmosphere, can wreak havoc with the environment and local populations. We developed a system which can non-intrusively and non-invasively detect and locate pinhole-sized leaks in pressurized rail tank cars using acoustic sensors. The sound waves from a leak are produced by turbulence from the gas leaking to the atmosphere. For example, a 500 μm hole in an air tank pressurized to 689 kPa produces a broad audio frequency spectrum with a peak near 40 kHz. This signal is detectable at 10 meters with a sound pressure level of 25 dB. We are able to locate a leak source using triangulation techniques. The prototype of the system consists of a network of acoustic sensors and is located approximately 10 meters from the center of the rail-line. The prototype has two types of acoustic sensors, each with different narrow frequency response band: 40 kHz and 80 kHz. The prototype is connected to the Internet using WiFi (802.11g) transceiver and can be remotely operated from anywhere in the world. The paper discusses the construction, operation and performance of the system.
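
    Locating the leak by triangulation from a sensor network can be sketched as a time-difference-of-arrival (TDOA) fit. The sensor layout, leak position, and brute-force grid search below are illustrative assumptions, not the system's actual solver:

```python
import math

C = 343.0  # nominal speed of sound in air (m/s)

def tdoas(src, sensors):
    """Time differences of arrival relative to the first sensor (s)."""
    d = [math.dist(src, s) for s in sensors]
    return [(di - d[0]) / C for di in d]

def locate(measured, sensors, span=15.0, step=0.1):
    """Brute-force least-squares grid search over candidate positions."""
    best, best_err = None, float("inf")
    n = round(2 * span / step)
    for i in range(n + 1):
        for j in range(n + 1):
            p = (-span + i * step, -span + j * step)
            err = sum((t - m) ** 2
                      for t, m in zip(tdoas(p, sensors), measured))
            if err < best_err:
                best, best_err = p, err
    return best

# Hypothetical layout: three sensors on a line ~10 m from the track,
# leak at (3, 1).
sensors = [(-10.0, 10.0), (0.0, 10.0), (10.0, 10.0)]
leak = (3.0, 1.0)
est = locate(tdoas(leak, sensors), sensors)
print(f"estimated leak position: ({est[0]:.2f}, {est[1]:.2f})")
```

    In practice the TDOAs would come from cross-correlating band-passed (e.g. near 40 kHz) sensor signals, and a collinear array leaves a mirror ambiguity about the sensor line that additional sensors or geometry must resolve.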

  7. Marine Mammals Monitoring for Northwest Fisheries: 2005 Field Year

    DTIC Science & Technology

    2007-07-01

    killer whale (orca) pods (Pods J, K, and L) of Puget Sound. Collectively these three groups of animals are known as Southern Residents (SR). The...with a visual observation program for SR killer whales in Puget Sound (D. Bain). Table 1. Locations for PAL moorings in 2005 Location PAL ID...specific whale pods. In particular, the SR killer whales of Puget Sound have been monitored extensively over many years. Specific call

  8. Measurement of Model Noise in a Hard-Wall Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.

    2006-01-01

    Identification, analysis, and control of fluid-mechanically-generated sound from models of aircraft and automobiles in special low-noise, semi-anechoic wind tunnels are an important research endeavor. Such studies can also be done in aerodynamic wind tunnels that have hard walls if phased microphone arrays are used to focus on the noise-source regions and reject unwanted reflections or background noise. Although it may be difficult to simulate the total flyover or drive-by noise in a closed wind tunnel, individual noise sources can be isolated and analyzed. An acoustic and aerodynamic study was made of a 7-percent-scale aircraft model in a NASA Ames 7-by-10-ft (about 2-by-3-m) wind tunnel for the purpose of identifying and attenuating airframe noise sources. Simulated landing, takeoff, and approach configurations were evaluated at Mach 0.26. Using a phased microphone array mounted in the ceiling over the inverted model, various noise sources in the high-lift system, landing gear, fins, and miscellaneous other components were located and compared for sound level and frequency at one flyover location. Numerous noise-alleviation devices and modifications of the model were evaluated. Simultaneously with acoustic measurements, aerodynamic forces were recorded to document aircraft conditions and any performance changes caused by geometric modifications. Most modern microphone-array systems function in the frequency domain in the sense that spectra of the microphone outputs are computed, then operations are performed on the matrices of microphone-signal cross-spectra. The entire acoustic field at one station in such a system is acquired quickly and interrogated during postprocessing. Beam-forming algorithms are employed to scan a plane near the model surface and locate noise sources while rejecting most background noise and spurious reflections. 
In the case of the system used in this study, previous studies in the wind tunnel have identified noise sources up to 19 dB below the normal background noise of the wind tunnel. Theoretical predictions of array performance are used to minimize the width and the side lobes of the beam pattern of the microphone array for a given test arrangement. To capture flyover noise of the inverted model, a 104-element microphone array in a 622-mm-diameter cluster was installed in a 19-mm-thick poly(methyl methacrylate) plate in the ceiling of the test section of the wind tunnel above the aircraft model (see Figure 1). The microphones were of the condenser type, and their diaphragms were mounted flush in the array plate, which was recessed 12.7 mm into the ceiling and covered by a porous aromatic polyamide cloth (not shown in the figure) to minimize boundary-layer noise. This design caused the level of flow noise to be much less than that of flush-mount designs. The drawback of this design was that the cloth attenuated sound somewhat and created acoustic resonances that could grow to several dB at a frequency of 10 kHz.
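
    The frequency-domain procedure described above (compute the matrix of microphone cross-spectra, then scan steering vectors over candidate directions) can be illustrated in a minimal form. The 16-element line array, analysis frequency, and 20° source angle below are made-up values for the sketch, not the 104-element array of this study.

```python
import numpy as np

c, f = 343.0, 2000.0            # speed of sound (m/s), analysis frequency (Hz)
k = 2 * np.pi * f / c           # wavenumber
x = np.linspace(-0.3, 0.3, 16)  # hypothetical 16-element line array positions (m)

def steering(theta):
    """Plane-wave phase delays across the array for arrival angle theta (rad)."""
    return np.exp(1j * k * x * np.sin(theta))

# Simulate snapshots from a source at 20 degrees plus weak sensor noise,
# then average outer products to form the cross-spectral matrix (CSM).
rng = np.random.default_rng(0)
a_true = steering(np.deg2rad(20.0))
snaps = [a_true * rng.normal()
         + 0.01 * (rng.normal(size=16) + 1j * rng.normal(size=16))
         for _ in range(200)]
csm = sum(np.outer(s, s.conj()) for s in snaps) / len(snaps)

# Conventional (delay-and-sum) scan: steer over angles, read output power.
angles = np.deg2rad(np.arange(-90, 91))
power = [np.real(steering(t).conj() @ csm @ steering(t)) / 16**2 for t in angles]
peak = np.rad2deg(angles[int(np.argmax(power))])
print(peak)  # near 20 degrees
```

    Real array systems additionally remove the CSM diagonal to suppress uncorrelated microphone self-noise and scan a plane of grid points near the model surface rather than a single angle axis.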

  9. Structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.

    1994-01-01

    The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.
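
    Lighthill's theory, referenced above, recasts the exact flow equations as an inhomogeneous wave equation whose right-hand side acts as the sound source evaluated from the time-dependent near field; in the standard form:

```latex
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho'
  = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \bigl(p' - c_0^2 \rho'\bigr)\delta_{ij} - \tau_{ij}
```

    where ρ′ and p′ are the density and pressure fluctuations, c₀ is the ambient sound speed, and T_ij is the Lighthill stress tensor computed here from the simulated near field; the far-field noise follows from the retarded-time volume integral over the source region.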

  10. Underwater sound of rigid-hulled inflatable boats.

    PubMed

    Erbe, Christine; Liong, Syafrin; Koessler, Matthew Walter; Duncan, Alec J; Gourlay, Tim

    2016-06-01

    Underwater sound of rigid-hulled inflatable boats was recorded 142 times in total, over 3 sites: 2 in southern British Columbia, Canada, and 1 off Western Australia. Underwater sound peaked between 70 and 400 Hz, exhibiting strong tones in this frequency range related to engine and propeller rotation. Sound propagation models were applied to compute monopole source levels, with the source assumed 1 m below the sea surface. Broadband source levels (10-48 000 Hz) increased from 134 to 171 dB re 1 μPa @ 1 m with speed from 3 to 16 m/s (10-56 km/h). Source power spectral density percentile levels and 1/3 octave band levels are given for use in predictive modeling of underwater sound of these boats as part of environmental impact assessments.
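
    The back-propagation from received level to monopole source level can be sketched with the simplest possible transmission-loss model. The study applied site-specific propagation models; the spherical-spreading law and the numbers below are stand-in assumptions for illustration only.

```python
import math

def source_level(received_db, range_m):
    """Back-propagate a received level (dB re 1 uPa) to 1 m,
    assuming spherical spreading: SL = RL + 20*log10(r)."""
    return received_db + 20.0 * math.log10(range_m)

# Example: a boat received at 120 dB re 1 uPa from 100 m range.
print(source_level(120.0, 100.0))  # 160.0 dB re 1 uPa @ 1 m
```

    In shallow water the 20 log10(r) term is replaced by the modeled transmission loss, which is why the paper's broadband source levels depend on a propagation model rather than a simple formula.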

  11. Control of Toxic Chemicals in Puget Sound, Phase 3: Study Of Atmospheric Deposition of Air Toxics to the Surface of Puget Sound

    DTIC Science & Technology

    2007-01-01

    deposition directly to Puget Sound was an important source of PAHs, polybrominated diphenyl ethers (PBDEs), and heavy metals. In most cases, atmospheric...versus Atmospheric Fluxes ... 66. PAH Source Apportionment...temperature inversions) on air quality during the wet season. A semi-quantitative apportionment study permitted a first-order characterization of source

  12. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant...'Cochlear Implant Performance in Realistic Listening Environments,' Dr. Michael Dorman, Principal Investigator, Dr. William Yost unpaid advisor. The other... Listeners Move. The CI research was also supported by an NIH grant (“Cochlear Implant Performance in Realistic Listening Environments,” Dr. Michael Dorman

  13. Acoustic signatures of sound source-tract coupling.

    PubMed

    Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B

    2011-04-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated with the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society

  14. Acoustic signatures of sound source-tract coupling

    PubMed Central

    Arneodo, Ezequiel M.; Perl, Yonatan Sanz; Mindlin, Gabriel B.

    2014-01-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated with the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced “frequency jumps,” enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. PMID:21599213

  15. Adaptive and Collaborative Exploitation of 3 Dimensional Environmental Acoustics in Distributed Undersea Networks

    DTIC Science & Technology

    2015-09-30

    experiment was conducted in Broad Sound of Massachusetts Bay using the AUV Unicorn, a 147dB omnidirectional Lubell source, and an open-ended steel pipe... steel pipe target (Figure C) was dropped at an approximate local coordinate position of (x,y)=(170,155). The location was estimated using ship...position when the target was dropped, but was only accurate within 10-15m. The orientation of the target was unknown. Figure C: Open-ended steel

  16. The Measurement of the Effects of Helmet Form on Sound Source Detection and Localization Using a Portable Four-Loudspeaker Test Array

    DTIC Science & Technology

    2013-05-01

    to data collection, a rough estimate of each listener’s binaural hearing threshold (with a bare head) was obtained for each of the test frequencies...spectral information that allows disambiguation of binaural cues lies primarily in the higher frequencies. For the analysis shown in the second...Moore, 2012). The binaural cues of level and phase differences are fairly robust; however, they can only help to determine locations on the left-right

  17. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia.

    PubMed

    Castro-Camacho, Wendy; Peñaloza-López, Yolanda; Pérez-Ruiz, Santiago J; García-Pedroza, Felipe; Padilla-Ortiz, Ana L; Poblano, Adrián; Villarruel-Rivas, Concepción; Romero-Díaz, Alfredo; Careaga-Olvera, Aidé

    2015-04-01

    To compare whether localization of sounds and discrimination of words in a reverberant environment differ between children with dyslexia and controls, we studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles from the left to the right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance at left angles in children with dyslexia. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Children with dyslexia may have problems when they have to localize sounds and discriminate words at extreme locations of the horizontal plane in classrooms with reverberation.

  18. Intensity-invariant coding in the auditory system.

    PubMed

    Barbour, Dennis L

    2011-11-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Bowhead whale localization using asynchronous hydrophones in the Chukchi Sea.

    PubMed

    Warner, Graham A; Dosso, Stan E; Hannay, David E; Dettmer, Jan

    2016-07-01

    This paper estimates bowhead whale locations and uncertainties using non-linear Bayesian inversion of their modally-dispersed calls recorded on asynchronous recorders in the Chukchi Sea, Alaska. Bowhead calls were recorded on a cluster of 7 asynchronous ocean-bottom hydrophones that were separated by 0.5-9.2 km. A warping time-frequency analysis is used to extract relative mode arrival times as a function of frequency for nine frequency-modulated whale calls that dispersed in the shallow water environment. Each call was recorded on multiple hydrophones and the mode arrival times are inverted for: the whale location in the horizontal plane, source instantaneous frequency (IF), water sound-speed profile, seabed geoacoustic parameters, relative recorder clock drifts, and residual error standard deviations, all with estimated uncertainties. A simulation study shows that accurate prior environmental knowledge is not required for accurate localization as long as the inversion treats the environment as unknown. Joint inversion of multiple recorded calls is shown to substantially reduce uncertainties in location, source IF, and relative clock drift. Whale location uncertainties are estimated to be 30-160 m and relative clock drift uncertainties are 3-26 ms.

  20. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway.

    PubMed

    Yao, Justin D; Bremen, Peter; Middlebrooks, John C

    2015-12-09

    Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC); it appears in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct mutually synchronized neural populations. Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. 
Here, we investigated SSS in three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex. Copyright © 2015 Yao et al.

  1. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to not only look at the time and frequency characteristics of an audio signal but also the spatial characteristics of an audio signal. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. 
Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high-order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
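
    A minimal numerical sketch of the spherical harmonic decomposition described above: sample a field on the sphere and project it onto a hand-coded, low-order real spherical harmonic basis by quadrature. The basis truncation at order 1 and the synthetic test field are illustrative assumptions; a real array processor would use measured microphone signals and many more orders.

```python
import numpy as np

def Y(l, m, theta, phi):
    """Real spherical harmonics up to order 1 (theta = colatitude)."""
    if (l, m) == (0, 0):
        return np.full_like(theta, 1.0 / np.sqrt(4 * np.pi))
    if (l, m) == (1, 0):
        return np.sqrt(3.0 / (4 * np.pi)) * np.cos(theta)
    if (l, m) == (1, 1):
        return -np.sqrt(3.0 / (4 * np.pi)) * np.sin(theta) * np.cos(phi)
    raise ValueError("order not implemented in this sketch")

# Quadrature grid over the sphere; sin(theta) is the surface-area weight.
th, ph = np.meshgrid(np.linspace(0, np.pi, 400), np.linspace(0, 2 * np.pi, 400))
w = np.sin(th) * (np.pi / 399) * (2 * np.pi / 399)

# A field with known composition: 2*Y00 + 0.5*Y10.
field = 2.0 * Y(0, 0, th, ph) + 0.5 * Y(1, 0, th, ph)

# Projection: the basis is orthonormal, so the quadrature sums
# recover the coefficients directly.
coeffs = {lm: float(np.sum(field * Y(*lm, th, ph) * w))
          for lm in [(0, 0), (1, 0), (1, 1)]}
print(coeffs)  # approximately {(0,0): 2.0, (1,0): 0.5, (1,1): 0.0}
```

    The decoupling the abstract mentions is visible here: the coefficients describe the field itself, independent of where the individual sample points (microphones) sit, once the projection has been formed.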

  2. Tectonic evolution, structural styles, and oil habitat in Campeche Sound, Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angeles-Aquino, F.J.; Reyes-Nunez, J.; Quezada-Muneton, J.M.

    1994-12-31

    Campeche Sound is located in the southern part of the Gulf of Mexico. This area is Mexico's most important petroleum province. The Mesozoic section includes Callovian salt deposits; Upper Jurassic sandstones, anhydrites, limestones, and shales; and Cretaceous limestones, dolomites, shales, and carbonate breccias. The Cenozoic section is formed by bentonitic shales and minor sandstones and carbonate breccias. Campeche Sound has been affected by three episodes of deformation: first extensional tectonism, then compressional tectonism, and finally extensional tectonism again. The first period of deformation extended from the middle Jurassic to late Jurassic and is related to the opening of the Gulf of Mexico. During this regime, tilted block faults trending northwest-southwest were dominant. The subsequent compressional regime occurred during the middle Miocene, and it was related to northeast tangential stresses that induced further flow of Callovian salt and gave rise to large faulted, and commonly overturned, anticlines. The last extensional regime lasted throughout the middle and late Miocene, and it is related to salt tectonics and growth faults that have a middle Miocene shaly horizon as the main detachment surface. The main source rocks are Tithonian shales and shaly limestones. Oolite bars, slope and shelf carbonates, and regressive sandstones form the main reservoirs. Evaporites and shales are the regional seals. Recent information indicates that Oxfordian shaly limestones are also important source rocks.

  3. Optical analysis of enamel and dentin caries in relation to mineral density using swept-source optical coherence tomography

    PubMed Central

    Ueno, Tomoka; Shimada, Yasushi; Matin, Khairul; Zhou, Yuan; Wada, Ikumi; Sadr, Alireza; Sumi, Yasunori; Tagami, Junji

    2016-01-01

    Abstract. The aim of this study was to evaluate the signal intensity and signal attenuation of swept-source optical coherence tomography (SS-OCT) for dental caries in relation to the variation of mineral density. SS-OCT observation was performed on artificial enamel and dentin demineralization and on natural caries. The artificial caries model on enamel and dentin surfaces was created using Streptococcus mutans biofilms incubated in an oral biofilm reactor. The lesions were centrally cross sectioned and SS-OCT scans were obtained in two directions to construct a three-dimensional data set, from the lesion surface (sagittal scan) and parallel to the lesion surface (horizontal scan). The integrated signal up to 200 μm in depth (IS200) and the attenuation coefficient (μ) of the enamel and dentin lesions were calculated from the SS-OCT signal in horizontal scans at five locations of lesion depth. The values were compared with the mineral density obtained from transverse microradiography. Both enamel and dentin demineralization showed significantly higher IS200 and μ than the sound tooth substrate from the sagittal scan. Enamel demineralization showed significantly higher IS200 than sound enamel, even with low levels of demineralization. In demineralized dentin, the μ from the horizontal scan consistently trended downward compared to the sound dentin. PMID:27704033

  4. Acoustic Location of Lightning Using Interferometric Techniques

    NASA Astrophysics Data System (ADS)

    Erives, H.; Arechiga, R. O.; Stock, M.; Lapierre, J. L.; Edens, H. E.; Stringer, A.; Rison, W.; Thomas, R. J.

    2013-12-01

    Acoustic arrays have been used to accurately locate thunder sources in lightning flashes. The acoustic arrays located around the Magdalena mountains of central New Mexico produce locations which compare quite well with source locations provided by the New Mexico Tech Lightning Mapping Array. These arrays utilize 3 outer microphones surrounding a 4th microphone located at the center. The location is computed by band-passing the signals to remove noise and then cross-correlating the outer 3 microphones with respect to the center reference microphone. While this method works very well, it works best on signals with high signal-to-noise ratios; weaker signals are not as well located. Therefore, methods are being explored to improve the location accuracy and detection efficiency of the acoustic location systems. The signal received by acoustic arrays is strikingly similar to the signal received by radio frequency interferometers. Both acoustic location systems and radio frequency interferometers make coherent measurements of a signal arriving at a number of closely spaced antennas. Both systems then correlate these signals between pairs of receivers to determine the direction to the source of the received signal. The primary difference between the two systems is the velocity of propagation of the emission, which is much slower for sound. Therefore, the same frequency-based techniques that have been used quite successfully with radio interferometers should be applicable to acoustic measurements as well. The results presented here are comparisons between the location results obtained with the current cross-correlation method and techniques developed for radio frequency interferometers applied to acoustic signals. The data were obtained during the summer 2013 storm season using multiple arrays sensitive to both infrasonic-frequency and audio-frequency acoustic emissions from lightning. 
Preliminary results show that interferometric techniques have good potential for improving the lightning location accuracy and detection efficiency of acoustic arrays.
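
    The cross-correlation step this record describes, estimating the delay between the center reference microphone and an outer microphone, reduces to finding the lag that maximizes their cross-correlation. The synthetic broadband signal, sample rate, and delay below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                    # assumed sample rate (Hz)
sig = rng.normal(size=2000)  # broadband stand-in for a band-passed thunder signal
delay = 37                   # true delay in samples

ref = sig                                                # center reference microphone
outer = np.concatenate([np.zeros(delay), sig])[:2000]    # delayed copy at outer mic

# The lag of the cross-correlation peak is the time-delay estimate.
corr = np.correlate(outer, ref, mode="full")
lag = int(np.argmax(corr)) - (len(ref) - 1)
print(lag)  # 37 samples, i.e. lag/fs seconds
```

    The interferometric alternative discussed in the record estimates the same delay from the phase slope of the cross-spectrum rather than from the time-domain peak, which is what makes it attractive for low signal-to-noise signals.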

  5. Numerical Models for Sound Propagation in Long Spaces

    NASA Astrophysics Data System (ADS)

    Lai, Chenly Yuen Cheung

    Both reverberation time and steady-state sound field are the key elements for assessing the acoustic condition in an enclosed space. They affect noise propagation, speech intelligibility, clarity index, and definition. Since the sound field in a long space is non-diffuse, classical room acoustics theory does not apply in this situation. The ray tracing technique and the image source method are two common models used today to determine both reverberation time and steady-state sound field in long enclosures. Although both models can give an accurate estimate of reverberation times and steady-state sound fields directly or indirectly, they often involve time-consuming calculations. In order to simplify the acoustic consideration, a theoretical formulation has been developed for predicting both steady-state sound fields and reverberation times in street canyons. The prediction model is further developed to predict the steady-state sound field in a long enclosure. Apart from the straight long enclosure, there are other variations such as a cross junction, a long enclosure with a T-intersection, and a U-turn long enclosure. In the present study, theoretical and experimental investigations were conducted to develop formulae for predicting reverberation times and steady-state sound fields in a junction of a street canyon and in a long enclosure with a T-intersection. The theoretical models are validated by comparing the numerical predictions with published experimental results. The theoretical results are also compared with precise indoor measurements and large-scale outdoor experimental results. In previous acoustical studies of long enclosures, most work has focused on monopole sound sources. Besides non-directional noise sources, many noise sources in long enclosures are dipole-like, such as train noise and fan noise. In order to study the characteristics of directional noise sources, a review of available dipole sources was conducted. 
A dipole was constructed and subsequently used for experimental studies. In addition, a theoretical model was developed for predicting dipole sound fields. The theoretical model can be used to study the effect of a dipole source on speech intelligibility in long enclosures.
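
    As a baseline for the reverberation-time predictions discussed above, the classical diffuse-field Sabine formula (which, as the record notes, does not strictly apply to non-diffuse long enclosures) is easy to state; the room volume and absorption area below are made-up example values.

```python
def sabine_rt60(volume_m3, absorption_area_m2):
    """Classical Sabine reverberation time: RT60 = 0.161 * V / A,
    valid only for approximately diffuse fields."""
    return 0.161 * volume_m3 / absorption_area_m2

# Example: a 5000 m^3 hall with 400 m^2 of equivalent absorption area.
print(round(sabine_rt60(5000.0, 400.0), 2))  # 2.01 s
```

    The long-enclosure models in the thesis exist precisely because this single-number formula breaks down when sound decays differently along and across a corridor-like space.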

  6. Interneurons in the Honeybee Primary Auditory Center Responding to Waggle Dance-Like Vibration Pulses.

    PubMed

    Ai, Hiroyuki; Kai, Kazuki; Kumaraswamy, Ajayrama; Ikeno, Hidetoshi; Wachtler, Thomas

    2017-11-01

    Female honeybees use the "waggle dance" to communicate the location of nectar sources to their hive mates. Distance information is encoded in the duration of the waggle phase (von Frisch, 1967). During the waggle phase, the dancer produces trains of vibration pulses, which are detected by the follower bees via Johnston's organ located on the antennae. To uncover the neural mechanisms underlying the encoding of distance information in the waggle dance follower, we investigated morphology, physiology, and immunohistochemistry of interneurons arborizing in the primary auditory center of the honeybee (Apis mellifera). We identified major interneuron types, named DL-Int-1, DL-Int-2, and bilateral DL-dSEG-LP, that responded with different spiking patterns to vibration pulses applied to the antennae. Experimental and computational analyses suggest that inhibitory connection plays a role in encoding and processing the duration of vibration pulse trains in the primary auditory center of the honeybee. SIGNIFICANCE STATEMENT The waggle dance represents a form of symbolic communication used by honeybees to convey the location of food sources via species-specific sound. The brain mechanisms used to decipher this symbolic information are unknown. We examined interneurons in the honeybee primary auditory center and identified different neuron types with specific properties. The results of our computational analyses suggest that inhibitory connection plays a role in encoding waggle dance signals. Our results are critical for understanding how the honeybee deciphers information from the sound produced by the waggle dance and provide new insights regarding how common neural mechanisms are used by different species to achieve communication. Copyright © 2017 the authors 0270-6474/17/3710624-12$15.00/0.

  7. Snoring classified: The Munich-Passau Snore Sound Corpus.

    PubMed

    Janott, Christoph; Schmitt, Maximilian; Zhang, Yue; Qian, Kun; Pandit, Vedhas; Zhang, Zixing; Heiser, Clemens; Hohenhorst, Winfried; Herzog, Michael; Hemmert, Werner; Schuller, Björn

    2018-03-01

    Snoring can be excited in different locations within the upper airways during sleep. It was hypothesised that the excitation locations are correlated with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds is developed, labelled with the location of sound excitation. Video and audio recordings taken during drug induced sleep endoscopy (DISE) examinations from three medical centres have been semi-automatically screened for snore events, which subsequently have been classified by ENT experts into four classes based on the VOTE classification. The resulting dataset containing 828 snore events from 219 subjects has been split into Train, Development, and Test sets. An SVM classifier has been trained using low level descriptors (LLDs) related to energy, spectral features, mel frequency cepstral coefficients (MFCC), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features. An unweighted average recall (UAR) of 55.8% could be achieved using the full set of LLDs including formants. The best-performing subset is the MFCC-related set of LLDs. A strong difference in performance could be observed between the permutations of train, development, and test partition, which may be caused by the relatively low number of subjects included in the smaller classes of the strongly unbalanced data set. A database of snoring sounds is presented which are classified according to their sound excitation location based on objective criteria and verifiable video material. With the database, it could be demonstrated that machine classifiers can distinguish different excitation location of snoring sounds in the upper airway based on acoustic parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
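
    The unweighted average recall (UAR) reported above averages per-class recall with equal weight, so a classifier gains nothing by favoring the majority class of an unbalanced dataset. A minimal sketch with made-up VOTE-style labels (V, O, T, E for velum, oropharynx, tongue base, epiglottis):

```python
import numpy as np

def uar(y_true, y_pred):
    """Unweighted average recall: mean of per-class recalls."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Hypothetical labels, not data from the corpus.
y_true = np.array(["V", "V", "V", "V", "O", "T", "T", "E"])
y_pred = np.array(["V", "V", "V", "O", "O", "T", "E", "E"])
print(uar(y_true, y_pred))  # (0.75 + 1.0 + 0.5 + 1.0) / 4 = 0.8125
```

    Plain accuracy on the same labels would weight the four V items most heavily; UAR treats each excitation class equally, which is why it is the metric of choice for this strongly unbalanced corpus.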

  8. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668

  9. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.

    PubMed

    Kidd, Gerald

    2017-10-17

    Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. 
Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.
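
    The acoustic beamforming underlying the VGHA can be illustrated with the simplest member of the family, a delay-and-sum beamformer steered to a chosen azimuth. The array geometry, sample rate, and signals below are hypothetical stand-ins, not the VGHA's actual design:

```python
import numpy as np

C = 343.0        # speed of sound, m/s
FS = 16000       # sample rate, Hz
SPACING = 0.04   # microphone spacing, m (hypothetical 4-mic linear array)
N_MIC = 4

def array_delays(theta_deg):
    # Per-microphone arrival delays (s) for a distant source at azimuth theta.
    theta = np.radians(theta_deg)
    return np.arange(N_MIC) * SPACING * np.sin(theta) / C

def simulate(theta_deg, sig):
    # Apply fractional delays in the frequency domain to mimic the array.
    freqs = np.fft.rfftfreq(len(sig), 1 / FS)
    S = np.fft.rfft(sig)
    return np.array([np.fft.irfft(S * np.exp(-2j * np.pi * freqs * d), len(sig))
                     for d in array_delays(theta_deg)])

def delay_and_sum(mics, steer_deg):
    # Advance each channel by the steering delay, then average.
    freqs = np.fft.rfftfreq(mics.shape[1], 1 / FS)
    out = sum(np.fft.irfft(np.fft.rfft(m) * np.exp(2j * np.pi * freqs * d),
                           mics.shape[1])
              for m, d in zip(mics, array_delays(steer_deg)))
    return out / N_MIC

t = np.arange(FS) / FS
target = np.sin(2 * np.pi * 1000 * t)   # 1 kHz "talker" at 60 degrees
mics = simulate(60, target)
on = delay_and_sum(mics, 60)    # beam steered at the talker (as eye gaze would)
off = delay_and_sum(mics, -60)  # beam steered away from the talker
```

    Steering at the talker's azimuth sums the channels in phase, so the on-target output carries far more energy than the off-target one; eye-gaze steering simply updates `steer_deg` as the listener looks around.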

  10. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    PubMed Central

    2017-01-01

    Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. 
Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601621 PMID:29049603

  11. Relation of sound intensity and accuracy of localization.

    PubMed

    Farrimond, T

    1989-08-01

    Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

  12. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
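
    The contrast between the two minimizers can be reproduced in a few lines. Below is a proximal-gradient (ISTA) sketch of the elastic-net objective on a made-up 16-microphone, 24-loudspeaker example; setting lam2 = 0 recovers the lasso. This is an illustrative solver, not the paper's implementation:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net(A, y, lam1, lam2, n_iter=2000):
    """ISTA for min_x 0.5*||A x - y||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2.
    lam2 = 0 gives the lasso."""
    L = np.linalg.norm(A, 2) ** 2 + lam2   # Lipschitz constant of smooth part
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam2 * x
        x = soft_threshold(x - step * grad, step * lam1)
    return x

rng = np.random.default_rng(0)
# 16 "microphone" samples of a field produced by 3 of 24 candidate sources.
A = rng.standard_normal((16, 24))
x_true = np.zeros(24)
x_true[[2, 7, 19]] = [1.0, -0.5, 0.8]
y = A @ x_true
x_lasso = elastic_net(A, y, lam1=0.1, lam2=0.0)
x_enet = elastic_net(A, y, lam1=0.1, lam2=0.5)
print(np.count_nonzero(np.abs(x_lasso) > 1e-6),
      np.count_nonzero(np.abs(x_enet) > 1e-6))
```

    The quadratic term makes the elastic-net problem strictly convex, which is what resolves the non-uniqueness of the lasso solution mentioned in the abstract, while the l1 term still keeps most loudspeaker gains at exactly zero.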

  13. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location

    PubMed Central

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-01-01

    Summary Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5 secs delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location. PMID:18566808

  14. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location.

    PubMed

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-08-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here, we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array) or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d') for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location.

  15. Sound Sources Identified in High-Speed Jets by Correlating Flow Density Fluctuations With Far-Field Noise

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta; Seasholtz, Richard G.

    2003-01-01

    Noise sources in high-speed jets were identified by directly correlating flow density fluctuation (cause) to far-field sound pressure fluctuation (effect). The experimental study was performed in a nozzle facility at the NASA Glenn Research Center in support of NASA's initiative to reduce the noise emitted by commercial airplanes. Previous efforts to use this correlation method have failed because the tools for measuring jet turbulence were intrusive. In the present experiment, a molecular Rayleigh-scattering technique was used that depended on laser light scattering by gas molecules in air. The technique allowed accurate measurement of air density fluctuations from different points in the plume. The study was conducted in shock-free, unheated jets of Mach numbers 0.95, 1.4, and 1.8. The turbulent motion, as evident from density fluctuation spectra, was remarkably similar in all three jets, whereas the noise sources were significantly different. The correlation study was conducted by keeping a microphone at a fixed location (at the peak noise emission angle of 30° to the jet axis and 50 nozzle diameters away) while moving the laser probe volume from point to point in the flow. The following figure shows maps of the nondimensional coherence value measured at different Strouhal frequencies (frequency × diameter/jet speed) in the supersonic Mach 1.8 and subsonic Mach 0.95 jets. The higher the coherence, the stronger the source.
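
    The cause-and-effect correlation in this study is quantified by the magnitude-squared coherence between the two signals. A Welch-style numpy sketch on synthetic data follows; the 200 Hz tone and noise levels are invented, standing in for the Rayleigh-scattering density signal and the microphone pressure signal:

```python
import numpy as np

def msc(x, y, fs, nseg=256):
    """Welch-averaged magnitude-squared coherence (numpy only)."""
    n = (len(x) // nseg) * nseg
    w = np.hanning(nseg)
    X = np.fft.rfft(x[:n].reshape(-1, nseg) * w)
    Y = np.fft.rfft(y[:n].reshape(-1, nseg) * w)
    Pxx = np.mean(np.abs(X) ** 2, axis=0)
    Pyy = np.mean(np.abs(Y) ** 2, axis=0)
    Pxy = np.mean(X * np.conj(Y), axis=0)
    return np.fft.rfftfreq(nseg, 1 / fs), np.abs(Pxy) ** 2 / (Pxx * Pyy)

fs = 2048
t = np.arange(16384) / fs
rng = np.random.default_rng(1)
# A 200 Hz "source" component common to both signals, buried in
# independent noise (uncorrelated turbulence / measurement noise).
density = np.sin(2 * np.pi * 200 * t) + 0.5 * rng.standard_normal(t.size)
pressure = np.sin(2 * np.pi * 200 * t - 0.7) + 0.5 * rng.standard_normal(t.size)
f, coh = msc(density, pressure, fs)
print(coh[np.argmin(np.abs(f - 200))])  # high: shared source component
```

    The coherence approaches 1 at the shared 200 Hz component and falls toward 1/(number of averaged segments) where only independent noise is present; the Strouhal frequency in the paper is just such a frequency nondimensionalized as f·D/U.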

  16. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. 
A strong bias was detectable for small angular discrepancies between audio and video stimuli for separations of less than 8 degrees for speech and less than 4 degrees with a pink noise burst. The results allow for the density of WFS systems to be selected from the required localization accuracy. Also, by exploiting the ventriloquist effect, the angular resolution of an audio rendering may be reduced when combined with spatially-accurate video.

  17. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    PubMed

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

    A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross validation of the quality models extracted from Paris data was carried out by conducting the same survey in Milan. The proposed sound quality general model is correlated with the real perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed by Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.

  18. Computer program to predict aircraft noise levels

    NASA Technical Reports Server (NTRS)

    Clark, B. J.

    1981-01-01

    Methods developed at the NASA Lewis Research Center for predicting the noise contributions from various aircraft noise sources were programmed to predict aircraft noise levels either in flight or in ground tests. The noise sources include fan inlet and exhaust, jet, flap (for powered lift), core (combustor), turbine, and airframe. Noise propagation corrections are available for atmospheric attenuation, ground reflections, extra ground attenuation, and shielding. Outputs can include spectra, overall sound pressure level, perceived noise level, tone-weighted perceived noise level, and effective perceived noise level at locations specified by the user. Footprint contour coordinates and approximate footprint areas can also be calculated. Inputs and outputs can be in either System International or U.S. customary units. The subroutines for each noise source and propagation correction are described. A complete listing is given.

  19. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  20. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  1. 49 CFR 325.37 - Location and operation of sound level measurement system; highway operations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... system; highway operations. 325.37 Section 325.37 Transportation Other Regulations Relating to...; Highway Operations § 325.37 Location and operation of sound level measurement system; highway operations..., the holder must orient himself/herself relative to the highway in a manner consistent with the...

  2. Repair, Evaluation, Maintenance, and Rehabilitation Research Program. Case Histories of Corps Breakwater and Jetty Structures. Report 6. North Pacific Division

    DTIC Science & Technology

    1988-11-01

    is located in southern Alaska on Orca Inlet at the southeastern approach of Prince William Sound, 145 air miles east-southeast from Anchorage. ... Anacortes Harbor, Washington: Anacortes is located on Fidalgo Island on the east side of Puget Sound. The project includes a 2,850-ft-long channel ... Puget Sound in northern Washington. The project includes three waterways maintained by dredging, a small-boat basin protected by two rubble-mound

  3. West Flank Coso, CA FORGE Magnetotelluric Inversion

    DOE Data Explorer

    Doug Blankenship

    2016-05-16

    The Coso Magnetotelluric (MT) dataset, of which the West Flank FORGE MT data is a subset, was collected by Schlumberger / WesternGeco and initially processed by the WesternGeco GeoSolutions Integrated EM Center of Excellence in Milan, Italy. The 2011 data was based on 99 soundings that were centered on the West Flank geothermal prospect. The new soundings, along with previous data from 2003 and 2006, were incorporated into a 3D inversion. Full impedance tensor data were inverted in the 1-3000 Hz range. The modelling report notes several noise sources, specifically a DC powerline 20,000 feet west of the survey area that may have affected data in the 0.02 to 10 Hz range. Model cell dimensions of 450 x 450 x 65 feet were used to avoid computational instability in the 3D model. The fit between calculated and observed MT values for the final model run had an RMS value of 1.807. The included figure from the WesternGeco report shows the sounding locations from the 2011, 2006 and 2003 surveys.

  4. Development of rotorcraft interior noise control concepts. Phase 2: Full scale testing, revision 1

    NASA Technical Reports Server (NTRS)

    Yoerkie, C. A.; Gintoli, P. J.; Moore, J. A.

    1986-01-01

    The phase 2 effort consisted of a series of ground and flight test measurements to obtain data for validation of the Statistical Energy Analysis (SEA) model. Included in the ground tests were various transfer function measurements between vibratory and acoustic subsystems, vibration and acoustic decay rate measurements, and coherent source measurements. The bulk of these, the vibration transfer functions, were used for SEA model validation, while the others provided information for characterization of damping and reverberation time of the subsystems. The flight test program included measurements of cabin and cockpit sound pressure level, frame and panel vibration level, and vibration levels at the main transmission attachment locations. Comparisons between measured and predicted subsystem excitation levels from both ground and flight testing were evaluated. The ground test data show good correlation with predictions of vibration levels throughout the cabin overhead for all excitations. The flight test results also indicate excellent correlation of inflight sound pressure measurements to sound pressure levels predicted by the SEA model, where the average aircraft speech interference level is predicted within 0.2 dB.

  5. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  6. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  7. Environmental and Management Goal Setting for the Long Island Sound Comprehensive Conservation and Management Plan

    EPA Science Inventory

    Over the past 3 years the Long Island Sound Study (LISS) has been developing a revised Comprehensive Conservation and Management Plan (CCMP), the blueprint for the protection and restoration of the Sound for the next generation. Long Island Sound is located within the most densel...

  8. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  9. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  10. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  11. Development of an ICT-Based Air Column Resonance Learning Media

    NASA Astrophysics Data System (ADS)

    Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut

    2016-08-01

    Commonly, the sound source used in the air column resonance experiment is a tuning fork, whose sound gradually weakens and therefore yields suboptimal resonance results. In this study we generated tones of varying frequency using the Audacity software, which were then stored on a mobile phone serving as the sound source. One advantage of this source is the stability of the resulting sound, which remains equally strong throughout the experiment. The movement of water in a glass tube mounted on the resonance apparatus and the tone emitted by the mobile phone were recorded with a video camera. The first, second, and third resonances were recorded for each tone frequency. Because the sound persists, it can be used for the first, second, third, and subsequent resonance measurements. This study aimed to (1) explain how to create tones that can substitute for the tuning-fork sound used in air column resonance experiments, (2) illustrate the sound wave produced at the first, second, and third resonance, and (3) determine the speed of sound in air. The study used an experimental method. It was concluded that: (1) substitute tones for a tuning fork can be made using the Audacity software; (2) the form of the sound waves occurring at the first, second, and third resonance in the air column can be drawn from the video recordings; and (3) based on the experiment, the speed of sound in air is 346.5 m/s, while chart analysis with the Logger Pro software gives 343.9 ± 0.3171 m/s.
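
    The speed-of-sound estimate follows from the closed-tube resonance condition; a sketch with hypothetical numbers (the 500 Hz tone and the resonance lengths below are illustrative, not the paper's measurements):

```python
# Closed-end air column: resonances occur at L_n = (2n - 1) * wavelength / 4,
# so the spacing between successive resonances gives wavelength = 2 * (L2 - L1).
f = 500.0               # tone frequency in Hz (e.g. generated in Audacity)
L1, L2 = 0.170, 0.515   # hypothetical measured resonance lengths in m
wavelength = 2.0 * (L2 - L1)
v = f * wavelength      # v = f * lambda
print(round(v, 3))      # 345.0 m/s
```

    Using the difference of two resonance lengths cancels the end correction of the tube, which is why the first resonance alone is not used.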

  12. Active Control of Fan-Generated Tone Noise

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.

    1995-01-01

    This paper reports on an experiment to control the noise radiated from the inlet of a ducted fan using a time domain active adaptive system. The control sound source consists of loudspeakers arranged in a ring around the fan duct. The error sensor is located in the fan duct. The purpose of this experiment is to demonstrate that the in-duct error sensor reduces the mode spillover in the far field, thereby increasing the efficiency of the control system. The control system is found to reduce the blade passage frequency tone significantly in the acoustic far field when the mode orders of the noise source and of the control source are the same and the dominant wave in the duct is a plane wave. The presence of higher order modes in the duct reduces the noise reduction efficiency, particularly near the mode cut-on where the standing wave component is strong, but the control system converges stably. The control system is also stable and converges when the first circumferential mode is generated in the duct. The control system is found to reduce the fan noise in the far field on an arc around the fan inlet by as much as 20 dB with none of the sound amplification associated with mode spillover.

  13. Speech intelligibility in complex acoustic environments in young children

    NASA Astrophysics Data System (ADS)

    Litovsky, Ruth

    2003-04-01

    While the auditory system undergoes tremendous maturation during the first few years of life, it has become clear that in complex scenarios when multiple sounds occur and when echoes are present, children's performance is significantly worse than that of adults. The ability of children (3-7 years of age) to understand speech in a simulated multi-talker environment and to benefit from spatial separation of the target and competing sounds was investigated. In these studies, competing sources vary in number, location, and content (speech, modulated or unmodulated speech-shaped noise and time-reversed speech). The acoustic spaces were also varied in size and amount of reverberation. Finally, children with chronic otitis media who received binaural training were tested pre- and post-training on a subset of conditions. Results indicated the following. (1) Children experienced significantly more masking than adults, even in the simplest conditions tested. (2) When the target and competing sounds were spatially separated speech intelligibility improved, but the amount varied with age, type of competing sound, and number of competitors. (3) In a large reverberant classroom there was no benefit of spatial separation. (4) Binaural training improved speech intelligibility performance in children with otitis media. Future work includes similar studies in children with unilateral and bilateral cochlear implants. [Work supported by NIDCD, DRF, and NOHR.]

  14. A training system of orientation and mobility for blind people using acoustic virtual reality.

    PubMed

    Seki, Yoshikazu; Sato, Tetsuji

    2011-02-01

    A new auditory orientation training system was developed for blind people using acoustic virtual reality (VR) based on a head-related transfer function (HRTF) simulation. The present training system can reproduce a virtual training environment for orientation and mobility (O&M) instruction, and the trainee can walk through the virtual training environment safely by listening to sounds such as vehicles, stores, ambient noise, etc., three-dimensionally through headphones. The system can reproduce not only sound sources but also sound reflection and insulation, so that the trainee can learn both sound location and obstacle perception skills. The virtual training environment is described in extensible markup language (XML), and the O&M instructor can edit it easily according to the training curriculum. Evaluation experiments were conducted to test the efficiency of some features of the system. Thirty subjects who had not acquired O&M skills attended the experiments. The subjects were separated into three groups: a no-training group, a virtual-training group using the present system, and a real-training group in real environments. The results suggested that virtual-training can reduce "veering" more than real-training and also can reduce stress as much as real training. The subjective technical and anxiety scores also improved.
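
    The interaural time difference that an HRTF simulation must reproduce can be approximated by Woodworth's classical spherical-head formula. A small sketch follows; the head radius and azimuths are generic textbook values, not parameters of this training system:

```python
import math

HEAD_RADIUS = 0.0875  # m, average adult head (assumed value)
C = 343.0             # speed of sound in air, m/s

def woodworth_itd(azimuth_deg):
    """Woodworth's spherical-head estimate of the interaural time
    difference (s) for a distant source at a frontal azimuth."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS / C * (math.sin(theta) + theta)

# ITD grows from 0 at straight ahead to roughly 650 microseconds at the side.
print(round(woodworth_itd(90) * 1e6))  # microseconds
```

    Full HRTFs also encode spectral (pinna) cues, so this formula only captures the timing component of the cues the training system renders over headphones.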

  15. Automatic analysis of slips of the tongue: Insights into the cognitive architecture of speech production.

    PubMed

    Goldrick, Matthew; Keshet, Joseph; Gustafson, Erin; Heller, Jordana; Needle, Jeremy

    2016-04-01

    Traces of the cognitive mechanisms underlying speaking can be found within subtle variations in how we pronounce sounds. While speech errors have traditionally been seen as categorical substitutions of one sound for another, acoustic/articulatory analyses show they partially reflect the intended sound. When "pig" is mispronounced as "big," the resulting /b/ sound differs from correct productions of "big," moving towards the intended "pig," revealing the role of graded sound representations in speech production. Investigating the origins of such phenomena requires detailed estimation of speech sound distributions; this has been hampered by reliance on subjective, labor-intensive manual annotation. Computational methods can address these issues by providing objective, automatic measurements. We develop a novel high-precision computational approach, based on a set of machine learning algorithms, for measurement of elicited speech. The algorithms are trained on existing manually labeled data to detect and locate linguistically relevant acoustic properties with high accuracy. Our approach is robust, is designed to handle mis-productions, and overall matches the performance of expert coders. It allows us to analyze a very large dataset of speech errors (containing far more errors than the total in the existing literature), illuminating properties of speech sound distributions previously impossible to reliably observe. We argue that this provides novel evidence that two sources both contribute to deviations in speech errors: planning processes specifying the targets of articulation and articulatory processes specifying the motor movements that execute this plan. These findings illustrate how a much richer picture of speech provides an opportunity to gain novel insights into language processing. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Silent oceans: ocean acidification impoverishes natural soundscapes by altering sound production of the world's noisiest marine invertebrate.

    PubMed

    Rossi, Tullio; Connell, Sean D; Nagelkerken, Ivan

    2016-03-16

    Soundscapes are multidimensional spaces that carry meaningful information for many species about the location and quality of nearby and distant resources. Because soundscapes are the sum of the acoustic signals produced by individual organisms and their interactions, they can be used as a proxy for the condition of whole ecosystems and their occupants. Ocean acidification resulting from anthropogenic CO2 emissions is known to have profound effects on marine life. However, despite the increasingly recognized ecological importance of soundscapes, there is no empirical test of whether ocean acidification can affect biological sound production. Using field recordings obtained from three geographically separated natural CO2 vents, we show that forecasted end-of-century ocean acidification conditions can profoundly reduce the biological sound level and frequency of snapping shrimp snaps. Snapping shrimp were among the noisiest marine organisms and the suppression of their sound production at vents was responsible for the vast majority of the soundscape alteration observed. To assess mechanisms that could account for these observations, we tested whether long-term exposure (two to three months) to elevated CO2 induced a similar reduction in the snapping behaviour (loudness and frequency) of snapping shrimp. The results indicated that the soniferous behaviour of these animals was substantially reduced in both frequency (snaps per minute) and sound level of snaps produced. As coastal marine soundscapes are dominated by biological sounds produced by snapping shrimp, the observed suppression of this component of soundscapes could have important and possibly pervasive ecological consequences for organisms that use soundscapes as a source of information. This trend towards silence could be of particular importance for those species whose larval stages use sound for orientation towards settlement habitats. © 2016 The Author(s).

  17. Silent oceans: ocean acidification impoverishes natural soundscapes by altering sound production of the world's noisiest marine invertebrate

    PubMed Central

    Rossi, Tullio; Connell, Sean D.; Nagelkerken, Ivan

    2016-01-01

    Soundscapes are multidimensional spaces that carry meaningful information for many species about the location and quality of nearby and distant resources. Because soundscapes are the sum of the acoustic signals produced by individual organisms and their interactions, they can be used as a proxy for the condition of whole ecosystems and their occupants. Ocean acidification resulting from anthropogenic CO2 emissions is known to have profound effects on marine life. However, despite the increasingly recognized ecological importance of soundscapes, there is no empirical test of whether ocean acidification can affect biological sound production. Using field recordings obtained from three geographically separated natural CO2 vents, we show that forecasted end-of-century ocean acidification conditions can profoundly reduce the biological sound level and frequency of snapping shrimp snaps. Snapping shrimp were among the noisiest marine organisms and the suppression of their sound production at vents was responsible for the vast majority of the soundscape alteration observed. To assess mechanisms that could account for these observations, we tested whether long-term exposure (two to three months) to elevated CO2 induced a similar reduction in the snapping behaviour (loudness and frequency) of snapping shrimp. The results indicated that the soniferous behaviour of these animals was substantially reduced in both frequency (snaps per minute) and sound level of snaps produced. As coastal marine soundscapes are dominated by biological sounds produced by snapping shrimp, the observed suppression of this component of soundscapes could have important and possibly pervasive ecological consequences for organisms that use soundscapes as a source of information. This trend towards silence could be of particular importance for those species whose larval stages use sound for orientation towards settlement habitats. PMID:26984624

  18. How the owl tracks its prey – II

    PubMed Central

    Takahashi, Terry T.

    2010-01-01

    Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819

  19. Assessment of Hydroacoustic Propagation Using Autonomous Hydrophones in the Scotia Sea

    DTIC Science & Technology

    2010-09-01

    Award No. DE-AI52-08NA28654 Proposal No. BAA08-36 ABSTRACT The remote area of the Atlantic Ocean near the Antarctic Peninsula and the South...hydroacoustic blind spot. To investigate the sound propagation and interferences affected by these landmasses in the vicinity of the Antarctic polar...from large icebergs (near-surface sources) were utilized as natural sound sources. Surface sound sources, e.g., ice-related events, tend to suffer less

  20. Active control of noise on the source side of a partition to increase its sound isolation

    NASA Astrophysics Data System (ADS)

    Tarabini, Marco; Roure, Alain; Pinhede, Cedric

    2009-03-01

    This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method's efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performance and stability were tested with different array configurations, loudspeaker directivities and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with a factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.
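    As a rough illustration of the adaptation rule behind such systems, the following is a minimal single-channel filtered-x LMS sketch. The reference, disturbance, and secondary-path model below are all invented; a real decentralized multichannel system like the one in this record runs one such loop per actuator/microphone pair:

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=0.01):
    """Filtered-x LMS: adapt FIR weights w so that the secondary-source
    output, after passing through the secondary path, cancels d."""
    w = np.zeros(n_taps)                  # control filter weights
    xbuf = np.zeros(n_taps)               # reference history (control filter)
    ybuf = np.zeros(len(s_hat))           # control-output history (sec. path)
    xsbuf = np.zeros(len(s_hat))          # reference history (filtered-x)
    fxbuf = np.zeros(n_taps)              # filtered-reference history
    e = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                      # secondary-source drive signal
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[n] = d[n] + s_hat @ ybuf        # residual at the error microphone
        xsbuf = np.roll(xsbuf, 1); xsbuf[0] = x[n]
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = s_hat @ xsbuf
        w -= mu * e[n] * fxbuf            # gradient-descent weight update
    return e

# Tonal disturbance; the secondary path is modeled as a short delayed FIR
fs, f0 = 1000, 50
t = np.arange(6000) / fs
x = np.sin(2 * np.pi * f0 * t)                # reference signal
d = 0.8 * np.sin(2 * np.pi * f0 * t + 0.3)    # disturbance at the mic
e = fxlms(x, d, s_hat=np.array([0.0, 1.0, 0.5]))
print(np.mean(e[:500] ** 2), np.mean(e[-500:] ** 2))  # residual power drops
```

    The update uses the reference filtered through the secondary-path model, which is what distinguishes FXLMS from plain LMS in active noise control.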

  1. The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing

    2018-03-01

    In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term from previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the deviation of the radiated sound power level is found to be less than 3 dB for the narrowband spectrum and less than 1 dB for the 1/3-octave spectrum. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also to measure the radiated sound power of complicated sources in non-anechoic tanks.
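    The correction-term idea reduces to simple level arithmetic: the known reference source calibrates the tank's transfer at each band, and that per-band correction maps a tank measurement of the unknown source back to a free-field power level. All numbers below are invented for illustration, not values from the paper:

```python
import numpy as np

bands = np.array([100.0, 125.0, 160.0, 200.0])  # 1/3-octave centres, Hz

# Known simple source: free-field power level, and its level in the tank
Lw_ref_free = np.array([140.0, 141.0, 139.5, 142.0])  # dB re 1 pW
Lp_ref_tank = np.array([152.0, 150.5, 151.0, 153.5])  # dB, tank measurement

correction = Lp_ref_tank - Lw_ref_free  # per-band tank "gain"

# Unknown source measured at the same tank positions
Lp_unk_tank = np.array([147.5, 149.0, 146.0, 150.0])
Lw_unk_free = Lp_unk_tank - correction  # estimated free-field power level
print(Lw_unk_free)
```

    The actual technique derives the correction from a normal-mode model of the enclosed field; the subtraction step above is only the final bookkeeping.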

  2. Modeling and analysis of secondary sources coupling for active sound field reduction in confined spaces

    NASA Astrophysics Data System (ADS)

    Montazeri, Allahyar; Taylor, C. James

    2017-10-01

    This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior are solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops the analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of loudspeakers through acoustic modes of the enclosure will change the strength and hence the driving voltage signal applied to the secondary loudspeakers. The practical significance of this model is to provide a better insight on the performance of the sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones are placed in a fraction of wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance depending on the electromechanical properties of the loudspeakers.

  3. Noise pair velocity and range echo location system

    DOEpatents

    Erskine, D.J.

    1999-02-16

    An echo-location method for microwaves, sound and light capable of using incoherent and arbitrary waveforms of wide bandwidth to measure velocity and range (and target size) simultaneously to high resolution is disclosed. Two interferometers having very long and nearly equal delays are used in series with the target interposed. The delays can be longer than the target range of interest. The first interferometer imprints a partial coherence on an initially incoherent source which allows autocorrelation to be performed on the reflected signal to determine velocity. A coherent cross-correlation subsequent to the second interferometer with the source determines a velocity discriminated range. Dithering the second interferometer identifies portions of the cross-correlation belonging to a target apart from clutter moving at a different velocity. The velocity discrimination is insensitive to all slowly varying distortions in the signal path. Speckle in the image of target and antenna lobing due to parasitic reflections is minimal for an incoherent source. An arbitrary source which varies its spectrum dramatically and randomly from pulse to pulse creates a radar elusive to jamming. Monochromatic sources which jigger in frequency from pulse to pulse or combinations of monochromatic sources can simulate some benefits of incoherent broadband sources. Clutter which has a symmetrical velocity spectrum will self-cancel for short wavelengths, such as the apparent motion of ground surrounding target from a sidelooking airborne antenna. 46 figs.

  4. Noise pair velocity and range echo location system

    DOEpatents

    Erskine, David J.

    1999-01-01

    An echo-location method for microwaves, sound and light capable of using incoherent and arbitrary waveforms of wide bandwidth to measure velocity and range (and target size) simultaneously to high resolution. Two interferometers having very long and nearly equal delays are used in series with the target interposed. The delays can be longer than the target range of interest. The first interferometer imprints a partial coherence on an initially incoherent source which allows autocorrelation to be performed on the reflected signal to determine velocity. A coherent cross-correlation subsequent to the second interferometer with the source determines a velocity discriminated range. Dithering the second interferometer identifies portions of the cross-correlation belonging to a target apart from clutter moving at a different velocity. The velocity discrimination is insensitive to all slowly varying distortions in the signal path. Speckle in the image of target and antenna lobing due to parasitic reflections is minimal for an incoherent source. An arbitrary source which varies its spectrum dramatically and randomly from pulse to pulse creates a radar elusive to jamming. Monochromatic sources which jigger in frequency from pulse to pulse or combinations of monochromatic sources can simulate some benefits of incoherent broadband sources. Clutter which has a symmetrical velocity spectrum will self-cancel for short wavelengths, such as the apparent motion of ground surrounding target from a sidelooking airborne antenna.
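    The core idea of cross-correlating a broadband incoherent waveform with its reflection to recover range can be sketched as follows; the sampling rate, delay, and noise level are arbitrary illustrative choices, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000                  # sample rate, Hz
c = 343.0                    # speed of sound in air, m/s
src = rng.standard_normal(fs // 10)       # incoherent broadband waveform

delay = 240                               # round-trip delay, samples
echo = np.concatenate([np.zeros(delay), 0.5 * src])[:len(src)]
echo += 0.1 * rng.standard_normal(len(src))   # additive receiver noise

# Cross-correlation peak gives the round-trip delay, hence the range
xc = np.correlate(echo, src, mode="full")
lag = int(np.argmax(xc)) - (len(src) - 1)
range_m = lag / fs * c / 2
print(lag, range_m)
```

    The patent's full scheme adds the two long-delay interferometers so that velocity can be discriminated as well; this sketch shows only the range-from-correlation step.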

  5. Active Control of Sound Radiation due to Subsonic Wave Scattering from Discontinuities on Thin Elastic Beams.

    NASA Astrophysics Data System (ADS)

    Guigou, Catherine Renee J.

    1992-01-01

    Much progress has been made in recent years in active control of sound radiation from vibrating structures. Reduction of the far-field acoustic radiation can be obtained by directly modifying the response of the structure by applying structural inputs rather than by adding acoustic sources. Discontinuities, which are present in many structures are often important in terms of sound radiation due to wave scattering behavior at their location. In this thesis, an edge or boundary type discontinuity (clamped edge) and a point discontinuity (blocking mass) are analytically studied in terms of sound radiation. When subsonic vibrational waves impinge on these discontinuities, large scattered sound levels are radiated. Active control is then achieved by applying either control forces, which approximate shakers, or pairs of control moments, which approximate piezoelectric actuators, near the discontinuity. Active control of sound radiation from a simply-supported beam is also examined. For a single frequency, the flexural response of the beam subject to an incident wave or an input force (disturbance) and to control forces or control moments is expressed in terms of waves of both propagating and near-field types. The far-field radiated pressure is then evaluated in terms of the structural response, using Rayleigh's formula or a stationary phase approach, depending upon the application. The control force and control moment magnitudes are determined by optimizing a quadratic cost function, which is directly related to the control performance. On determining the optimal control complex amplitudes, these can be resubstituted in the constitutive equations for the system under study and the minimized radiated fields can be evaluated. 
High attenuation in radiated sound power and radiated acoustic pressure is found to be possible when one or two active control actuators are located near the discontinuity; this attenuation is shown to be mostly associated with local changes in beam response near the discontinuity. The effect of the control actuators on the far-field radiated pressure, the wavenumber spectrum, the flexural displacement and the near-field time-averaged intensity and pressure distributions is studied in order to further understand the control mechanisms. The influence of the near-field structural waves is investigated as well. Some experimental results are presented for comparison.
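    The optimization step described above, minimizing a quadratic cost over complex control amplitudes, has a closed-form least-squares solution. A sketch with an invented transfer matrix and disturbance field (not the thesis's actual beam model):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical transfer matrix Z: far-field pressure at 8 angles per unit
# complex amplitude of 2 control actuators, plus a disturbance pressure p_d
Z = rng.standard_normal((8, 2)) + 1j * rng.standard_normal((8, 2))
p_d = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# Minimize J(q) = ||p_d + Z q||^2  ->  q = -(Z^H Z)^{-1} Z^H p_d
q = -np.linalg.solve(Z.conj().T @ Z, Z.conj().T @ p_d)

J0 = float(np.vdot(p_d, p_d).real)          # cost with no control
J = float(np.linalg.norm(p_d + Z @ q) ** 2)  # minimized cost
print(J0, J)
```

    Resubstituting the optimal amplitudes into the field equations, as the thesis describes, is exactly the `p_d + Z q` evaluation in the last lines.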

  6. Global Marine Gravity and Bathymetry at 1-Minute Resolution

    NASA Astrophysics Data System (ADS)

    Sandwell, D. T.; Smith, W. H.

    2008-12-01

    We have developed global gravity and bathymetry grids at 1-minute resolution. Three approaches are used to reduce the error in the satellite-derived marine gravity anomalies. First, we have retracked the raw waveforms from the ERS-1 and Geosat/GM missions, resulting in improvements in range precision of 40% and 27%, respectively. Second, we have used the recently published EGM2008 global gravity model as a reference field to provide a seamless gravity transition from land to ocean. Third, we have used a biharmonic spline interpolation method to construct residual vertical deflection grids. Comparisons between shipboard gravity and the global gravity grid show errors ranging from 2.0 mGal in the Gulf of Mexico to 4.0 mGal in areas with rugged seafloor topography. The largest errors occur on the crests of narrow large seamounts. The bathymetry grid is based on prediction from satellite gravity and available ship soundings. Global soundings were assembled from a wide variety of sources including NGDC/GEODAS, NOAA Coastal Relief, CCOM, IFREMER, JAMSTEC, NSF Polar Programs, UKHO, LDEO, HIG, SIO and numerous miscellaneous contributions. The National Geospatial-Intelligence Agency and other volunteering hydrographic offices within the International Hydrographic Organization provided significant global shallow-water (< 300 m) soundings derived from their nautical charts. All soundings were converted to a common format and were hand-edited in relation to a smooth bathymetric model. Land elevations and shoreline location are based on a combination of SRTM30, GTOPO30, and ICESAT data. A new feature of the bathymetry grid is a matching grid of source identification numbers that enables one to establish the origin of the depth estimate in each grid cell. Both the gravity and bathymetry grids are freely available.

  7. Binaural room simulation

    NASA Technical Reports Server (NTRS)

    Lehnert, H.; Blauert, Jens; Pompetzki, W.

    1991-01-01

    In every-day listening the auditory event perceived by a listener is determined not only by the sound signal that a source emits but also by a variety of environmental parameters. These parameters are the position, orientation and directional characteristics of the sound source, the listener's position and orientation, the geometrical and acoustical properties of surfaces which affect the sound field and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each of the virtual sources emits a certain signal which is correlated but not necessarily identical with the signal emitted by the direct sound source. If source and receiver are not moving, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
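    The final step described, convolving the Binaural Impulse Response with a dry signal, is plain linear filtering. A toy example with an invented two-tap-per-ear impulse response (a real BRIR comes from the room simulation itself):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000
dry = rng.standard_normal(fs)            # "dry" (anechoic) source signal

# Toy binaural impulse response: direct sound plus one reflection per ear,
# with an interaural delay so the source appears off to one side
brir_l = np.zeros(256); brir_l[0] = 1.0;  brir_l[180] = 0.3
brir_r = np.zeros(256); brir_r[12] = 0.7; brir_r[200] = 0.3

# One convolution per ear yields the headphone signals
left = np.convolve(dry, brir_l)
right = np.convolve(dry, brir_r)
print(left.shape, right.shape)
```

    Because the system is linear and time-invariant, these two convolutions carry all the spatial information the simulated room imparted to the impulse responses.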

  8. Broad band sound from wind turbine generators

    NASA Technical Reports Server (NTRS)

    Hubbard, H. H.; Shepherd, K. P.; Grosveld, F. W.

    1981-01-01

    Brief descriptions are given of the various types of large wind turbines and their sound characteristics. Candidate sources of broadband sound are identified and are rank ordered for a large upwind configuration wind turbine generator for which data are available. The rotor is noted to be the main source of broadband sound which arises from inflow turbulence and from the interactions of the turbulent boundary layer on the blade with its trailing edge. Sound is radiated about equally in all directions but the refraction effects of the wind produce an elongated contour pattern in the downwind direction.

  9. Effects of sound source directivity on auralizations

    NASA Astrophysics Data System (ADS)

    Sheets, Nathan W.; Wang, Lily M.

    2002-05-01

    Auralization, the process of rendering audible the sound field in a simulated space, is a useful tool in the design of acoustically sensitive spaces. The auralization depends on the calculation of an impulse response between a source and a receiver which have certain directional behavior. Many auralizations created to date have used omnidirectional sources; the effects of source directivity on auralizations remain relatively unexplored. To examine if and how the directivity of a sound source affects the acoustical results obtained from a room, we used directivity data for three sources in a room acoustic modeling program called Odeon. The three sources are: violin, piano, and human voice. The results from using directional data are compared to those obtained using omnidirectional source behavior, both through objective measure calculations and subjective listening tests.

  10. Mercury in Sediment, Water, and Biota of Sinclair Inlet, Puget Sound, Washington, 1989-2007

    USGS Publications Warehouse

    Paulson, Anthony J.; Keys, Morgan E.; Scholting, Kelly L.

    2010-01-01

    Historical records of mercury contamination in dated sediment cores from Sinclair Inlet are coincident with activities at the U.S. Navy Puget Sound Naval Shipyard; peak total mercury concentrations occurred around World War II. After World War II, better metallurgical management practices and environmental regulations reduced mercury contamination, but total mercury concentrations in surface sediment of Sinclair Inlet have decreased slowly because of the low rate of sedimentation relative to the vertical mixing within sediment. The slopes of linear regressions between the total mercury and total organic carbon concentrations of sediment offshore of Puget Sound urban areas were the best indicator of general mercury contamination above pre-industrial levels. Prior to the 2000-2001 remediation, this indicator placed Sinclair Inlet in the tier of estuaries with the highest level of mercury contamination, along with Bellingham Bay in northern Puget Sound and Elliott Bay near Seattle. This indicator also suggests that the 2000-2001 remediation dredging had a significant positive effect on Sinclair Inlet as a whole. In 2007, about 80 percent of the area of the Bremerton naval complex had sediment total mercury concentrations within about 0.5 milligrams per kilogram of the Sinclair Inlet regression. Three areas adjacent to the waterfront of the Bremerton naval complex have total mercury concentrations above this range and indicate a possible terrestrial source from waterfront areas of the Bremerton naval complex. Total mercury concentrations in unfiltered Sinclair Inlet marine waters are about three times higher than those of central Puget Sound, but the small number of samples and complex physical and geochemical processes make it difficult to interpret the geographical distribution of mercury in marine waters from Sinclair Inlet.
Total mercury concentrations in various biota species were compared among geographical locations and included data from composite samples, individual specimens, and caged mussels. Total mercury concentrations in muscle and liver of English sole from Sinclair Inlet ranked in the upper quarter and upper third, respectively, of Puget Sound locations. For other species (for example, Chinook salmon), concentrations from Sinclair Inlet were within the mid-range of locations. Total mercury concentrations of the long-lived, higher-trophic-level rockfish in composites and individual specimens from Sinclair Inlet tended to be the highest in Puget Sound. For a given size, sand sole, graceful crab, staghorn sculpin, surf perch, and sea cucumber individuals collected from Sinclair Inlet had higher total mercury concentrations than individuals collected from non-urban estuaries. Total mercury concentrations in individual English sole and ratfish were not significantly different from those in individuals of various sizes collected from either urban or non-urban estuaries in Puget Sound. English sole collected from Sinclair Inlet after the 2000-2001 dredging appear to have lower total mercury concentrations than those collected before the dredging project (1996). The highest total mercury concentrations of mussels caged in 2002 were not within the Bremerton naval complex, but within the Port Orchard Marina and inner Sinclair Inlet.

  11. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 5 2014-10-01 2014-10-01 false Location and operation of sound level measurement systems; stationary test. 325.57 Section 325.57 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL REGULATIONS COMPLIANCE WITH INTERSTATE MOTOR...

  12. Effects of Variability Associated with the Antarctic Circumpolar Current on Sound Propagation in the Ocean

    DTIC Science & Technology

    2008-09-01

    showing shot locations (circles) and IMS hydrophone station locations (triangles), superimposed on a map of group velocities derived using average fall...E. McDonald (1991). Perth-Bermuda sound propagation (1960): Adiabatic mode interpretation, J. Acoust. Soc. Am. 90: 2586–2594. Jensen, F. B., W. A

  13. 49 CFR 325.57 - Location and operation of sound level measurement systems; stationary test.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the vehicle at an angle that is consistent with the recommendation of the system's manufacturer. If... systems; stationary test. 325.57 Section 325.57 Transportation Other Regulations Relating to...; Stationary Test § 325.57 Location and operation of sound level measurement systems; stationary test. (a) The...

  14. Development of a directivity-controlled piezoelectric transducer for sound reproduction

    NASA Astrophysics Data System (ADS)

    Bédard, Magella; Berry, Alain

    2008-04-01

    Present sound reproduction systems do not attempt to simulate the spatial radiation of musical instruments, or sound sources in general, even though the spatial directivity has a strong impact on the psychoacoustic experience. A transducer consisting of 4 piezoelectric elemental sources made from curved PVDF films is used to generate a target directivity pattern in the horizontal plane, in the frequency range of 5-20 kHz. The vibratory and acoustical response of an elemental source is addressed, both theoretically and experimentally. Two approaches to synthesize the input signals to apply to each elemental source are developed in order to create a prescribed, frequency-dependent acoustic directivity. The circumferential Fourier decomposition of the target directivity provides a compromise between the magnitude and the phase reconstruction, whereas the minimization of a quadratic error criterion provides a best magnitude reconstruction. This transducer can improve sound reproduction by introducing the spatial radiation aspect of the original source at high frequency.
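    The two synthesis approaches mentioned, circumferential Fourier decomposition and quadratic-error minimization, can be sketched with invented target and elemental-source directivities (the paper's PVDF sources and 5-20 kHz band are not modeled here):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
target = 1.0 + 0.8 * np.cos(theta) + 0.3 * np.cos(2 * theta)  # toy pattern

# Approach 1: circumferential Fourier decomposition, keeping orders |m| <= 2
c = np.fft.fft(target) / theta.size
recon = np.real(sum(c[m] * np.exp(1j * m * theta) for m in range(-2, 3)))
print(np.max(np.abs(recon - target)))  # exact here: target has order <= 2

# Approach 2: minimize the quadratic error |G q - target|^2, where each
# column of G is the directivity of one elemental source (invented patterns)
phases = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
G = 1.0 + 0.5 * np.cos(theta[:, None] - phases[None, :])
q = np.linalg.lstsq(G, target, rcond=None)[0]
print(np.linalg.norm(G @ q - target))  # residual: G spans orders 0 and 1 only
```

    The residual in the second approach illustrates the trade-off the abstract describes: least squares gives the best magnitude fit the sources can achieve, while the Fourier route balances magnitude and phase order by order.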

  15. Source localization of turboshaft engine broadband noise using a three-sensor coherence method

    NASA Astrophysics Data System (ADS)

    Blacodon, Daniel; Lewy, Serge

    2015-03-01

    Turboshaft engines can become the main source of helicopter noise at takeoff. Inlet radiation mainly comes from the compressor tones, but aft radiation is more intricate: turbine tones usually are above the audible frequency range and do not contribute to the weighted sound levels; the jet is secondary and radiates low noise levels. A broadband component is the most annoying, but its sources are not well known (it is called internal or core noise). The present study was made in the framework of the European project TEENI (Turboshaft Engine Exhaust Noise Identification). Its main objective was to localize the broadband sources in order to better reduce them. Several diagnostic techniques were implemented by the various TEENI partners. At ONERA, a first attempt at separating sources was made in the past with Turbomeca using a three-signal coherence method (TSM) to reject background non-acoustic noise. The main difficulty when using TSM is assessing the frequency range where the results are valid. This drawback has been circumvented in the TSM implemented in TEENI. Measurements were made on a highly instrumented Ardiden turboshaft engine in the Turbomeca open-air test bench. Two engine powers (approach and takeoff) were selected to apply TSM. Two internal pressure probes were located in various cross-sections, either behind the combustion chamber (CC), the high-pressure turbine (HPT), the free-turbine first stage (TL), or in four nozzle sections. The third transducer was a far-field microphone located around the maximum of radiation, at 120° from the intake centerline. The key result is that coherence increases from CC to HPT and TL, then decreases in the nozzle up to the exit. Pressure fluctuations from HPT and TL are very coherent with the far-field acoustic spectra up to 700 Hz.
They are thus the main acoustic source and can be attributed to indirect combustion noise (accuracy decreases above 700 Hz because coherence is lower, but far-field sound spectra also are much lower above 700 Hz).
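    The three-signal idea, using cross-spectra among two internal probes and a far-field microphone so that uncorrelated sensor noise drops out, can be sketched as follows. The signals and levels are synthetic, and this is the generic three-signal estimator, not ONERA's exact implementation:

```python
import numpy as np

def cross_spectrum(x, y, nseg=256):
    """Segment-averaged cross-spectrum (no window, for brevity)."""
    m = len(x) // nseg
    X = np.fft.rfft(x[:m * nseg].reshape(m, nseg), axis=1)
    Y = np.fft.rfft(y[:m * nseg].reshape(m, nseg), axis=1)
    return (np.conj(X) * Y).mean(axis=0)

rng = np.random.default_rng(0)
n = 1 << 16
s = rng.standard_normal(n)                   # common acoustic source
x1 = s + 0.8 * rng.standard_normal(n)        # internal probe 1 + local noise
x2 = s + 0.8 * rng.standard_normal(n)        # internal probe 2 + local noise
x3 = 0.5 * s + 0.8 * rng.standard_normal(n)  # far-field mic, attenuated source

g12 = cross_spectrum(x1, x2)
g13 = cross_spectrum(x1, x3)
g23 = cross_spectrum(x2, x3)
g11 = cross_spectrum(x1, x1).real

# Uncorrelated noise cancels in the cross-spectra, so this ratio estimates
# the power of the common source as seen at probe 1
coherent = np.abs(g12 * g13 / g23)
print(coherent.mean() / g11.mean())  # ~ 1/(1 + 0.64): local noise rejected
```

    Each sensor's own noise never appears in a cross-spectrum between two different sensors, which is why the combination isolates the common (acoustic) component.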

  16. Callback response of dugongs to conspecific chirp playbacks.

    PubMed

    Ichikawa, Kotaro; Akamatsu, Tomonari; Shinke, Tomio; Adulyanukosol, Kanjana; Arai, Nobuaki

    2011-06-01

    Dugongs (Dugong dugon) produce bird-like calls such as chirps and trills. The vocal responses of dugongs to playbacks of several acoustic stimuli were investigated. Animals were exposed to four different playback stimuli: a recorded chirp from a wild dugong, a synthesized down-sweep sound, a synthesized constant-frequency sound, and silence. Wild dugongs vocalized more frequently after playback of broadcast chirps than after constant-frequency sounds or silence. The down-sweep sound also elicited more vocal responses than did silence. No significant difference was found between the broadcast chirps and the down-sweep sound. The ratio of wild dugong chirps to all calls and the dominant frequencies of the wild dugong calls were significantly higher during playbacks of broadcast chirps, down-sweep sounds, and constant-frequency sounds than during silence. The source level and duration of dugong chirps increased significantly as signaling distance increased. No significant correlation was found between signaling distance and the source level of trills. These results show that dugongs vocalize in response to playbacks of frequency-modulated signals and suggest that the source level of dugong chirps may be manipulated to compensate for transmission loss between the source and receiver. This study provides the first behavioral observations revealing the function of dugong chirps. © 2011 Acoustical Society of America

  17. 33 CFR 165.1301 - Puget Sound and Adjacent Waters in Northwestern Washington-Regulated Navigation Area.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... defined at the time by Puget Sound Vessel Traffic Service. (b) Nothing in this section shall be construed... Sound Vessel Traffic Service (PSVTS) VHF-FM radio frequency for the area in which the vessel is... specific locations by Puget Sound Vessel Traffic Service. They are intended to enhance vessel traffic...

  18. 33 CFR 165.1301 - Puget Sound and Adjacent Waters in Northwestern Washington-Regulated Navigation Area.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... defined at the time by Puget Sound Vessel Traffic Service. (b) Nothing in this section shall be construed... Sound Vessel Traffic Service (PSVTS) VHF-FM radio frequency for the area in which the vessel is... specific locations by Puget Sound Vessel Traffic Service. They are intended to enhance vessel traffic...

  19. 33 CFR 165.1301 - Puget Sound and Adjacent Waters in Northwestern Washington-Regulated Navigation Area.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... defined at the time by Puget Sound Vessel Traffic Service. (b) Nothing in this section shall be construed... Sound Vessel Traffic Service (PSVTS) VHF-FM radio frequency for the area in which the vessel is... specific locations by Puget Sound Vessel Traffic Service. They are intended to enhance vessel traffic...

  20. 75 FR 8566 - Safety Zones; Annual Firework Displays Within the Captain of the Port, Puget Sound Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-25

    ...-AA00 Safety Zones; Annual Firework Displays Within the Captain of the Port, Puget Sound Area of... at various locations the Captain of the Port, Puget Sound Area of Responsibility (AOR). When these... prohibited unless authorized by the Captain of the Port, Puget Sound or Designated Representative. DATES...

  1. Multichannel feedforward control schemes with coupling compensation for active sound profiling

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, Jaime A.; Desmet, Wim; de Oliveira, Leopoldo P. R.

    2017-05-01

    Active sound profiling comprises control techniques that enable the equalization, rather than the mere reduction, of acoustic noise. Challenges arise when trying to achieve distinct targeted sound profiles simultaneously at multiple locations, e.g., within a vehicle cabin. This paper introduces distributed multichannel control schemes for independently tailoring structure-borne sound reaching a number of locations within a cavity. The proposed techniques address the cross interactions amongst feedforward active sound profiling units, which compensate for interference from the primary sound at each location of interest by exchanging run-time data amongst the control units while attaining the desired control targets. Computational complexity, convergence, and stability of the proposed multichannel schemes are examined in light of the physical system on which they are implemented. The tuning performance of the proposed algorithms is benchmarked against centralized and purely decentralized control schemes through computer simulations on a simplified numerical model, which was also subjected to plant magnitude variations. Provided that the representation of the plant is accurate enough, the proposed multichannel control schemes are shown to be the only ones that properly deliver targeted active sound profiling at each error sensor location. Experimental results in a 1:3-scaled vehicle mock-up further demonstrate that the proposed schemes attain reductions of more than 60 dB on periodic disturbances at a number of positions while resolving cross-channel interference. Moreover, when the sensor/actuator placement is defective at a given frequency, including a regularization parameter in the cost function does not hinder the operation of the proposed compensation schemes and assures their stability, at the expense of some control performance.
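
Feedforward units of this kind typically adapt by a filtered-x LMS (FxLMS) recursion on a reference filtered through a model of the secondary path. A single-channel sketch (all paths, lengths and step sizes are hypothetical; the paper's multichannel coupling compensation is not reproduced here):

```python
import numpy as np

# Single-channel FxLMS sketch with assumed FIR paths; the control target
# here is silence, the simplest sound-profiling goal.
fs, n = 2000, 4000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100 * t)      # reference: tonal disturbance
P = np.array([0.0, 0.8, 0.4])        # primary path (assumed FIR)
S = np.array([0.0, 0.6])             # secondary path (assumed known)
d = np.convolve(x, P)[:n]            # disturbance at the error sensor
xf = np.convolve(x, S)[:n]           # reference filtered by the S model

L, mu = 16, 0.01
w = np.zeros(L)                      # adaptive control filter
xbuf, xfbuf = np.zeros(L), np.zeros(L)
ybuf = np.zeros(len(S))
err = np.zeros(n)
for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[i]
    y = w @ xbuf                     # control output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    err[i] = d[i] + S @ ybuf         # residual at the error sensor
    w -= mu * err[i] * xfbuf         # FxLMS gradient step
```

The residual power collapses as the filter converges; in a multichannel setting each unit additionally sees the other units' secondary fields, which is the coupling the paper's schemes compensate.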

  2. Atmospheric Propagation

    NASA Technical Reports Server (NTRS)

    Embleton, Tony F. W.; Daigle, Gilles A.

    1991-01-01

    Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
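
For a point source the first two mechanisms combine as L(r) = L(1 m) − 20·log10(r) − α·r, with the absorption coefficient α strongly frequency dependent and the spreading term frequency independent. A small sketch with rough, assumed absorption values (not ISO 9613 coefficients):

```python
import math

def received_level(level_at_1m, r_m, alpha_db_per_km):
    """Spherical spreading (-6 dB per distance doubling) plus molecular
    absorption (linear in range, strongly frequency dependent)."""
    spreading = 20.0 * math.log10(r_m)           # dB re the 1 m level
    absorption = alpha_db_per_km * r_m / 1000.0  # dB
    return level_at_1m - spreading - absorption

# Illustrative (assumed) absorption coefficients in dB/km for three bands,
# evaluated at 500 m from a source of 100 dB at 1 m:
levels = {f: received_level(100.0, 500.0, a)
          for f, a in [(125, 0.5), (1000, 5.0), (4000, 30.0)]}
```

The spreading loss is identical for every band, while absorption makes the high-frequency bands fall off much faster with range.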

  3. Changes in acoustic features and their conjunctions are processed by separate neuronal populations.

    PubMed

    Takegata, R; Huotilainen, M; Rinne, T; Näätänen, R; Winkler, I

    2001-03-05

    We investigated the relationship between the neuronal populations involved in detecting change in two acoustic features and their conjunction. Equivalent current dipole (ECD) models of the magnetic mismatch negativity (MMNm) generators were calculated for infrequent changes in pitch, perceived sound source location, and the conjunction of these two features. All of these three changes elicited MMNms that were generated in the vicinity of auditory cortex. The location of the ECD best describing the MMNm to the conjunction deviant was anterior to those for the MMNm responses elicited by either one of the constituent features. The present data thus suggest that at least partially separate neuronal populations are involved in detecting change in acoustic features and feature conjunctions.

  4. How effectively do horizontal and vertical response strategies of long-finned pilot whales reduce sound exposure from naval sonar?

    PubMed

    Wensveen, Paul J; von Benda-Beckmann, Alexander M; Ainslie, Michael A; Lam, Frans-Peter A; Kvadsheim, Petter H; Tyack, Peter L; Miller, Patrick J O

    2015-05-01

    The behaviour of a marine mammal near a noise source can modulate the sound exposure it receives. We demonstrate that two long-finned pilot whales both surfaced in synchrony with consecutive arrivals of multiple sonar pulses. We then assess the effect of surfacing and other behavioural response strategies on the received cumulative sound exposure levels and maximum sound pressure levels (SPLs) by modelling realistic spatiotemporal interactions of a pilot whale with an approaching source. Under the propagation conditions of our model, some response strategies observed in the wild were effective in reducing received levels (e.g. movement perpendicular to the source's line of approach), but others were not (e.g. switching from deep to shallow diving; synchronous surfacing after maximum SPLs). Our study exemplifies how simulations of source-whale interactions guided by detailed observational data can improve our understanding of the motivations behind behavioural responses observed in the wild (e.g. reducing sound exposure, following prey movement). Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Parasitoid flies exploiting acoustic communication of insects-comparative aspects of independent functional adaptations.

    PubMed

    Lakes-Harlan, Reinhard; Lehmann, Gerlind U C

    2015-01-01

    Two taxa of parasitoid Diptera have independently evolved tympanal hearing organs to locate sound-producing host insects. Here we review and compare functional adaptations in both groups of parasitoids, Ormiini and Emblemasomatini. Tympanal organs in both groups originate from a common precursor organ and are somewhat similar in morphology and physiology. In terms of functional adaptations, the hearing thresholds are largely adapted to the frequency spectra of the calling songs of the hosts. The large host ranges of some parasitoids indicate that their neuronal filters for the temporal patterns of the calling songs are broader than those found in intraspecific communication. For host localization, the night-active Ormia ochracea and the day-active Emblemasoma auditrix are able to locate a sound source precisely in space. Phonotaxis involves flight and walking phases: O. ochracea approaches hosts during flight, while E. auditrix employs intermediate landings and re-orientation, apparently separating azimuthal and vertical angles. The consequences of parasitoid pressure for signal evolution and intraspecific communication of the host species are discussed. This natural selection pressure might have led to different avoidance strategies in the hosts: silent males in crickets, shorter signals in tettigoniids, and fluctuating population abundances in cicadas.

  6. An experimental investigation of velocity fields in divergent glottal models of the human vocal tract

    NASA Astrophysics Data System (ADS)

    Erath, Byron D.; Plesniak, Michael W.

    2005-09-01

    In speech, sound production arises from fluid-structure interactions within the larynx as well as viscous flow phenomena that are most likely to occur during the divergent orientation of the vocal folds. Of particular interest are the flow mechanisms that influence the location of flow separation points on the vocal fold walls. Physiologically scaled pulsatile flow fields in static divergent glottal models at 7.5 times real size were investigated. Three divergence angles were examined using phase-averaged particle image velocimetry (PIV). The pulsatile glottal jet exhibited a bi-modal stability toward both glottal walls, although there was significant variance in the angle at which the jet deflected from the midline. Attachment of the jet to the glottal model walls (the Coanda effect) occurred when the pulsatile velocity was at a maximum and the acceleration of the waveform was zero. The location of the separation and reattachment points of the flow from the glottal models was a function of the velocity waveform and the divergence angle. Acoustic analogies show that a dipole sound source contribution arising from the fluid interaction (Coanda jet) with the vocal fold walls is expected. [Work funded by NIH Grant RO1 DC03577.]

  7. USAF Bioenvironmental Noise Data Handbook. Volume 160: KC-10A aircraft, near and far-field noise

    NASA Astrophysics Data System (ADS)

    Powell, R. G.

    1982-09-01

    The USAF KC-10A aircraft is an advanced tanker/cargo aircraft powered by three CF6-50C2 turbofan engines. This report provides measured and extrapolated data defining the bioacoustic environments produced by this aircraft operating on a concrete runup pad for eight engine/power configurations. Near-field data are reported for one location in a wide variety of physical and psychoacoustic measures: overall and band sound pressure levels, C-weighted and A-weighted sound levels, preferred speech interference levels, perceived noise levels, and limiting times for total daily exposure of personnel with and without standard Air Force ear protectors. Far-field data measured at 15 locations are normalized to standard meteorological conditions and extrapolated from 75-8000 meters to derive sets of equal-value contours for these same seven acoustic measures as functions of angle and distance from the source. Refer to Volume 1 of this handbook, USAF Bioenvironmental Noise Data Handbook, Vol 1: Organization, Content and Application, AMRL-TR-75-50(1) 1975, for discussion of the objective and design of the handbook, the types of data presented, measurement procedures, instrumentation, data processing, definitions of quantities, symbols, equations, applications, limitations, etc.

  8. Simulated seal scarer sounds scare porpoises, but not seals: species-specific responses to 12 kHz deterrence sounds

    PubMed Central

    Hermannsen, Line; Beedholm, Kristian

    2017-01-01

    Acoustic harassment devices (AHDs) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating those of an AHD (Lofitech), but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted on porpoises and 13 on seals. Porpoises exhibited avoidance reactions out to ranges of 525 m from the sound source. By contrast, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for the application of AHDs in multi-species habitats, as the sound levels required to deter less sensitive species (seals) can lead to excessively large deterrence ranges for more sensitive species (porpoises). PMID:28791155

  9. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step toward understanding potential environmental impacts. Underwater recordings were obtained from passing vessels of different sizes and other underwater sound sources in both static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where the sound of flowing water is included in background measurements. The vessels measured ranged from a small fishing boat with a 60 HP outboard motor to an 18-unit barge train being pushed upstream by a tugboat. As expected, large vessels with large engines created the highest sound levels, many times greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances, using both spherical and cylindrical sound attenuation functions, suggests that the spherical model more closely approximates observed values.
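
The two attenuation functions differ only in the spreading coefficient: transmission loss is 20·log10(r) for spherical spreading and 10·log10(r) for cylindrical spreading. A minimal sketch of the comparison (ranges are illustrative, not the study's measurement distances):

```python
import math

# Spherical vs cylindrical spreading: 20 vs 10 dB per decade of range.
def tl_spherical(r_m):
    return 20.0 * math.log10(r_m)

def tl_cylindrical(r_m):
    return 10.0 * math.log10(r_m)

# Predicted level drop between two ranges from the same source:
r1, r2 = 10.0, 100.0
drop_sph = tl_spherical(r2) - tl_spherical(r1)  # 20 dB per decade
drop_cyl = tl_cylindrical(r2) - tl_cylindrical(r1)  # 10 dB per decade
```

Comparing measured drops between ranges against these two slopes is how one model can be judged the closer fit, as the study does in favour of spherical spreading.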

  10. Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section

    NASA Technical Reports Server (NTRS)

    Brooks, T. F.; Scheiman, J.; Silcox, R. J.

    1976-01-01

    Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.

  11. Varying sediment sources (Hudson Strait, Cumberland Sound, Baffin Bay) to the NW Labrador Sea slope between and during Heinrich events 0 to 4

    USGS Publications Warehouse

    Andrews, John T.; Barber, D.C.; Jennings, A.E.; Eberl, D.D.; Maclean, B.; Kirby, M.E.; Stoner, J.S.

    2012-01-01

    Core HU97048-007PC was recovered from the continental Labrador Sea slope at a water depth of 945 m, 250 km seaward from the mouth of Cumberland Sound, and 400 km north of Hudson Strait. Cumberland Sound is a structural trough partly floored by Cretaceous mudstones and Paleozoic carbonates. The record extends from ∼10 to 58 ka. On-board logging revealed a complex series of lithofacies, including buff-colored detrital carbonate-rich sediments [Heinrich (H)-events] frequently bracketed by black facies. We investigate the provenance of these facies using quantitative X-ray diffraction on drill-core samples from Paleozoic and Cretaceous bedrock from the SE Baffin Island Shelf, and on the < 2-mm sediment fraction in a transect of five cores from Cumberland Sound to the NW Labrador Sea. A sediment unmixing program was used to discriminate between sediment sources, which included dolomite-rich sediments from Baffin Bay, calcite-rich sediments from Hudson Strait and discrete sources from Cumberland Sound. Results indicated that the bulk of the sediment was derived from Cumberland Sound, but Baffin Bay contributed to sediments coeval with H-0 (Younger Dryas), whereas Hudson Strait was the source during H-events 1–4. Contributions from the Cretaceous outcrops within Cumberland Sound bracket H-events, thus both leading and lagging Hudson Strait-sourced H-events.

  12. Peripheral mechanisms for vocal production in birds - differences and similarities to human speech and singing.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-10-01

    Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. 
All rights reserved.

  13. The auditory P50 component to onset and offset of sound

    PubMed Central

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi

    2008-01-01

    Objective: The auditory event-related potential (ERP) component P50 to sound onset and offset has been reported to be similar for both, but its magnetic homologue has been reported absent to sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of silent intervals of 0.5 s (gaps) appearing randomly in otherwise continuous white noise, and compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates for the clicks and the noise onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks, while to noise offsets it was absent. P50 latency was similar for noise onsets (56 ms) and clicks (53 ms). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher for noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: In contrast to the distinct P50 to sound onset and to clicks, P50 to sound offset is absent, and the active intracranial sources differ. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene; sound offset does not introduce a new sound, hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent at offset. PMID:18055255

  14. 33 CFR 149.585 - What are the requirements for sound signals?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...) Each sound signal must be: (1) Located at least 10 feet, but not more than 150 feet, above mean high...

  15. 33 CFR 149.585 - What are the requirements for sound signals?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...) Each sound signal must be: (1) Located at least 10 feet, but not more than 150 feet, above mean high...

  16. 33 CFR 149.585 - What are the requirements for sound signals?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...) Each sound signal must be: (1) Located at least 10 feet, but not more than 150 feet, above mean high...

  17. 33 CFR 149.585 - What are the requirements for sound signals?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... complex must have a sound signal, approved under subpart 67.10 of this chapter, that has a 2-mile (3...) Each sound signal must be: (1) Located at least 10 feet, but not more than 150 feet, above mean high...

  18. Blind separation of incoherent and spatially disjoint sound sources

    NASA Astrophysics Data System (ADS)

    Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter

    2016-11-01

    Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
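
The decorrelation step amounts to an eigendecomposition (PCA) of the measured cross-spectral matrix. A minimal real-valued numpy illustration with two incoherent sources of disjoint spatial support (array geometry and signals are synthetic, chosen to satisfy the paper's sufficient condition):

```python
import numpy as np

# Two incoherent sources with disjoint spatial supports, seen by an
# 8-microphone array. Because they are both statistically AND spatially
# orthogonal, the dominant eigenvectors of the covariance (here a real
# stand-in for the cross-spectral matrix) recover the true source
# distributions up to scale and sign.
rng = np.random.default_rng(0)
nmic, nsnap = 8, 2000
a1 = np.array([1, 1, 1, 1, 0, 0, 0, 0], float)   # support of source 1
a2 = np.array([0, 0, 0, 0, 1, 1, 1, 1], float)   # disjoint support of source 2
s1 = rng.standard_normal(nsnap)                  # incoherent source signals
s2 = 0.5 * rng.standard_normal(nsnap)
p = np.outer(a1, s1) + np.outer(a2, s2)          # array pressures (nmic x nsnap)
Spp = p @ p.T / nsnap                            # covariance matrix
vals, vecs = np.linalg.eigh(Spp)                 # PCA via eigendecomposition
v1, v2 = vecs[:, -1], vecs[:, -2]                # dominant "virtual sources"
```

If the supports overlapped, the eigenvectors would instead return mixtures of the two sources, which is exactly the ambiguity the paper's spatial-orthogonality criterion removes.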

  19. On the temporal window of auditory-brain system in connection with subjective responses

    NASA Astrophysics Data System (ADS)

    Mouri, Kiminori

    2003-08-01

    The human auditory-brain system processes information extracted from the autocorrelation function (ACF) of the source signal and the interaural cross-correlation function (IACF) of the binaural sound signals, which are associated with the left and right cerebral hemispheres, respectively. The purpose of this dissertation is to determine the desirable temporal window (2T: integration interval) for the ACF and IACF mechanisms. For the ACF mechanism, the change of Φ(0), i.e., the power of the ACF, was associated with the change of loudness, and it is shown that the recommended temporal window is about 30(τe)min [s]. The value of (τe)min is the minimum effective duration of the running ACF of the source signal. It is worth noting from the EEG experiment that the most preferred delay time of the first reflection is determined by the piece indicating (τe)min in the source signal. For the IACF mechanism, the temporal window is determined as follows: the measured range of τIACC corresponding to the subjective angle of a moving image sound depends on the temporal window. Here, the moving image was simulated by two loudspeakers located at +/-20° in the horizontal plane, alternately reproducing amplitude-modulated band-limited noise. The temporal window is found to range from 0.03 to 1 [s] for modulation frequencies below 0.2 Hz. Thesis advisor: Yoichi Ando Copies of this thesis written in English can be obtained from Kiminori Mouri, 5-3-3-1110 Harayama-dai, Sakai city, Osaka 590-0132, Japan. E-mail address: km529756@aol.com
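
The effective duration τe of the running ACF can be illustrated with a crude estimator: the first local maximum of the normalized ACF envelope that falls below 0.1 (−10 dB). This simplifies Ando's decay-fit definition; the signals, frame length, and threshold handling below are all illustrative:

```python
import numpy as np

def norm_acf(frame):
    """Normalized autocorrelation of one analysis frame, phi(0) = 1."""
    f = frame - frame.mean()
    acf = np.correlate(f, f, mode="full")[len(f) - 1:]
    return acf / acf[0]

def tau_e(frame, fs):
    """Crude effective duration: first envelope peak of |phi| below 0.1."""
    phi = np.abs(norm_acf(frame))
    for i in range(1, len(phi) - 1):
        if phi[i] >= phi[i - 1] and phi[i] >= phi[i + 1] and phi[i] < 0.1:
            return i / fs
    return len(phi) / fs

fs = 4000
t = np.arange(fs) / fs
noise = np.random.default_rng(0).standard_normal(fs)  # broadband frame
tone = np.sin(2 * np.pi * 200 * t)                    # periodic frame
```

With these signals the broadband noise yields a τe well under a millisecond, while the tone's ACF envelope decays only through the finite frame, giving a τe near the frame duration; the recommended window 30(τe)min scales accordingly.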

  20. Multisensory processing of naturalistic objects in motion: a high-density electrical mapping and source estimation study.

    PubMed

    Senkowski, Daniel; Saint-Amour, Dave; Kelly, Simon P; Foxe, John J

    2007-07-01

    In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects in motion as reflected in event-related potentials (ERPs). A randomized stream of naturalistic multisensory-audiovisual (AV), unisensory-auditory (A), and unisensory-visual (V) "splash" clips (i.e., a drop falling and hitting a water surface) was presented among non-naturalistic abstract motion stimuli. The visual clip onset preceded the "splash" onset by 100 ms for multisensory stimuli. For naturalistic objects early multisensory integration effects beginning 120-140 ms after sound onset were observed over posterior scalp, with distributed sources localized to occipital cortex, temporal lobule, insular, and medial frontal gyrus (MFG). These effects, together with longer latency interactions (210-250 and 300-350 ms) found in a widespread network of occipital, temporal, and frontal areas, suggest that naturalistic objects in motion are processed at multiple stages of multisensory integration. The pattern of integration effects differed considerably for non-naturalistic stimuli. Unlike naturalistic objects, no early interactions were found for non-naturalistic objects. The earliest integration effects for non-naturalistic stimuli were observed 210-250 ms after sound onset including large portions of the inferior parietal cortex (IPC). As such, there were clear differences in the cortical networks activated by multisensory motion stimuli as a consequence of the semantic relatedness (or lack thereof) of the constituent sensory elements.

  1. Interaction of Sound from Supersonic Jets with Nearby Structures

    NASA Technical Reports Server (NTRS)

    Fenno, C. C., Jr.; Bayliss, A.; Maestrello, L.

    1997-01-01

    A model of sound generated in an ideally expanded supersonic (Mach 2) jet is solved numerically. Two configurations are considered: (1) a free jet and (2) an installed jet with a nearby array of flexible aircraft-type panels. In the latter case the panels vibrate in response to loading by sound from the jet, and the full coupling between the panels and the jet is considered, accounting for panel response and radiation. The long-time behavior of the jet is considered. Results are presented for the near- and far-field disturbances, the far-field pressure, and the vibration of and radiation from the panels. Panel response depends crucially on the location of the panels. Panels located upstream of the Mach cone are subject to low-level, nearly continuous spectral excitation and consequently exhibit a low-level, relatively continuous spectral response. In contrast, panels located within the Mach cone are subject to significant loading due to the intense Mach wave radiation of sound and exhibit a large, relatively peaked spectral response centered around the peak frequency of sound radiation. The panels radiate in a similar fashion to the sound in the jet, in particular exhibiting a relatively peaked spectral response at approximately the Mach angle from the bounding wall.

  2. Characteristics of the swallowing sounds recorded in the ear, nose and on trachea.

    PubMed

    Sarraf-Shirazi, Samaneh; Baril, Jonathan-F; Moussavi, Zahra

    2012-08-01

    The various malfunctions and difficulties of the swallowing mechanism call for a range of diagnostic techniques. Swallowing sounds recorded from the trachea have been suggested as a noninvasive method of swallowing assessment. However, acquiring signals from the trachea can be difficult for those with loose skin. The objective of this pilot study was to explore the viability of the ear and nose as alternative locations for recording swallowing sounds. We recorded the swallowing and breathing sounds of five healthy young individuals from the ear, nose and trachea simultaneously. We computed time-frequency features and compared them across the recording locations. The features included the peak and maximum frequencies of the power spectral density, the average power in different frequency bands, and the wavelet coefficients. The average power calculated over the 4 octave bands between 150 and 2,400 Hz showed a consistent trend, with less than 20 dB difference among the breath sounds of all the recording locations. Thus, analyzing breath sounds recorded from the ear and nose for the purpose of aspiration detection would give results similar to those from tracheal recordings; ear and nose recordings may therefore be a viable alternative when tracheal recording is not possible.

  3. Wind-instrument reflection function measurements in the time domain.

    PubMed

    Keefe, D H

    1996-04-01

    Theoretical and computational analyses of wind-instrument sound production in the time domain have emerged as useful tools for understanding musical instrument acoustics, yet there exist few experimental measurements of the air-column response directly in the time domain. A new experimental, time-domain technique is proposed to measure the reflection function response of woodwind and brass-instrument air columns. This response is defined at the location of sound regeneration in the mouthpiece or double reed. A probe assembly comprised of an acoustic source and microphone is inserted directly into the air column entryway using a foam plug to ensure a leak-free fit. An initial calibration phase involves measurements on a single cylindrical tube of known dimensions. Measurements are presented on an alto saxophone and euphonium. The technique has promise for testing any musical instrument air columns using a single probe assembly and foam plugs over a range of diameters typical of air-column entryways.

  4. Optimizing noise control strategy in a forging workshop.

    PubMed

    Razavi, Hamideh; Ramazanifar, Ehsan; Bagherzadeh, Jalal

    2014-01-01

    In this paper, a computer program based on a genetic algorithm is developed to find an economic solution for noise control in a forging workshop. Initially, input data, including characteristics of sound sources, human exposure, abatement techniques, and production plans are inserted into the model. Using sound pressure levels at working locations, the operators who are at higher risk are identified and selected for the next step. The program is devised in MATLAB such that the parameters can be easily defined and changed for comparison. The final results are structured into four sections that specify an appropriate abatement method for each operator and machine, the minimum allowance time for high-risk operators, the damping material required for enclosures, and the minimum total cost of these treatments. The validity of input data in addition to proper settings in the optimization model ensures the final solution is practical and economically reasonable.
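    The abstract does not spell out the optimization model, so the sketch below is only a plausible reduction of the idea: choose one abatement option per machine so that every operator ends up below the exposure limit at minimum cost, using a small penalty-based genetic algorithm. All numbers (options, costs, attenuations, base levels) are hypothetical placeholders, not data from the paper.

```python
import random

random.seed(0)  # reproducible run

# Hypothetical input data: for each machine, candidate abatement options
# as (cost, dB attenuation); option 0 is "do nothing".
OPTIONS = [
    [(0, 0), (1200, 8), (4500, 15)],   # machine 0
    [(0, 0), (900, 6), (3800, 12)],    # machine 1
    [(0, 0), (1500, 10), (5200, 18)],  # machine 2
]
BASE_LEVELS = [96, 91, 99]  # dB(A) at the nearest operator (assumed)
LIMIT = 85                  # permissible exposure level, dB(A)
PENALTY = 1e6               # cost penalty per dB still over the limit

def fitness(chrom):
    """Total abatement cost plus penalties for operators over the limit."""
    cost = 0.0
    for m, choice in enumerate(chrom):
        opt_cost, reduction = OPTIONS[m][choice]
        cost += opt_cost
        over = BASE_LEVELS[m] - reduction - LIMIT
        if over > 0:
            cost += PENALTY * over
    return cost

def evolve(pop_size=40, generations=200, mut_rate=0.2):
    """Tiny genetic algorithm: elitist selection, one-point crossover,
    random-resetting mutation."""
    pop = [[random.randrange(len(o)) for o in OPTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(OPTIONS))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < mut_rate:      # mutate one gene
                m = random.randrange(len(OPTIONS))
                child[m] = random.randrange(len(OPTIONS[m]))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

    With the penalty much larger than any plausible treatment cost, any solution whose fitness stays below the penalty scale is guaranteed to keep every operator within the limit.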

  5. Time-domain electromagnetic soundings collected in Dawson County, Nebraska, 2007-09

    USGS Publications Warehouse

    Payne, Jason; Teeple, Andrew

    2011-01-01

    Between April 2007 and November 2009, the U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, collected time-domain electromagnetic (TDEM) soundings at 14 locations in Dawson County, Nebraska. The TDEM soundings provide information pertaining to the hydrogeology at each of 23 sites at the 14 locations; 30 TDEM surface geophysical soundings were collected at the 14 locations to develop smooth and layered-earth resistivity models of the subsurface at each site. The soundings yield estimates of subsurface electrical resistivity; variations in subsurface electrical resistivity can be correlated with hydrogeologic and stratigraphic units. Results from each sounding were used to calculate resistivity to depths of approximately 90-130 meters (depending on loop size) below the land surface. Geonics Protem 47 and 57 systems, as well as the Alpha Geoscience TerraTEM, were used to collect the TDEM soundings (voltage data from which resistivity is calculated). For each sounding, voltage data were averaged and evaluated statistically before inversion (inverse modeling). Inverse modeling is the process of creating an estimate of the true distribution of subsurface resistivity from the measured apparent resistivity obtained from TDEM soundings. Smooth and layered-earth models were generated for each sounding. A smooth model is a vertical delineation of calculated apparent resistivity that represents a non-unique estimate of the true resistivity. Ridge regression (Interpex Limited, 1996) was used by the inversion software in a series of iterations to create a smooth model consisting of 24-30 layers for each sounding site. Layered-earth models were then generated based on the results of the smooth modeling. The layered-earth models are simplified (generally 1 to 6 layers) to represent geologic units with depth. Throughout the area, the layered-earth models range from 2 to 4 layers, depending on observed inflections in the raw data and smooth-model inversions. The TDEM data were considered to be of good quality on the basis of root mean square errors calculated after inversion modeling, comparisons with borehole geophysical logging, and repeatability.

  6. A New Mechanism of Sound Generation in Songbirds

    NASA Astrophysics Data System (ADS)

    Goller, Franz; Larsen, Ole N.

    1997-12-01

    Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.

  7. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    PubMed

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

    One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  8. On the role of glottis-interior sources in the production of voiced sound.

    PubMed

    Howe, M S; McGowan, R S

    2012-02-01

    The voice source is dominated by aeroacoustic sources downstream of the glottis. In this paper an investigation is made of the contribution to voiced speech of secondary sources within the glottis. The acoustic waveform is ultimately determined by the volume velocity of air at the glottis, which is controlled by vocal fold vibration, pressure forcing from the lungs, and unsteady backreactions from the sound and from the supraglottal air jet. The theory of aerodynamic sound is applied to study the influence on the fine details of the acoustic waveform of "potential flow" added-mass-type glottal sources, glottis friction, and vorticity either in the glottis-wall boundary layer or in the portion of the free jet shear layer within the glottis. These sources govern predominantly the high frequency content of the sound when the glottis is near closure. A detailed analysis performed for a canonical, cylindrical glottis of rectangular cross section indicates that glottis-interior boundary/shear layer vortex sources and the surface frictional source are of comparable importance; the influence of the potential flow source is about an order of magnitude smaller. © 2012 Acoustical Society of America

  9. [Perception by teenagers and adults of amplitude-varying sound sequences used in models of sound-source movement].

    PubMed

    Andreeva, I G; Vartanian, I A

    2012-01-01

    The ability to evaluate the direction of amplitude change in sound stimuli was studied in adults and in 11-12- and 15-16-year-old teenagers. Sequences of fragments of a 1-kHz tone whose amplitude changed over time were used as models of approaching and receding sound sources. The 11-12-year-old teenagers made a significantly higher number of errors in judging the direction of amplitude change than the two other groups, including in repeated experiments. The structure of the errors, i.e., the ratio of errors for stimuli increasing versus decreasing in amplitude, also differed between teenagers and adults. The possible effect of nonspecific activation of the cerebral cortex in teenagers on decision-making about complex sound stimuli, including the estimation of approach and withdrawal of a sound source, is discussed.

  10. Interior and exterior sound field control using general two-dimensional first-order sources.

    PubMed

    Poletti, M A; Abhayapala, T D

    2011-01-01

    Reproduction of a given sound field interior to a circular loudspeaker array without producing an undesirable exterior sound field is an unsolved problem over a broadband of frequencies. At low frequencies, by implementing the Kirchhoff-Helmholtz integral using a circular discrete array of line-source loudspeakers, a sound field can be recreated within the array while producing no exterior sound field, provided that the loudspeakers have variable first-order azimuthal polar responses formed from a combination of a two-dimensional (2D) monopole and a radially oriented 2D dipole. This paper examines the performance of circular discrete arrays of line-source loudspeakers which also include a tangential dipole, providing general variable-directivity responses in azimuth. It is shown that at low frequencies, the tangential dipoles are not required, but that near and above the Nyquist frequency, the tangential dipoles can both improve the interior accuracy and reduce the exterior sound field. The additional dipoles extend the useful range of the array by around an octave.
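    A general first-order azimuthal response of the kind described, monopole plus radial and tangential dipoles, can be written as D(φ) = α + β·cos φ + γ·sin φ. A minimal sketch (the weights are illustrative, not taken from the paper); note that the tangential term simply rotates the radial-dipole pattern by 90 degrees:

```python
import numpy as np

def first_order_response(phi, a, b, c):
    """General 2D first-order polar response: monopole weight a,
    radial-dipole weight b, tangential-dipole weight c."""
    return a + b * np.cos(phi) + c * np.sin(phi)

phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
cardioid = first_order_response(phi, 0.5, 0.5, 0.0)  # monopole + radial dipole
rotated  = first_order_response(phi, 0.5, 0.0, 0.5)  # tangential dipole instead
```

    Varying the three weights per loudspeaker and per frequency is what gives the array the extra degrees of freedom the paper exploits near the Nyquist frequency.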

  11. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.

    PubMed

    Shiell, Martha M; Hausfeld, Lars; Formisano, Elia

    2018-05-23

    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from; that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. 
Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors.

  12. The silent base flow and the sound sources in a laminar jet.

    PubMed

    Sinayoko, Samuel; Agarwal, Anurag

    2012-03-01

    An algorithm to compute the silent base flow sources of sound in a jet is introduced. The algorithm is based on spatiotemporal filtering of the flow field and is applicable to multifrequency sources. It is applied to an axisymmetric laminar jet and the resulting sources are validated successfully. The sources are compared to those obtained from two classical acoustic analogies, based on quiescent and time-averaged base flows. The comparison demonstrates how the silent base flow sources shed light on the sound generation process. It is shown that the dominant source mechanism in the axisymmetric laminar jet is "shear-noise," which is a linear mechanism. The algorithm presented here could be applied to fully turbulent flows to understand the aerodynamic noise-generation mechanism. © 2012 Acoustical Society of America

  13. Instrumentation for localized superconducting cavity diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conway, Z. A.; Ge, M.; Iwashita, Y.

    2017-01-12

    Superconducting accelerator cavities are now routinely operated at levels approaching the theoretical limit of niobium. To achieve these operating levels, more information than is available from the RF excitation signal is required to characterize and determine fixes for the sources of performance limitations. This information is obtained using diagnostic techniques which complement the analysis of the RF signal. In this paper we describe the operation of, and selected results from, three of these diagnostic techniques: large-scale thermometer arrays, defect location by second-sound waves, and high-precision cavity imaging with the Kyoto camera.

  14. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
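    A discrete probability density function of band levels of the kind described can be formed by binning repeated one-third-octave measurements into fixed-width dB bins. A minimal sketch, with synthetic data standing in for the immission measurements (the band, mean, and spread are hypothetical):

```python
import numpy as np

def band_level_pdf(levels_db, bin_width=1.0):
    """Discrete probability density of repeated band-level measurements,
    using fixed-width dB bins that cover the full data range."""
    lo = np.floor(levels_db.min())
    hi = np.ceil(levels_db.max()) + bin_width
    edges = np.arange(lo, hi + bin_width, bin_width)
    counts, edges = np.histogram(levels_db, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])   # bin-center levels, dB
    return centers, counts / counts.sum()      # normalized to unit mass

# Synthetic stand-in for repeated one-third-octave immission levels
rng = np.random.default_rng(0)
levels = rng.normal(42.0, 3.0, size=500)       # dB, hypothetical 50 Hz band
centers, pdf = band_level_pdf(levels)
```

    Repeating this per one-third-octave band and per integer wind speed gives the family of discrete distributions the paper compares against acoustic-comfort metrics.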

  15. Sound Radiated by a Wave-Like Structure in a Compressible Jet

    NASA Technical Reports Server (NTRS)

    Golubev, V. V.; Prieto, A. F.; Mankbadi, R. R.; Dahl, M. D.; Hixon, R.

    2003-01-01

    This paper extends the analysis of acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. Compared to previous work, a modified approach to the sound source modeling is examined that employs a set of solutions to linearized Euler equations. The sound radiation is then calculated using an integral surface method.

  16. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  17. Photoacoustic Effect Generated from an Expanding Spherical Source

    NASA Astrophysics Data System (ADS)

    Bai, Wenyu; Diebold, Gerald J.

    2018-02-01

    Although the photoacoustic effect is typically generated by amplitude-modulated continuous or pulsed radiation, the form of the wave equation for pressure that governs the generation of sound indicates that optical sources moving in an absorbing fluid can produce sound as well. Here, the characteristics of the acoustic wave produced by a radially symmetric Gaussian source expanding outwardly from the origin are found. The unique feature of the photoacoustic effect from the spherical source is a trailing compressive wave that arises from reflection of an inwardly propagating component of the wave. Similar to the one-dimensional geometry, an unbounded amplification effect is found for the Gaussian source expanding at the sound speed.

  18. 33 CFR 3.65-10 - Sector Seattle: Puget Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sector Seattle: Puget Sound...: Puget Sound Marine Inspection Zone and Captain of the Port Zone. Sector Seattle's office is located in Seattle, WA. The boundaries of Sector Seattle's Puget Sound Marine Inspection and Captain of the Port...

  19. 33 CFR 3.65-10 - Sector Puget Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sector Puget Sound Marine... ZONES, AND CAPTAIN OF THE PORT ZONES Thirteenth Coast Guard District § 3.65-10 Sector Puget Sound Marine Inspection Zone and Captain of the Port Zone. Sector Puget Sound's office is located in Seattle, WA. The...

  20. 33 CFR 3.65-10 - Sector Puget Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Sector Puget Sound Marine... ZONES, AND CAPTAIN OF THE PORT ZONES Thirteenth Coast Guard District § 3.65-10 Sector Puget Sound Marine Inspection Zone and Captain of the Port Zone. Sector Puget Sound's office is located in Seattle, WA. The...

  1. 33 CFR 3.65-10 - Sector Puget Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 1 2014-07-01 2014-07-01 false Sector Puget Sound Marine... ZONES, AND CAPTAIN OF THE PORT ZONES Thirteenth Coast Guard District § 3.65-10 Sector Puget Sound Marine Inspection Zone and Captain of the Port Zone. Sector Puget Sound's office is located in Seattle, WA. The...

  2. Radio Sounding of the Magnetopause from the Ground (NIRFI Part)

    DTIC Science & Technology

    2000-04-06

    subsolar point sounding from SURA location leads to oblique sounding wave propagation through the ionosphere when penetration condition requires less... ecliptic plane (along the direction of solar wind sector boundaries, morning hours) • near the subsolar point (along the solar wind velocity, noon

  3. Measurement and Numerical Calculation of Force on a Particle in a Strong Acoustic Field Required for Levitation

    NASA Astrophysics Data System (ADS)

    Kozuka, Teruyuki; Yasui, Kyuichi; Tuziuti, Toru; Towata, Atsuya; Lee, Judy; Iida, Yasuo

    2009-07-01

    Using a standing-wave field generated between a sound source and a reflector, it is possible to trap small objects at nodes of the sound pressure distribution in air. In this study, a sound field generated under a flat or concave reflector was studied by both experimental measurement and numerical calculation. The calculated result agrees well with the experimental data. The maximum force generated between a sound source of 25.0 mm diameter and a concave reflector is 0.8 mN in the experiment. A steel ball of 2.0 mm in diameter was levitated in the sound field in air.
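    As a rough consistency check on the reported numbers (the ball diameter and maximum force are from the abstract; the density of steel is an assumed 7800 kg/m³), the weight of the 2.0 mm ball can be compared with the 0.8 mN maximum force:

```python
import math

# Values from the abstract: 2.0 mm steel ball, 0.8 mN maximum force.
# The density of steel (7800 kg/m^3) is an assumption.
rho_steel = 7800.0      # kg/m^3
d = 2.0e-3              # ball diameter, m
g = 9.81                # m/s^2

volume = math.pi * d ** 3 / 6.0          # sphere volume
weight = rho_steel * volume * g          # gravitational force, N
f_max = 0.8e-3                           # reported maximum acoustic force, N

# The reported force (~0.8 mN) comfortably exceeds the ball's weight
# (~0.32 mN), so levitation of the 2.0 mm ball is consistent.
print(f"weight = {weight * 1e3:.2f} mN, max force = {f_max * 1e3:.2f} mN")
```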

  4. Bronchial intubation could be detected by the visual stethoscope technique in pediatric patients.

    PubMed

    Kimura, Tetsuro; Suzuki, Akira; Mimuro, Soichiro; Makino, Hiroshi; Sato, Shigehito

    2012-12-01

    We created a system that allows the visualization of breath sounds (visual stethoscope). We compared the visual stethoscope technique with auscultation for the detection of bronchial intubation in pediatric patients. In the auscultation group, an anesthesiologist advanced the tracheal tube, while another anesthesiologist auscultated bilateral breath sounds to detect the change and/or disappearance of unilateral breath sounds. In the visualization group, the stethoscope was used to detect changes in breath sounds and/or disappearance of unilateral breath sounds. The distance from the edge of the mouth to the carina was measured using a fiberoptic bronchoscope. Forty pediatric patients were enrolled in the study. At the point at which irregular breath sounds were auscultated, the tracheal tube was located at 0.5 ± 0.8 cm on the bronchial side from the carina. When a detectable change of shape of the visualized breath sound was observed, the tracheal tube was located 0.1 ± 1.2 cm on the bronchial side (not significant). At the point at which unilateral breath sounds were auscultated or a unilateral shape of the visualized breath sound was observed, the tracheal tube was 1.5 ± 0.8 or 1.2 ± 1.0 cm on the bronchial side, respectively (not significant). The visual stethoscope made it possible to display the left and right lung sounds simultaneously and to detect changes of breath sounds and unilateral breath sounds as the tracheal tube was advanced. © 2012 Blackwell Publishing Ltd.

  5. West Texas array experiment: Noise and source characterization of short-range infrasound and acoustic signals, along with lab and field evaluation of Intermountain Laboratories infrasound microphones

    NASA Astrophysics Data System (ADS)

    Fisher, Aileen

    The term infrasound describes atmospheric sound waves with frequencies below 20 Hz, while acoustics are classified within the audible range of 20 Hz to 20 kHz. Infrasound and acoustic monitoring in the scientific community is hampered by low signal-to-noise ratios and a limited number of studies on regional and short-range noise and source characterization. The JASON Report (2005) suggests the infrasound community focus on more broad-frequency, observational studies within a tactical distance of 10 km. In keeping with that recommendation, this paper presents a study of regional and short-range atmospheric acoustic and infrasonic noise characterization, at a desert site in West Texas, covering a broad frequency range of 0.2 to 100 Hz. To spatially sample the band, a large number of infrasound gauges was needed. A laboratory instrument analysis is presented of the set of low-cost infrasound sensors used in this study, manufactured by Inter-Mountain Laboratories (IML). Analysis includes spectra, transfer functions and coherences to assess the stability and range of the gauges, and complements additional instrument testing by Sandia National Laboratories. The IMLs documented here have been found reliably coherent from 0.1 to 7 Hz without instrument correction. Corrections were built using corresponding time series from the commercially available and more expensive Chaparral infrasound gauge, so that the corrected IML outputs were able to closely mimic the Chaparral output. Arrays of gauges are needed for atmospheric sound signal processing. Our West Texas experiment consisted of a 1.5 km aperture, 23-gauge infrasound/acoustic array of IMLs, with a compact, 12 m diameter grid-array of rented IMLs at the center. To optimize signal recording, signal-to-noise ratio needs to be quantified with respect to both frequency band and coherence length. 
The higher-frequency grid array consisted of 25 microphones arranged in a five by five pattern with 3 meter spacing, without spatial wind-noise filtering hoses or pipes. The grid was within the distance limits of a single gauge's normal hose array, and the data were used to perform a spatial noise-correlation study. The highest correlation values were not found in the lower frequencies as anticipated, owing to a lack of sources in the lower range and the uncorrelated nature of wind noise. The highest values, with cross-correlation averages between 0.4 and 0.7 at gauge separations of 3 to 17 m, were found at night between 10 and 20 Hz, due to a continuous local noise source and low wind. Data from the larger array were used to identify continuous and impulsive signals in the area that comprise the ambient noise field. Ground-truth infrasound and acoustic time and location data were taken for a highway site, a wind farm, and a natural gas compressor. Close-range sound data were taken with a single IML "traveler" gauge. Spectrograms and spectrum peaks were used to identify their source signatures. Two regional location techniques were also tested with data from the large array by using a propane cannon as a controlled, impulsive source. A comparison is presented of the Multiple Signal Classification algorithm (MUSIC) to a simple, quadratic, circular-wavefront algorithm. MUSIC was unable to effectively separate noise and source eigenvalues and eigenvectors due to spatial aliasing of the propane cannon signal and a lack of incoherent noise. Only 33 out of 80 usable shots were located by MUSIC within 100 m. Future work with the algorithm should focus on location of impulsive and continuous signals with development of methods for accurate separation of signal and noise eigenvectors in the presence of coherent noise and possible spatial aliasing. 
The circular wavefront algorithm performed better with our specific dataset and successfully located 70 out of 80 propane cannon shots within 100 m of the original location, 66 of which were within 20 m. This method has low computation requirements, making it well suited for real-time automated processing and smaller computers. Future research could focus on development of the method for an automated system and statistical impulsive noise filtering for higher accuracy.
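    The abstract does not give the circular-wavefront algorithm's details, but one standard closed-form version fits the description: subtracting the squared range equation of one reference gauge from the others yields a system that is linear in the source position and emission time. A minimal sketch; the gauge geometry, shot location, and sound speed below are illustrative, not the West Texas array's:

```python
import numpy as np

C = 343.0  # nominal speed of sound in air, m/s (assumed constant)

def locate(gauges, times, c=C):
    """Closed-form circular-wavefront location. For each gauge i,
    |g_i - s|^2 = c^2 (t_i - t0)^2; subtracting the equation for gauge 0
    leaves a system linear in the source s = (x, y) and emission time t0."""
    g0, tref = gauges[0], times[0]
    A = np.column_stack([2.0 * (gauges[1:] - g0),
                         -2.0 * c**2 * (times[1:] - tref)])
    b = (np.sum(gauges[1:] ** 2, axis=1) - np.sum(g0 ** 2)
         - c**2 * (times[1:] ** 2 - tref ** 2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares if >3 gauges
    return sol                                     # (x, y, t0)

# Synthetic check: a "shot" at (800, 200) m emitted at t0 = 0.2 s,
# recorded by a five-gauge array (geometry is illustrative)
src, t0 = np.array([800.0, 200.0]), 0.2
gauges = np.array([[0.0, 0.0], [600.0, 0.0], [600.0, 600.0],
                   [0.0, 600.0], [100.0, 300.0]])
times = t0 + np.linalg.norm(gauges - src, axis=1) / C
x, y, t_est = locate(gauges, times)
```

    The low computational cost noted in the abstract follows directly: locating a shot reduces to one small least-squares solve.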

  6. Sound field reproduction as an equivalent acoustical scattering problem.

    PubMed

    Fazi, Filippo Maria; Nelson, Philip A

    2013-11-01

    Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
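    For reference, the single layer potential that models the continuous secondary-source distribution has the standard form (the notation below is the usual one for this problem, not copied from the paper):

```latex
p(\mathbf{x}) = \int_{\partial\Omega} G(\mathbf{x}|\mathbf{y})\,\mu(\mathbf{y})\,\mathrm{d}S(\mathbf{y}),
\qquad
G(\mathbf{x}|\mathbf{y}) = \frac{e^{-\mathrm{i}k|\mathbf{x}-\mathbf{y}|}}{4\pi\,|\mathbf{x}-\mathbf{y}|},
```

    where ∂Ω is the boundary carrying the secondary sources, μ is the source-strength density to be determined, G is the free-field Green function (sign convention depends on the assumed time dependence), and the reproduction problem is to find μ such that p matches the target field inside ∂Ω.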

  7. Investigation of spherical loudspeaker arrays for local active control of sound.

    PubMed

    Peleg, Tomer; Rafaely, Boaz

    2011-10-01

    Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America

  8. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are expensive in both computation and memory. These techniques therefore face many challenges to their applicability in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost at mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. First, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and requires orders of magnitude less runtime memory than prior wave-based techniques. Second, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources.
This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating, or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. We demonstrate that by carefully mapping all components of the wave simulator to the parallel processing capabilities of graphics processors, significant performance improvements can be achieved over CPU-based simulators while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in virtual environments. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
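    The dissertation's solver is a large GPU-based mid-frequency simulator, but the underlying wave equation it discretizes can be illustrated with a minimal 1-D finite-difference time-domain (FDTD) sketch. This toy example (my own, not the author's code; all parameters are illustrative) shows the standard second-order update for u_tt = c² u_xx with a pressure impulse at the domain center:

    ```python
    import numpy as np

    # Minimal 1-D FDTD sketch of the scalar wave equation u_tt = c^2 * u_xx.
    c = 343.0          # speed of sound in air, m/s
    dx = 0.05          # spatial step, m
    dt = dx / (2 * c)  # time step giving Courant number 0.5 (CFL-stable)
    n = 200            # grid points

    u_prev = np.zeros(n)
    u = np.zeros(n)
    u[n // 2] = 1.0    # initial pressure impulse at the domain center

    r2 = (c * dt / dx) ** 2
    for _ in range(100):
        u_next = np.zeros(n)
        # central-difference update on interior points; ends stay 0 (rigid walls)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next

    print(np.abs(u).max())
    ```

    The symmetric initial condition produces two wavefronts traveling outward at speed c; a full room simulator extends the same update to 3-D grids, absorbing boundaries, and frequency-dependent materials.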

  9. Health-risk assessment of chemical contamination in Puget Sound seafood. Final report 1985-1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, L.

    1988-09-01

This report provides resource management and health agencies with a general indication of the magnitude of potential human health risks associated with consumption of recreationally harvested seafoods from Puget Sound. Data collection and evaluation focused on a variety of metal and organic contaminants in fish, shellfish and edible seaweeds from 22 locations in the Sound. EPA risk assessment techniques were used to characterize risks to average and high consumer groups for both carcinogens and noncarcinogens. Theoretical risks associated with consumption of both average and high quantities of Puget Sound seafood appear to be comparable to or substantially less than those for fish and shellfish from other locations in the United States.

  10. Echolocation versus echo suppression in humans

    PubMed Central

    Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz

    2013-01-01

Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing both to the direct sound of the vocalization that precedes the echoes and to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105

  11. Two dimensional sound field reproduction using higher order sources to exploit room reflections.

    PubMed

    Betlehem, Terence; Poletti, Mark A

    2014-04-01

In this paper, sound field reproduction is performed in a reverberant room using higher order sources (HOSs) and a calibrating microphone array. Previously, sound fields were reproduced with fixed-directivity sources, with the reverberation compensated for using digital filters. By virtue of their directive properties, however, HOSs may be driven not only to avoid creating excess reverberation but also to make room reflections contribute constructively to the desired sound field. The manner in which the loudspeakers steer sound around the room is determined by measuring the acoustic transfer functions. The requirements on the number and order N of HOSs for accurate reproduction in a reverberant room are derived, showing a (2N + 1)-fold decrease in the number of loudspeakers compared with monopole sources. HOSs are shown to be applicable to rooms with a rich variety of wall reflections, while in an anechoic room their advantages may be lost. Performance is investigated in a room using extensions of both the diffuse field model and a more rigorous image-source simulation method, which account for the properties of the HOSs. The robustness of the proposed method is validated by introducing measurement errors.
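    The (2N + 1)-fold saving can be made concrete with a back-of-envelope count. The sketch below is my own arithmetic, not from the paper: it assumes the common 2-D truncation rule M ≈ ⌈kR⌉ for reproducing a sound field over a zone of radius R, so a monopole array needs roughly 2M + 1 loudspeakers, while order-N HOSs need about (2M + 1)/(2N + 1):

    ```python
    import math

    # Rough loudspeaker-count estimate for 2-D reproduction over a radius-R
    # zone, using the truncation rule M ~= ceil(k * R) and the (2N+1)-fold
    # saving for order-N higher order sources. All numbers are illustrative.
    def speakers_needed(freq_hz, radius_m, hos_order, c=343.0):
        k = 2 * math.pi * freq_hz / c        # wavenumber
        M = math.ceil(k * radius_m)          # sound-field truncation order
        monopoles = 2 * M + 1                # monopole loudspeakers required
        hos = math.ceil(monopoles / (2 * hos_order + 1))
        return monopoles, hos

    # e.g. a 0.5 m listening zone at 1 kHz with second-order sources
    mono, hos = speakers_needed(1000.0, 0.5, hos_order=2)
    print(mono, hos)   # 21 monopoles vs. 5 second-order sources
    ```

    The count grows linearly with frequency and zone radius, which is why directive sources become attractive for mid-frequency reproduction in larger zones.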

  12. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Moreover, until recently the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, for spatial audio to be widely adopted, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation models. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.

  13. Noise reduction tests of large-scale-model externally blown flap using trailing-edge blowing and partial flap slot covering. [jet aircraft noise reduction

    NASA Technical Reports Server (NTRS)

    Mckinzie, D. J., Jr.; Burns, R. J.; Wagner, J. M.

    1976-01-01

Noise data were obtained with a large-scale cold-flow model of a two-flap, under-the-wing, externally blown flap proposed for use on future STOL aircraft. The noise suppression effectiveness of locating a slot conical nozzle at the trailing edge of the second flap and of applying partial covers to the slots between the wing and flaps was evaluated. Overall sound pressure level reductions of 5 dB occurred below the wing in the flyover plane. Existing models of several noise sources were applied to the test results. The resulting analytical relation compares favorably with the test data. The noise source mechanisms were analyzed and are discussed.

  14. [Synchronous playing and acquiring of heart sounds and electrocardiogram based on LabVIEW].

    PubMed

    Dan, Chunmei; He, Wei; Zhou, Jing; Que, Xiaosheng

    2008-12-01

This paper describes a comprehensive system that acquires heart sounds and the electrocardiogram (ECG) in parallel, synchronizes their display, and plays back the heart sounds, so that auscultation can be checked against the phonocardiogram. The hardware system, with a C8051F340 as its core, acquires the heart sound and ECG synchronously and then sends each to its respective indicator. Heart sounds are displayed and played simultaneously by controlling the moments at which data are written to the indicator and to the sound output device. In clinical testing, heart sounds were successfully located against the ECG and played in real time.

  15. A spatially collocated sound thrusts a flash into awareness

    PubMed Central

    Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta

    2015-01-01

    To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals pointing towards low level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126

  16. Automated ultrasonic arterial vibrometry: detection and measurement

    NASA Astrophysics Data System (ADS)

    Plett, Melani I.; Beach, Kirk W.; Paun, Marla

    2000-04-01

    Since the invention of the stethoscope, the detection of vibrations and sounds from the body has been a touchstone of diagnosis. However, the method is limited to vibrations whose associated sounds transmit to the skin, with no means to determine the anatomic and physiological source of the vibrations save the cunning of the examiner. Using ultrasound quadrature phase demodulation methods similar to those of ultrasonic color flow imaging, we have developed a system to detect and measure tissue vibrations with amplitude excursions as small as 30 nanometers. The system uses wavelet analysis for sensitive and specific detection, as well as measurement, of short duration vibrations amidst clutter and time-varying, colored noise. Vibration detection rates in ROC curves from simulated data predict > 99.5% detections with < 1% false alarms for signal to noise ratios >= 0.5. Vibrations from in vivo arterial stenoses and punctures have been studied. The results show that vibration durations vary from 10 - 150 ms, frequencies from 100 - 1000 Hz, and amplitudes from 30 nanometers to several microns. By marking the location of vibration sources on ultrasound images, and using color to indicate amplitude, frequency or acoustic intensity, new diagnostic information is provided to aid disorder diagnosis and management.

  17. Evaluation of a scale-model experiment to investigate long-range acoustic propagation

    NASA Technical Reports Server (NTRS)

    Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.

    1987-01-01

Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones colinear with a continuous-wave sound source were measured over a range from 10 to 160 wavelengths at a nominal test frequency of 10 kHz. Most tests were made with a hard model surface (plywood), but limited tests were also made with a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed, caused by disturbances in the propagation medium. There was also evidence of extraneous propagation-path contributions to data irregularities at the more remote microphones. Sensitivity studies for the hard-surface configuration indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.
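    The 100:1 scale-model bookkeeping is easy to make explicit: model frequencies map down and distances map up by the scale factor, so the quoted 10 kHz test frequency corresponds to 100 Hz at full scale. A short illustration (my own arithmetic, assuming a nominal speed of sound of 343 m/s):

    ```python
    # Scale-model bookkeeping for a 100:1 experiment (illustrative arithmetic,
    # not figures from the report).
    scale = 100.0
    c = 343.0                  # m/s, nominal speed of sound
    f_model = 10_000.0         # Hz, model test frequency
    f_full = f_model / scale   # equivalent full-scale frequency: 100 Hz
    wavelength = c / f_model   # ~0.034 m at the model frequency

    distances = {}
    for n_wl in (10, 40, 80, 160):   # wavelength counts quoted in the abstract
        d_model = n_wl * wavelength
        distances[n_wl] = (d_model, d_model * scale)
        print(f"{n_wl:3d} wavelengths: {d_model:5.2f} m model, "
              f"{d_model * scale:6.1f} m full scale")
    ```

    So the 10-to-160-wavelength microphone range spans roughly 0.3 m to 5.5 m on the model, standing in for about 34 m to 550 m of full-scale propagation.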

  18. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks

    PubMed Central

    Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao

    2016-01-01

A sound-target-searching robot system is described that includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has an embedded sound-signal enhancement, recognition, and location method, and a sound-searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. Mel-frequency cepstral coefficients (MFCCs) are extracted as the features of the audio signal. Based on the K-nearest neighbor classification method, the trained feature templates are matched to recognize the sound-signal type. This paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation to locate the sound according to the TDOA and the geometry of the microphone array. A new mapping is proposed to direct the motors to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate authority to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound-target-searching function without collisions and performs well. PMID:27657088
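    The TDOA step can be sketched in a few lines. The paper's "improved" generalized cross correlation variant is not spelled out in the abstract, so the sketch below uses the standard GCC-PHAT weighting on synthetic signals; the function name and all parameters are my own:

    ```python
    import numpy as np

    def gcc_phat(sig, ref, fs):
        """Estimate the delay of `sig` relative to `ref` (seconds) via GCC-PHAT."""
        n = len(sig) + len(ref)                # zero-pad to avoid circular wrap
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        R /= np.abs(R) + 1e-12                 # PHAT weighting: keep phase only
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs  # lag of the peak

    # toy check: delay a noise burst by 25 samples between two "microphones"
    fs = 16000
    rng = np.random.default_rng(0)
    src = rng.standard_normal(2048)
    mic1 = np.concatenate((np.zeros(25), src))[:2048]   # delayed channel
    mic2 = src
    tdoa = gcc_phat(mic1, mic2, fs)
    print(tdoa * fs)   # ≈ 25 samples
    ```

    In the full system, TDOAs from the 4-channel array feed a spherical-interpolation solver that, given the array geometry, converts the delays into a source position.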

  19. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks.

    PubMed

    Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao

    2016-09-21

A sound-target-searching robot system is described that includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has an embedded sound-signal enhancement, recognition, and location method, and a sound-searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. An improved spectral subtraction method is used for noise reduction. Mel-frequency cepstral coefficients (MFCCs) are extracted as the features of the audio signal. Based on the K-nearest neighbor classification method, the trained feature templates are matched to recognize the sound-signal type. This paper utilizes an improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation to locate the sound according to the TDOA and the geometry of the microphone array. A new mapping is proposed to direct the motors to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate authority to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound-target-searching function without collisions and performs well.

  20. Calculating far-field radiated sound pressure levels from NASTRAN output

    NASA Technical Reports Server (NTRS)

    Lipman, R. R.

    1986-01-01

FAFRAP is a computer program which calculates far-field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN, or an added-mass approximation to the fluid loading can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.
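    The monopole case against which FAFRAP is checked has a closed-form far-field answer: |p(r)| = ρck·Q/(4πr) for a point source of volume-velocity amplitude Q. The sketch below evaluates that textbook relation with hypothetical numbers (air, 20 µPa SPL reference); it is not FAFRAP's own calculation:

    ```python
    import math

    # Far-field pressure of a point monopole:
    #   |p(r)| = rho * c * k * Q / (4 * pi * r),   k = 2*pi*f / c
    # All values below are illustrative assumptions.
    rho = 1.21        # air density, kg/m^3
    c = 343.0         # speed of sound, m/s
    f = 250.0         # source frequency, Hz
    Q = 1e-4          # volume-velocity amplitude, m^3/s (hypothetical)
    r = 10.0          # observer distance, m
    p_ref = 20e-6     # SPL reference pressure in air, Pa

    k = 2 * math.pi * f / c
    p_amp = rho * c * k * Q / (4 * math.pi * r)     # peak pressure, Pa
    p_rms = p_amp / math.sqrt(2)
    spl = 20 * math.log10(p_rms / p_ref)            # sound pressure level, dB
    print(f"{p_amp:.3e} Pa peak -> {spl:.1f} dB SPL at {r} m")
    ```

    The 1/r dependence means each doubling of distance drops the far-field level by about 6 dB, which is the spherical-spreading behavior an explicit check of a radiated-noise code should reproduce.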

Top