Sample records for external sound source

  1. The role of reverberation-related binaural cues in the externalization of speech.

    PubMed

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2015-08-01

    The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
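
    The core auralization step described above can be sketched in a few lines: convolve a dry (anechoic) signal with left- and right-ear BRIRs and present the result over headphones. The sketch below uses synthetic placeholder signals and an arbitrary sample rate, not the study's measured BRIRs.

    ```python
    # Minimal sketch: simulate an externalized source over headphones by
    # convolving dry speech with a binaural room impulse response (BRIR).
    # Signals and sample rate are placeholders, not data from the study.
    import numpy as np
    from scipy.signal import fftconvolve

    fs = 48000
    speech = np.random.randn(fs * 2)                                  # stand-in for dry speech
    brir_left = np.random.randn(fs) * np.exp(-np.linspace(0, 8, fs))  # toy BRIRs
    brir_right = np.random.randn(fs) * np.exp(-np.linspace(0, 8, fs))

    left = fftconvolve(speech, brir_left)[:len(speech)]
    right = fftconvolve(speech, brir_right)[:len(speech)]
    binaural = np.stack([left, right], axis=1)
    binaural /= np.max(np.abs(binaural))      # normalize before headphone playback
    ```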

  2. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
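
    A minimal numerical sketch of the directivity pipeline described above, under simplifying assumptions: a sampled source directivity is projected onto a truncated spherical-harmonic (SH) basis by least squares, and the pressure at the listener is formed as a weighted sum of precomputed per-SH-source fields. All arrays are synthetic placeholders; this is not the paper's implementation or its game-engine integration.

    ```python
    # Sketch: least-squares SH decomposition of a sampled source directivity,
    # then the listener-position field as a weighted sum of precomputed
    # per-SH-source sound fields. All data here are synthetic.
    import numpy as np
    from scipy.special import sph_harm

    order = 2                                    # SH truncation order
    n_dirs = 64                                  # sampled emission directions
    rng = np.random.default_rng(0)
    az = rng.uniform(0, 2 * np.pi, n_dirs)       # azimuth
    pol = np.arccos(rng.uniform(-1, 1, n_dirs))  # polar angle (uniform on sphere)

    # Basis matrix: one column per (n, m) spherical-harmonic term.
    terms = [(n, m) for n in range(order + 1) for m in range(-n, n + 1)]
    Y = np.stack([sph_harm(m, n, az, pol) for n, m in terms], axis=1)

    directivity = rng.standard_normal(n_dirs)    # measured directivity (placeholder)
    weights, *_ = np.linalg.lstsq(Y, directivity.astype(complex), rcond=None)

    # Precomputed complex pressure of each elementary SH source at the listener
    # position, at one frequency (placeholder values).
    sh_fields = rng.standard_normal(len(terms)) + 1j * rng.standard_normal(len(terms))
    p_listener = np.dot(weights, sh_fields)      # total field = weighted sum
    ```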

  3. 24 CFR 51.103 - Criteria and standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...-night average sound level produced as the result of the accumulation of noise from all sources contributing to the external noise environment at the site. Day-night average sound level, abbreviated as DNL and symbolized as Ldn, is the 24-hour average sound level, in decibels, obtained after addition of 10...
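
    The record above is truncated, but the quantity it defines, the day-night average sound level (DNL, Ldn), is the 24-hour energy average of hourly levels with a 10 dB penalty added to the nine nighttime hours (22:00-07:00). A worked example with invented hourly values follows.

    ```python
    # Day-night average sound level (Ldn): 24-hour energy average with a 10 dB
    # penalty added to nighttime (22:00-07:00) hourly levels. The hourly values
    # here are invented purely to illustrate the arithmetic.
    import math

    hourly_leq = [55] * 24                 # hourly equivalent levels, dB
    hourly_leq[22] = hourly_leq[23] = 60   # a couple of louder night hours

    night = set(range(22, 24)) | set(range(0, 7))
    energy = sum(10 ** ((L + (10 if h in night else 0)) / 10)
                 for h, L in enumerate(hourly_leq))
    ldn = 10 * math.log10(energy / 24)
    print(f"Ldn = {ldn:.1f} dB")
    ```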

  4. The Contribution of Head Movement to the Externalization and Internalization of Sounds

    PubMed Central

    Brimijoin, W. Owen; Boyd, Alan W.; Akeroyd, Michael A.

    2013-01-01

    Background When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free-field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation matches that of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization. Methodology/Principal Findings We performed two experiments: 1) Using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners’ head movements. 2) Using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses. Conclusions/Significance Head movements play a significant role in the externalization of sound sources. These findings imply tight integration between binaural cues and self motion cues and underscore the importance of self motion for spatial auditory perception. PMID:24312677

  5. Sound produced by an oscillating arc in a high-pressure gas

    NASA Astrophysics Data System (ADS)

    Popov, Fedor K.; Shneider, Mikhail N.

    2017-08-01

    We suggest a simple theory to describe the sound generated by small periodic perturbations of a cylindrical arc in a dense gas. Theoretical analysis was done within the framework of the non-self-consistent channel arc model and supplemented with time-dependent gas dynamic equations. It is shown that an arc with power-amplitude oscillations on the order of several percent is a source of sound whose intensity is comparable with that of the external ultrasound sources used in experiments to increase the yield of nanoparticles in high-pressure arc systems for nanoparticle synthesis.

  6. Interior sound field control using generalized singular value decomposition in the frequency domain.

    PubMed

    Pasco, Yann; Gauthier, Philippe-Aubert; Berry, Alain; Moreau, Stéphane

    2017-01-01

    The problem of controlling a sound field inside a region surrounded by acoustic control sources is considered. Inspired by the Kirchhoff-Helmholtz integral, the use of double-layer source arrays, with the sources approximated as monopole and radial dipole transducers, allows such control while avoiding modification of the external sound field by the control sources. However, the practical implementation of the Kirchhoff-Helmholtz integral in physical space leads to large numbers of control sources and error sensors along with excessive controller complexity in three dimensions. The present study investigates the potential of the Generalized Singular Value Decomposition (GSVD) to reduce the controller complexity and separate the effect of control sources on the interior and exterior sound fields, respectively. A proper truncation of the singular basis provided by the GSVD factorization is shown to lead to effective cancellation of the interior sound field at frequencies below the spatial Nyquist frequency of the control source array while leaving the exterior sound field almost unchanged. Proofs of concept are provided for interior problems by simulations in a free-field scenario with circular arrays and in a reflective environment with square arrays.
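
    One way to sketch the GSVD idea, using the standard identity that the right generalized singular vectors of a matrix pair (G_int, G_ext) are the eigenvectors of the pencil (G_int^H G_int, G_ext^H G_ext): directions with large generalized singular values couple strongly into the interior field and weakly into the exterior one, so truncating to those directions and cancelling the interior field in that subspace approximates the strategy described above. All matrices below are random placeholders, not transfer functions from the study.

    ```python
    # Sketch: GSVD-style subspace truncation for interior field control.
    # Eigenvectors of the pencil (G_int^H G_int, G_ext^H G_ext) with large
    # eigenvalues are source combinations that drive the interior field much
    # more than the exterior one. All matrices are random stand-ins for
    # frequency-domain transfer matrices at one frequency.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(1)
    n_src, n_int, n_ext = 16, 24, 24
    G_int = rng.standard_normal((n_int, n_src)) + 1j * rng.standard_normal((n_int, n_src))
    G_ext = rng.standard_normal((n_ext, n_src)) + 1j * rng.standard_normal((n_ext, n_src))
    p_int = rng.standard_normal(n_int) + 1j * rng.standard_normal(n_int)  # primary field

    A = G_int.conj().T @ G_int
    B = G_ext.conj().T @ G_ext + 1e-9 * np.eye(n_src)    # small regularization
    lam, W = eigh(A, B)                                  # eigenvalues ascending

    k = 8                                                # truncation
    Wk = W[:, -k:]                                       # strongest interior/exterior ratio
    q_sub, *_ = np.linalg.lstsq(G_int @ Wk, -p_int, rcond=None)
    q = Wk @ q_sub                                       # control source strengths

    print("interior residual:", np.linalg.norm(G_int @ q + p_int))
    print("exterior leakage :", np.linalg.norm(G_ext @ q))
    ```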

  7. Advanced Systems for Monitoring Underwater Sounds

    NASA Technical Reports Server (NTRS)

    Lane, Michael; Van Meter, Steven; Gilmore, Richard Grant; Sommer, Keith

    2007-01-01

    The term "Passive Acoustic Monitoring System" (PAMS) describes a developmental sensing-and-data-acquisition system for recording underwater sounds. The sounds (more precisely, digitized and preprocessed versions from acoustic transducers) are subsequently analyzed by a combination of data processing and interpretation to identify and/or, in some cases, to locate the sources of those sounds. PAMS was originally designed to locate the sources such as fish of species that one knows or seeks to identify. The PAMS unit could also be used to locate other sources, for example, marine life, human divers, and/or vessels. The underlying principles of passive acoustic sensing and analyzing acoustic-signal data in conjunction with temperature and salinity data are not new and not unique to PAMS. Part of the uniqueness of the PAMS design is that it is the first deep-sea instrumentation design to provide a capability for studying soniferous marine animals (especially fish) over the wide depth range described below. The uniqueness of PAMS also lies partly in a synergistic combination of advanced sensing, packaging, and data-processing design features with features adapted from proven marine instrumentation systems. This combination affords a versatility that enables adaptation to a variety of undersea missions using a variety of sensors. The interpretation of acoustic data can include visual inspection of power-spectrum plots for identification of spectral signatures of known biological species or artificial sources. Alternatively or in addition, data analysis could include determination of relative times of arrival of signals at different acoustic sensors arrayed at known locations. From these times of arrival, locations of acoustic sources (and errors in those locations) can be estimated. Estimates of relative locations of sources and sensors can be refined through analysis of the attenuation of sound in the intervening water in combination with water-temperature and salinity data acquired by instrumentation systems other than PAMS. A PAMS is packaged as a battery-powered unit, mated with external sensors, that can operate in the ocean at any depth from 2 m to 1 km. A PAMS includes a pressure housing, a deep-sea battery, a hydrophone (which is one of the mating external sensors), and an external monitor and keyboard box. In addition to acoustic transducers, external sensors can include temperature probes and, potentially, underwater cameras. The pressure housing contains a computer that includes a hard drive, DC-to- DC power converters, a post-amplifier board, a sound card, and a universal serial bus (USB) 4-port hub.

  8. Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
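
    A sketch of the stimulus-synthesis idea described above: rebuild a minimum-phase impulse response from an HRTF magnitude spectrum (real-cepstrum method) and impose the ITD as a pure delay on the contralateral ear. The "HRIR" below is synthetic, not one of the nonindividualized HRTFs used in the study.

    ```python
    # Sketch: minimum-phase reconstruction of an HRTF magnitude (real-cepstrum
    # method) plus the ITD applied as a pure delay. The "HRIR" is synthetic.
    import numpy as np

    def minimum_phase(magnitude):
        """Minimum-phase impulse response with the given FFT magnitude."""
        n = len(magnitude)
        cep = np.fft.ifft(np.log(np.maximum(magnitude, 1e-12))).real
        fold = np.zeros(n)
        fold[0] = cep[0]
        fold[1:n // 2] = 2 * cep[1:n // 2]
        fold[n // 2] = cep[n // 2]
        return np.fft.ifft(np.exp(np.fft.fft(fold))).real

    fs, n = 48000, 512
    hrir = np.random.randn(n) * np.exp(-np.linspace(0, 6, n))   # toy HRIR
    mag = np.abs(np.fft.fft(hrir))

    h_min = minimum_phase(mag)                  # magnitude-only (minimum-phase) filter
    itd_samples = int(round(0.0006 * fs))       # e.g. 600 us ITD for a lateral source

    noise = np.random.randn(fs)                 # white-noise stimulus
    left = np.convolve(noise, h_min)[:fs]
    right = np.concatenate([np.zeros(itd_samples), left[:-itd_samples]])  # delayed ear
    ```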

  9. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    PubMed Central

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290

  10. Underwater hearing and sound localization with and without an air interface.

    PubMed

    Shupak, Avi; Sharoni, Zohara; Yanir, Yoav; Keynan, Yoav; Alfie, Yechezkel; Halpern, Pinchas

    2005-01-01

    Underwater hearing acuity and sound localization are improved by the presence of an air interface around the pinnae and inside the external ear canals. Hearing threshold and the ability to localize sound sources are reduced underwater. The resonance frequency of the external ear is lowered when the external ear canal is filled with water, and the impedance-matching ability of the middle ear is significantly reduced due to elevation of the ambient pressure, the water-mass load on the tympanic membrane, and the addition of a fluid-air interface during submersion. Sound lateralization on land is largely explained by the mechanisms of interaural intensity differences and interaural temporal or phase differences. During submersion, these differences are largely lost due to the increase in underwater sound velocity and cancellation of the head's acoustic shadow effect because of the similarity between the impedance of the skull and the surrounding water. Ten scuba divers wearing a regular opaque face mask or an opaque ProEar 2000 (Safe Dive, Ltd., Hofit, Israel) mask that enables the presence of air at ambient pressure in and around the ear made a dive to a depth of 3 m in the open sea. Four underwater speakers arranged on the horizontal plane at 90-degree intervals and at a distance of 5 m from the diver were used for testing pure-tone hearing thresholds (PTHT), the reception threshold for the recorded sound of a rubber-boat engine, and sound localization. For sound localization, the sound of the rubber boat's engine was randomly delivered by one speaker at a time at 40 dB HL above the recorded sound of a rubber-boat engine, and the diver was asked to point to the sound source. The azimuth was measured by the diver's companion using a navigation board. Underwater PTHT with both masks were significantly higher for frequencies of 250 to 6000 Hz when compared with the thresholds on land (p <0.0001). No differences were found in the PTHT or the reception threshold for the recorded sound of a rubber-boat engine for dry or wet ear conditions. There was no difference in the sound localization error between the regular mask and the ProEar 2000 mask. The presence of air around the pinna and inside the external ear canal did not improve underwater hearing sensitivity or sound localization. These results support the argument that bone conduction plays the main role in underwater hearing.

  11. Experimental Simulation of Active Control With On-line System Identification on Sound Transmission Through an Elastic Plate

    NASA Technical Reports Server (NTRS)

    1998-01-01

    An adaptive control algorithm with on-line system identification capability has been developed. One of the great advantages of this scheme is that no additional system-identification mechanism, such as an uncorrelated random signal generator serving as the identification source, is required. A time-varying plate-cavity system is used to demonstrate the control performance of this algorithm. The time-varying system consists of a stainless-steel plate bolted over a rigid cavity opening whose effective depth is changed with respect to time by varying the water level inside the cavity. For a given externally located harmonic sound excitation, the system identification and the control are executed simultaneously to minimize the transmitted sound in the cavity. The control performance of the algorithm is examined for two cases. In the first case, all the water is drained and the external disturbance frequency is swept at 1 Hz/s. The result shows an excellent frequency tracking capability with internal cavity sound suppression of 40 dB. In the second case, the cavity is initially empty and is then filled to 3/20 full in 60 seconds while the external sound excitation is held at a fixed frequency. Hence, the cavity resonant frequency decreases and passes through the external sound excitation frequency. The algorithm shows 40 dB transmitted noise suppression without compromising the system identification tracking capability.
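
    For orientation, a conventional filtered-x LMS (FxLMS) feedforward loop against a tonal disturbance is sketched below with a fixed, assumed-known secondary-path model. The paper's actual contribution, running the system identification on line and simultaneously with control without an auxiliary random excitation, is not reproduced here.

    ```python
    # Sketch: a conventional filtered-x LMS (FxLMS) feedforward controller
    # cancelling a tonal disturbance through a known secondary path. The
    # secondary-path model is simply assumed known and fixed; the paper's
    # simultaneous on-line identification is not reproduced.
    import numpy as np

    fs, f0, n_steps = 2000, 50.0, 20000
    taps, mu = 32, 0.01                          # control filter length, step size

    s = np.array([0.0, 0.6, 0.3, 0.1])           # "true" secondary path (FIR)
    s_hat = s.copy()                             # assumed-known model of it

    w = np.zeros(taps)                           # adaptive control filter
    x_buf = np.zeros(taps)                       # reference history
    xf_buf = np.zeros(taps)                      # filtered-reference history
    y_buf = np.zeros(len(s))                     # control-output history
    xs_buf = np.zeros(len(s_hat))                # reference history for filtering

    err = np.zeros(n_steps)
    for n in range(n_steps):
        x = np.sin(2 * np.pi * f0 * n / fs)              # reference (excitation)
        d = 0.8 * np.sin(2 * np.pi * f0 * n / fs + 0.7)  # disturbance at sensor

        x_buf = np.roll(x_buf, 1)
        x_buf[0] = x
        y = w @ x_buf                                    # control signal
        y_buf = np.roll(y_buf, 1)
        y_buf[0] = y
        e = d + s @ y_buf                                # residual at error sensor
        err[n] = e

        xs_buf = np.roll(xs_buf, 1)
        xs_buf[0] = x
        xf = s_hat @ xs_buf                              # filtered reference
        xf_buf = np.roll(xf_buf, 1)
        xf_buf[0] = xf
        w -= mu * e * xf_buf                             # FxLMS update

    print("initial RMS error:", np.sqrt(np.mean(err[:500] ** 2)))
    print("final RMS error  :", np.sqrt(np.mean(err[-500:] ** 2)))
    ```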

  12. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    PubMed

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

    Tinnitus is the perception of sound in the ears or in the head when no external source is present. Sound therapy is one of the most effective techniques that have been proposed for tinnitus treatment. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have previously proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input. This suggests that the present framework is promising for modeling tinnitus generation and the effects of sound therapy.
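
    The building block named above, an integrate-and-fire neuron, can be illustrated with a minimal leaky integrate-and-fire simulation. The tonotopic network structure and the homeostatic plasticity rule of the paper are not reproduced, and the parameter values are generic textbook choices.

    ```python
    # Sketch: a minimal leaky integrate-and-fire neuron of the kind used as the
    # building block of such network models (no tonotopic wiring, no plasticity).
    import numpy as np

    dt, t_max = 0.1e-3, 0.5             # time step and duration, seconds
    tau_m, v_rest, v_thresh, v_reset = 20e-3, -70e-3, -54e-3, -70e-3
    r_m = 10e6                          # membrane resistance, ohms

    t = np.arange(0, t_max, dt)
    i_ext = np.where(t > 0.1, 2.0e-9, 0.0)   # external input current ("sound" drive)

    v = np.full(t.shape, v_rest)
    spikes = []
    for k in range(1, len(t)):
        dv = (-(v[k - 1] - v_rest) + r_m * i_ext[k - 1]) / tau_m
        v[k] = v[k - 1] + dv * dt
        if v[k] >= v_thresh:
            spikes.append(t[k])
            v[k] = v_reset

    print(f"{len(spikes)} spikes in {t_max * 1e3:.0f} ms")
    ```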

  13. Developmental vision determines the reference frame for the multisensory control of action.

    PubMed

    Röder, Brigitte; Kusmierek, Anna; Spence, Charles; Schicke, Tobias

    2007-03-13

    Both animal and human studies suggest that action goals are defined in external coordinates regardless of their sensory modality. The present study used an auditory-manual task to test whether the default use of such an external reference frame is innately determined or instead acquired during development because of the increasing dominance of vision over manual control. In Experiment I, congenitally blind, late blind, and age-matched sighted adults had to press a left or right response key depending on the bandwidth of pink noise bursts presented from either the left or right loudspeaker. Although the spatial location of the sounds was entirely task-irrelevant, all groups responded more efficiently with uncrossed hands when the sound was presented from the same side as the responding hand ("Simon effect"). This effect reversed with crossed hands only in the congenitally blind: They responded faster with the hand that was located contralateral to the sound source. In Experiment II, the instruction to the participants was changed: They now had to respond with the hand located next to the sound source. In contrast to Experiment I ("Simon-task"), this task required an explicit matching of the sound's location with the position of the responding hand. In Experiment II, the congenitally blind participants showed a significantly larger crossing deficit than both the sighted and late blind adults. This pattern of results implies that developmental vision induces the default use of an external coordinate frame for multisensory action control; this facilitates not only visual but also auditory-manual control.

  14. Investigation of genesis of gallop sounds in dogs by quantitative phonocardiography and digital frequency analysis.

    PubMed

    Aubert, A E; Denys, B G; Meno, F; Reddy, P S

    1985-05-01

    Several investigators have noted external gallop sounds to be of higher amplitude than their corresponding internal sounds (S3 and S4). In this study we hoped to determine if S3 and S4 are transmitted in the same manner as S1. In 11 closed-chest dogs, external (apical) and left ventricular pressures and sounds were recorded simultaneously with transducers with identical sensitivity and frequency responses. Volume and pressure overload and positive and negative inotropic drugs were used to generate gallop sounds. Recordings were made in the control state and after the various interventions. S3 and S4 were recorded in 17 experiments each. The amplitude of the external S1 was uniformly higher than that of internal S1 and internal gallop sounds were inconspicuous. With use of Fourier transforms, the gain function was determined by comparing internal to external S1. By inverse transform, the amplitude of the internal gallop sounds was predicted from external sounds. The internal sounds of significant amplitude were predicted in many instances, but the actual recordings showed no conspicuous sounds. The absence of internal gallop sounds of expected amplitude as calculated from the external gallop sounds and the gain function derived from the comparison of internal and external S1 make it very unlikely that external gallop sounds are derived from internal sounds.
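
    The spectral procedure described above can be sketched numerically: estimate the transmission gain as the ratio of the internal and external S1 spectra, then predict the internal gallop sound by applying that gain to the external gallop spectrum and inverse transforming. The signals below are synthetic placeholders, not phonocardiograms.

    ```python
    # Sketch of the spectral procedure: gain function from the internal/external
    # S1 spectra, then prediction of the internal gallop sound from the external
    # one by inverse transform. Signals are synthetic.
    import numpy as np

    n = 2048
    rng = np.random.default_rng(3)
    s1_internal = rng.standard_normal(n)                    # placeholder recordings
    s1_external = np.convolve(s1_internal, [0.5, 0.3, 0.1], mode="same")
    s3_external = rng.standard_normal(n)                    # external gallop sound

    eps = 1e-8                                              # avoid division by ~0
    gain = np.fft.rfft(s1_internal) / (np.fft.rfft(s1_external) + eps)
    s3_internal_pred = np.fft.irfft(gain * np.fft.rfft(s3_external), n=n)

    print("predicted internal S3 peak-to-peak:",
          s3_internal_pred.max() - s3_internal_pred.min())
    ```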

  15. NeuroVR 1.5 - a free virtual reality platform for the assessment and treatment in clinical psychology and neuroscience.

    PubMed

    Riva, Giuseppe; Carelli, Laura; Gaggioli, Andrea; Gorini, Alessandra; Vigna, Cinzia; Corsi, Riccardo; Faletti, Gianluca; Vezzadini, Luca

    2009-01-01

    At MMVR 2007 we presented NeuroVR (http://www.neurovr.org), a free virtual reality platform based on open-source software. The software allows non-expert users to adapt the content of 14 pre-designed virtual environments to the specific needs of the clinical or experimental setting. Following the feedback from the 700 users who downloaded the first version, we developed a new version - NeuroVR 1.5 - that improves the therapist's ability to enhance the patient's feeling of familiarity and intimacy with the virtual scene by using external sounds, photos, or videos. Specifically, the new version now includes full sound support and the ability to trigger external sounds and videos using the keyboard. The outcomes of different trials conducted using NeuroVR will be presented and discussed.

  16. External Threat Risk Assessment Algorithm (ExTRAA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, Troy C.

    Two risk assessment algorithms and philosophies have been augmented and combined to form a new algorithm, the External Threat Risk Assessment Algorithm (ExTRAA), that allows for effective and statistically sound analysis of external threat sources in relation to individual attack methods. In addition to the attack method use probability and the attack method employment consequence, the concept of defining threat sources is added to the risk assessment process. Sample data is tabulated and depicted in radar plots and bar graphs for algorithm demonstration purposes. The largest success of ExTRAA is its ability to visualize the kind of risk posed in a given situation using the radar plot method.
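
    The record gives only the ingredients (attack-method use probability, employment consequence, threat sources, radar plots), so the sketch below shows one plausible aggregation and display, not ExTRAA itself; all categories and numbers are invented.

    ```python
    # Sketch: one plausible way to combine attack-method use probability and
    # consequence per threat source, shown as a radar plot. The aggregation rule,
    # categories, and numbers are illustrative only, not ExTRAA.
    import numpy as np
    import matplotlib.pyplot as plt

    methods = ["cyber", "insider", "forced entry", "standoff"]
    p_use = {"Source A": [0.6, 0.2, 0.3, 0.1], "Source B": [0.2, 0.5, 0.4, 0.3]}
    consequence = [0.7, 0.9, 0.6, 0.8]          # normalized consequence per method

    angles = np.linspace(0, 2 * np.pi, len(methods), endpoint=False)
    ax = plt.subplot(projection="polar")
    for name, probs in p_use.items():
        risk = np.array(probs) * np.array(consequence)   # risk = P(use) x consequence
        ax.plot(np.append(angles, angles[0]), np.append(risk, risk[0]), label=name)
    ax.set_xticks(angles)
    ax.set_xticklabels(methods)
    ax.legend(loc="upper right")
    plt.show()
    ```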

  17. Cerebellar contribution to the prediction of self-initiated sounds.

    PubMed

    Knolle, Franziska; Schröger, Erich; Kotz, Sonja A

    2013-10-01

    In everyday life we frequently make the fundamental distinction between sensory input resulting from our own actions and sensory input that is externally-produced. It has been speculated that making this distinction involves the use of an internal forward-model, which enables the brain to adjust its response to self-produced sensory input. In the auditory domain, this idea has been supported by event-related potential and evoked-magnetic field studies revealing that self-initiated sounds elicit a suppressed N100/M100 brain response compared to externally-produced sounds. Moreover, a recent study reveals that patients with cerebellar lesions do not show a significant N100-suppression effect. This result supports the theory that the cerebellum is essential for generating internal forward predictions. However, all except one study compared self-initiated and externally-produced auditory stimuli in separate conditions. Such a setup prevents an unambiguous interpretation of the N100-suppression effect when distinguishing self- and externally-produced sensory stimuli: the N100-suppression can also be explained by differences in the allocation of attention in different conditions. In the current electroencephalography (EEG)-study we investigated the N100-suppression effect in an altered design comparing (i) self-initiated sounds to externally-produced sounds that occurred intermixed with these self-initiated sounds (i.e., both sound types occurred in the same condition) or (ii) self-initiated sounds to externally-produced sounds that occurred in separate conditions. Results reveal that the cerebellum generates selective predictions in response to self-initiated sounds independent of condition type: cerebellar patients, in contrast to healthy controls, do not display an N100-suppression effect in response to self-initiated sounds when intermixed with externally-produced sounds. Furthermore, the effect is not influenced by the temporal proximity of externally-produced sounds to self-produced sounds. Controls and patients showed a P200-reduction in response to self-initiated sounds. This suggests the existence of an additional and probably more conscious mechanism for identifying self-generated sounds that does not functionally depend on the cerebellum. Copyright © 2012 Elsevier Srl. All rights reserved.

  18. National Oceanic and Atmospheric Administration's Cetacean and Sound Mapping Effort: Continuing Forward with an Integrated Ocean Noise Strategy.

    PubMed

    Harrison, Jolie; Ferguson, Megan; Gedamke, Jason; Hatch, Leila; Southall, Brandon; Van Parijs, Sofie

    2016-01-01

    To help manage chronic and cumulative impacts of human activities on marine mammals, the National Oceanic and Atmospheric Administration (NOAA) convened two working groups, the Underwater Sound Field Mapping Working Group (SoundMap) and the Cetacean Density and Distribution Mapping Working Group (CetMap), with overarching effort of both groups referred to as CetSound, which (1) mapped the predicted contribution of human sound sources to ocean noise and (2) provided region/time/species-specific cetacean density and distribution maps. Mapping products were presented at a symposium where future priorities were identified, including institutionalization/integration of the CetSound effort within NOAA-wide goals and programs, creation of forums and mechanisms for external input and funding, and expanded outreach/education. NOAA is subsequently developing an ocean noise strategy to articulate noise conservation goals and further identify science and management actions needed to support them.

  19. Directionality of nose-emitted echolocation calls from bats without a nose leaf (Plecotus auritus).

    PubMed

    Jakobsen, Lasse; Hallam, John; Moss, Cynthia F; Hedenström, Anders

    2018-02-13

    All echolocating bats and whales measured to date emit a directional bio-sonar beam that affords them a number of advantages over an omni-directional beam, i.e. reduced clutter, increased source level and inherent directional information. In this study, we investigated the importance of directional sound emission for navigation through echolocation by measuring the sonar beam of brown long-eared bats, Plecotus auritus. Plecotus auritus emits sound through the nostrils but has no external appendages to readily facilitate a directional sound emission as found in most nose emitters. The study shows that P. auritus, despite lacking an external focusing apparatus, emits a directional echolocation beam (directivity index = 13 dB) and that the beam is more directional vertically (-6 dB angle at 22 deg) than horizontally (-6 dB angle at 35 deg). Using a simple numerical model, we found that the recorded emission pattern is achievable if P. auritus emits sound through the nostrils as well as the mouth. The study thus supports the hypothesis that a directional echolocation beam is important for perception through echolocation and we propose that animals with similarly non-directional emitter characteristics may facilitate a directional sound emission by emitting sound through both the nostrils and the mouth. © 2018. Published by The Company of Biologists Ltd.
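
    A toy far-field version of the numerical argument above: two coherent, in-phase apertures (standing in for the nostrils plus the mouth) stacked along the vertical axis narrow the beam in the vertical plane relative to a single aperture. The separation and frequency below are invented, not the study's measurements.

    ```python
    # Toy far-field model: two coherent in-phase point apertures stacked
    # vertically versus a single aperture. Geometry and frequency are invented.
    import numpy as np

    c = 343.0
    f = 35e3                          # illustrative echolocation frequency, Hz
    k = 2 * np.pi * f / c
    d = 0.010                         # vertical nostril-mouth separation, m (invented)

    phi = np.radians(np.linspace(0, 90, 9001))                 # angle off the forward axis
    p_two = np.abs(1 + np.exp(1j * k * d * np.sin(phi))) / 2   # normalized two-source pattern
    p_one = np.ones_like(phi)                                  # single point source

    def minus6db_angle(pattern, angles):
        level_db = 20 * np.log10(np.maximum(pattern, 1e-12))
        idx = np.argmax(level_db <= -6)
        return np.degrees(angles[idx]) if level_db[idx] <= -6 else None

    print("-6 dB angle, two apertures:", minus6db_angle(p_two, phi), "deg")
    print("-6 dB angle, one aperture :", minus6db_angle(p_one, phi))   # never reached
    ```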

  20. Active localization of virtual sounds

    NASA Technical Reports Server (NTRS)

    Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.

    1991-01-01

    We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
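
    A rough sketch of the distance cues listed above: first-power (1/r) amplitude scaling, extra high-frequency attenuation with distance as a crude stand-in for atmospheric absorption, and a distance-dependent direct-to-reverberant mix. The filter rule and the synthetic "reverb" are invented for illustration and do not describe the display's actual analog hardware.

    ```python
    # Sketch of distance cues: 1/r gain, distance-dependent low-pass, and a
    # direct-to-reverberant mix. All rules and parameters are invented.
    import numpy as np
    from scipy.signal import butter, lfilter, fftconvolve

    fs = 44100
    src = np.random.randn(fs)                    # 1 s of source signal

    def at_distance(signal, r, fs=fs, r_ref=1.0):
        g = r_ref / max(r, r_ref)                # first-power (1/r) pressure law
        cutoff = max(2000.0, 16000.0 / r)        # farther -> duller (invented rule)
        b, a = butter(2, cutoff / (fs / 2), btype="low")
        direct = g * lfilter(b, a, signal)
        reverb_ir = np.random.randn(fs // 2) * np.exp(-np.linspace(0, 7, fs // 2)) * 0.05
        reverb = fftconvolve(signal, reverb_ir)[:len(signal)]
        wet = min(0.8, 0.1 * r)                  # more distance -> more reverberant
        return (1 - wet) * direct + wet * reverb

    near = at_distance(src, 1.0)
    far = at_distance(src, 8.0)
    ```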

  1. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways form a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve) that is not limited to simple information transmission but performs a genuine integration of the sound stimulus at each level, analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically according to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell whose characteristic frequency matches the stimulus). Spatial localization of the sound source is made possible by binaural hearing, by commissural pathways at each level of the auditory system, and by integration of the phase shift and the intensity difference between the signals arriving at the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exert control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through attention given to the signal.

  2. Complete de-Dopplerization and acoustic holography for external noise of a high-speed train.

    PubMed

    Yang, Diange; Wen, Junjie; Miao, Feng; Wang, Ziteng; Gu, Xiaoan; Lian, Xiaomin

    2016-09-01

    Identification and measurement of moving sound sources are the bases for vehicle noise control. Acoustic holography has been applied in successfully identifying the moving sound source since the 1990s. However, due to the high demand for the accuracy of holographic data, currently the maximum velocity achieved by acoustic holography is just above 100 km/h. The objective of this study was to establish a method based on the complete Morse acoustic model to restore the measured signal in high-speed situations, and to propose a far-field acoustic holography method applicable for high-speed moving sound sources. Simulated comparisons of the proposed far-field acoustic holography with complete Morse model, the acoustic holography with simplified Morse model and traditional delay-and-sum beamforming were conducted. Experiments with a high-speed train running at the speed of 278 km/h validated the proposed far-field acoustic holography. This study extended the applications of acoustic holography to high-speed situations and established the basis for quantitative measurements of far-field acoustic holography.
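
    The core de-Dopplerization step can be sketched for the simplest case of a source moving at constant speed past a fixed microphone: each emission time tau maps to a reception time t = tau + r(tau)/c, so resampling the recording on the emission-time grid removes the Doppler shift. The geometry and the "recording" below are synthetic, and the complete Morse amplitude correction used in the paper is omitted.

    ```python
    # Sketch of de-Dopplerization for a constant-speed pass-by: map emission
    # times to reception times and resample. Geometry, speed, and the recording
    # are synthetic; the Morse amplitude correction is not included.
    import numpy as np

    fs, c = 25600, 343.0
    v = 278 / 3.6                    # pass-by speed, m/s (278 km/h)
    y0 = 7.5                         # closest-approach distance, m (invented)
    t_pass = 2.0                     # closest approach at t = 2 s

    t = np.arange(0, 4.0, 1 / fs)    # reception-time axis of the recording
    tau_grid = np.arange(0, 4.0, 1 / fs)                   # emission times
    r = np.hypot(v * (tau_grid - t_pass), y0)              # source-mic distance
    t_arrive = tau_grid + r / c                            # when each sample arrives

    # Synthetic "recording": a 1 kHz tone as heard at the mic (Doppler shifted).
    recording = np.interp(t, t_arrive, np.sin(2 * np.pi * 1000.0 * tau_grid))

    # De-Dopplerization: read the recording back at the arrival time of each
    # emission instant, recovering a (nearly) constant-frequency tone. Samples
    # falling outside the recorded interval are simply held at the edge values.
    dedoppler = np.interp(t_arrive, t, recording)
    ```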

  3. Noise Source Visualization Using a Digital Voice Recorder and Low-Cost Sensors

    PubMed Central

    Cho, Yong Thung

    2018-01-01

    Accurate sound visualization of noise sources is required for optimal noise control. Typically, noise measurement systems require microphones, an analog-digital converter, cables, a data acquisition system, etc., which may not be affordable for potential users. Also, many such systems are not highly portable and may not be convenient for travel. Handheld personal electronic devices such as smartphones and digital voice recorders with relatively lower costs and higher performance have become widely available recently. Even though such devices are highly portable, directly implementing them for noise measurement may lead to erroneous results since such equipment was originally designed for voice recording. In this study, external microphones were connected to a digital voice recorder to conduct measurements and the input received was processed for noise visualization. In this way, a low cost, compact sound visualization system was designed and introduced to visualize two actual noise sources for verification with different characteristics: an enclosed loud speaker and a small air compressor. Reasonable accuracy of noise visualization for these two sources was shown over a relatively wide frequency range. This very affordable and compact sound visualization system can be used for many actual noise visualization applications in addition to educational purposes. PMID:29614038

  4. How to precisely measure the volume velocity transfer function of physical vocal tract models by external excitation

    PubMed Central

    Mainka, Alexander; Kürbis, Steffen; Birkholz, Peter

    2018-01-01

    Recently, 3D printing has been increasingly used to create physical models of the vocal tract with geometries obtained from magnetic resonance imaging. These printed models allow measuring the vocal tract transfer function, which is not reliably possible in vivo for the vocal tract of living humans. The transfer functions enable the detailed examination of the acoustic effects of specific articulatory strategies in speaking and singing, and the validation of acoustic plane-wave models for realistic vocal tract geometries in articulatory speech synthesis. To measure the acoustic transfer function of 3D-printed models, two techniques have been described: (1) excitation of the models with a broadband sound source at the glottis and measurement of the sound pressure radiated from the lips, and (2) excitation of the models with an external source in front of the lips and measurement of the sound pressure inside the models at the glottal end. The former method is more frequently used and more intuitive due to its similarity to speech production. However, the latter method avoids the intricate problem of constructing a suitable broadband glottal source and is therefore more effective. It has been shown to yield a transfer function similar, but not exactly equal to the volume velocity transfer function between the glottis and the lips, which is usually used to characterize vocal tract acoustics. Here, we revisit this method and show both, theoretically and experimentally, how it can be extended to yield the precise volume velocity transfer function of the vocal tract. PMID:29543829
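
    The external-excitation measurement itself amounts to estimating a frequency response between a reference signal and the microphone at the glottal end; a generic Welch/H1 estimate is sketched below with a placeholder "vocal tract" path. The paper's specific correction that converts this measurement into the exact volume-velocity transfer function is not reproduced.

    ```python
    # Sketch: generic H1 frequency-response estimate between an excitation
    # reference and the pressure measured at the glottal end of a printed model.
    # The "vocal tract" is a placeholder FIR path, not a real measurement.
    import numpy as np
    from scipy.signal import welch, csd

    fs = 44100
    rng = np.random.default_rng(7)
    reference = rng.standard_normal(10 * fs)               # broadband excitation
    response = np.convolve(reference, [0.2, 0.5, -0.3, 0.1], mode="same")
    response += 0.01 * rng.standard_normal(response.size)  # measurement noise

    f, p_xx = welch(reference, fs=fs, nperseg=4096)
    _, p_xy = csd(reference, response, fs=fs, nperseg=4096)
    H1 = p_xy / p_xx                                       # H1 estimator
    magnitude_db = 20 * np.log10(np.abs(H1))
    ```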

  5. Corollary discharge provides the sensory content of inner speech.

    PubMed

    Scott, Mark

    2013-09-01

    Inner speech is one of the most common, but least investigated, mental activities humans perform. It is an internal copy of one's external voice and so is similar to a well-established component of motor control: corollary discharge. Corollary discharge is a prediction of the sound of one's voice generated by the motor system. This prediction is normally used to filter self-caused sounds from perception, which segregates them from externally caused sounds and prevents the sensory confusion that would otherwise result. The similarity between inner speech and corollary discharge motivates the theory, tested here, that corollary discharge provides the sensory content of inner speech. The results reported here show that inner speech attenuates the impact of external sounds. This attenuation was measured using a context effect (an influence of contextual speech sounds on the perception of subsequent speech sounds), which weakens in the presence of speech imagery that matches the context sound. Results from a control experiment demonstrated this weakening in external speech as well. Such sensory attenuation is a hallmark of corollary discharge.

  6. Postflight analysis of the single-axis acoustic system on SPAR VI and recommendations for future flights

    NASA Technical Reports Server (NTRS)

    Naumann, R. J.; Oran, W. A.; Whymark, R. R.; Rey, C.

    1981-01-01

    The single axis acoustic levitator that was flown on SPAR VI malfunctioned. The results of a series of tests, analyses, and investigation of hypotheses that were undertaken to determine the probable cause of failure are presented, together with recommendations for future flights of the apparatus. The most probable causes of the SPAR VI failure were lower than expected sound intensity due to mechanical degradation of the sound source, and an unexpected external force that caused the experiment sample to move radially and eventually be lost from the acoustic energy well.

  7. Noise reduction tests of large-scale-model externally blown flap using trailing-edge blowing and partial flap slot covering. [jet aircraft noise reduction

    NASA Technical Reports Server (NTRS)

    Mckinzie, D. J., Jr.; Burns, R. J.; Wagner, J. M.

    1976-01-01

    Noise data were obtained with a large-scale cold-flow model of a two-flap, under-the-wing, externally blown flap proposed for use on future STOL aircraft. The noise suppression effectiveness of locating a slot conical nozzle at the trailing edge of the second flap and of applying partial covers to the slots between the wing and flaps was evaluated. Overall-sound-pressure-level reductions of 5 dB occurred below the wing in the flyover plane. Existing models of several noise sources were applied to the test results. The resulting analytical relation compares favorably with the test data. The noise source mechanisms were analyzed and are discussed.

  8. Helicopter external noise prediction and reduction

    NASA Astrophysics Data System (ADS)

    Lewy, Serge

    Helicopter external noise is a major challenge for the manufacturers, both in the civil domain and in the military domain. The strongest acoustic sources are due to the main rotor. Two flight conditions are analyzed in detail because radiated sound is then very loud and very impulsive: (1) high-speed flight, with large thickness and shear terms on the advancing blade side; and (2) descent flight, with blade-vortex interaction for certain rates of descent. In both cases, computational results were obtained and tests on new blade designs have been conducted in wind tunnels. These studies prove that large noise reduction can be achieved. It is shown in conclusion, however, that the other acoustic sources (tail rotor, turboshaft engines) must not be neglected to define a quiet helicopter.

  9. Perception of Animacy from the Motion of a Single Sound Object.

    PubMed

    Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel

    2015-02-01

    Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.

  10. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  11. Basic experimental study of the coupling between flow instabilities and incident sound

    NASA Astrophysics Data System (ADS)

    Ahuja, K. K.

    1984-03-01

    Whether a solid trailing edge is required to produce efficient coupling between sound and instability waves in a shear layer was investigated. The differences found in the literature on the theoretical notions about receptivity, and a need to resolve them by way of well-planned experiments are discussed. Instability waves in the shear layer of a subsonic jet, excited by a point sound source located external to the jet, were first visualized using an ensemble averaging technique. Various means were adopted to shield the sound reaching the nozzle lip. It was found that the low frequency sound couples more efficiently at distances downstream of the nozzle. To substantiate the findings further, a supersonic screeching jet was tested such that it passed through a small opening in a baffle placed parallel to the exit plane. The measured feedback or screech frequencies and also the excited flow disturbances changed drastically on traversing the baffle axially thus providing a strong indication that a trailing edge is not necessary for efficient coupling between sound and flow.

  12. Basic experimental study of the coupling between flow instabilities and incident sound

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.

    1984-01-01

    Whether a solid trailing edge is required to produce efficient coupling between sound and instability waves in a shear layer was investigated. The differences found in the literature on the theoretical notions about receptivity, and a need to resolve them by way of well-planned experiments are discussed. Instability waves in the shear layer of a subsonic jet, excited by a point sound source located external to the jet, were first visualized using an ensemble averaging technique. Various means were adopted to shield the sound reaching the nozzle lip. It was found that the low frequency sound couples more efficiently at distances downstream of the nozzle. To substantiate the findings further, a supersonic screeching jet was tested such that it passed through a small opening in a baffle placed parallel to the exit plane. The measured feedback or screech frequencies and also the excited flow disturbances changed drastically on traversing the baffle axially thus providing a strong indication that a trailing edge is not necessary for efficient coupling between sound and flow.

  13. Control of boundary layer transition location and plate vibration in the presence of an external acoustic field

    NASA Technical Reports Server (NTRS)

    Maestrello, L.; Grosveld, F. W.

    1991-01-01

    The experiment is aimed at controlling the boundary layer transition location and the plate vibration when excited by a flow and an upstream sound source. Sound has been found to affect the flow at the leading edge and the response of a flexible plate in a boundary layer. Because the sound induces early transition, the panel vibration is acoustically coupled to the turbulent boundary layer by the upstream radiation. Localized surface heating at the leading edge delays the transition location downstream of the flexible plate. The response of the plate excited by a turbulent boundary layer (without sound) shows that the plate is forced to vibrate at different frequencies and with different amplitudes as the flow velocity changes indicating that the plate is driven by the convective waves of the boundary layer. The acoustic disturbances induced by the upstream sound dominate the response of the plate when the boundary layer is either turbulent or laminar. Active vibration control was used to reduce the sound induced displacement amplitude of the plate.

  14. Diversity of acoustic tracheal system and its role for directional hearing in crickets

    PubMed Central

    2013-01-01

    Background Sound localization in small insects can be a challenging task due to physical constraints in deriving sufficiently large interaural intensity differences (IIDs) between both ears. In crickets, sound source localization is achieved by a complex type of pressure difference receiver consisting of four potential sound inputs. Sound acts on the external side of two tympana but additionally reaches the internal tympanal surface via two external sound entrances. Conduction of internal sound is realized by the anatomical arrangement of connecting trachea. A key structure is a trachea coupling both ears which is characterized by an enlarged part in its midline (i.e., the acoustic vesicle) accompanied with a thin membrane (septum). This facilitates directional sensitivity despite an unfavorable relationship between wavelength of sound and body size. Here we studied the morphological differences of the acoustic tracheal system in 40 cricket species (Gryllidae, Mogoplistidae) and species of outgroup taxa (Gryllotalpidae, Rhaphidophoridae, Gryllacrididae) of the suborder Ensifera comprising hearing and non hearing species. Results We found a surprisingly high variation of acoustic tracheal systems and almost all investigated species using intraspecific acoustic communication were characterized by an acoustic vesicle associated with a medial septum. The relative size of the acoustic vesicle - a structure most crucial for deriving high IIDs - implies an important role for sound localization. Most remarkable in this respect was the size difference of the acoustic vesicle between species; those with a more unfavorable ratio of body size to sound wavelength tend to exhibit a larger acoustic vesicle. On the other hand, secondary loss of acoustic signaling was nearly exclusively associated with the absence of both acoustic vesicle and septum. Conclusion The high diversity of acoustic tracheal morphology observed between species might reflect different steps in the evolution of the pressure difference receiver; with a precursor structure already present in ancestral non-hearing species. In addition, morphological transitions of the acoustic vesicle suggest a possible adaptive role for the generation of binaural directional cues. PMID:24131512

  15. Sound transmission through a double-panel construction lined with poroelastic material in the presence of mean flow

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Bhaskar, Atul; Zhang, Xin

    2013-08-01

    This paper investigates the sound transmission characteristics through a system of double-panel lined with poroelastic material in the core. The panels are surrounded by external and internal fluid media where a uniform external mean flow exists on one side. Biot's theory is used to model the porous material. Three types of constructions—bonded-bonded, bonded-unbonded and unbonded-unbonded—are considered. The effect of Mach number of the external flow on the sound transmission over a wide frequency range in a diffuse sound field is examined. External mean flow is shown to give a modest increase in transmission loss at low frequency, but a significant increase at high frequency. It is brought out that calculations based on static air on the incidence side provide a conservative estimate of sound transmission through the sandwich structure. The acoustic performance of the sandwich panel for different configurations is presented. The effect of curvature of the panel is also brought out by using shallow shell theory.

  16. Hearing Sensation Levels of Emitted Biosonar Clicks in an Echolocating Atlantic Bottlenose Dolphin

    PubMed Central

    Li, Songhai; Nachtigall, Paul E.; Breese, Marlee; Supin, Alexander Ya.

    2012-01-01

    Emitted biosonar clicks and auditory evoked potential (AEP) responses triggered by the clicks were synchronously recorded during echolocation in an Atlantic bottlenose dolphin (Tursiops truncatus) trained to wear suction-cup EEG electrodes and to detect targets by echolocation. Three targets with target strengths of −34, −28, and −22 dB were used at distances of 2 to 6.5 m for each target. The AEP responses were sorted according to the corresponding emitted click source levels in 5-dB bins and averaged within each bin to extract biosonar click-related AEPs from noise. The AEP amplitudes were measured peak-to-peak and plotted as a function of click source levels for each target type, distance, and target-present or target-absent condition. Hearing sensation levels of the biosonar clicks were evaluated by comparing the functions of the biosonar click-related AEP amplitude-versus-click source level to a function of external (in free field) click-related AEP amplitude-versus-click sound pressure level. The results indicated that the dolphin's hearing sensation levels to her own biosonar clicks were equal to that of external clicks with sound pressure levels 16 to 36 dB lower than the biosonar click source levels, varying with target type, distance, and condition. These data may be assumed to indicate that the bottlenose dolphin possesses effective protection mechanisms to isolate the self-produced intense biosonar beam from the animal's ears during echolocation. PMID:22238654

  17. Hearing sensation levels of emitted biosonar clicks in an echolocating Atlantic bottlenose dolphin.

    PubMed

    Li, Songhai; Nachtigall, Paul E; Breese, Marlee; Supin, Alexander Ya

    2012-01-01

    Emitted biosonar clicks and auditory evoked potential (AEP) responses triggered by the clicks were synchronously recorded during echolocation in an Atlantic bottlenose dolphin (Tursiops truncatus) trained to wear suction-cup EEG electrodes and to detect targets by echolocation. Three targets with target strengths of -34, -28, and -22 dB were used at distances of 2 to 6.5 m for each target. The AEP responses were sorted according to the corresponding emitted click source levels in 5-dB bins and averaged within each bin to extract biosonar click-related AEPs from noise. The AEP amplitudes were measured peak-to-peak and plotted as a function of click source levels for each target type, distance, and target-present or target-absent condition. Hearing sensation levels of the biosonar clicks were evaluated by comparing the functions of the biosonar click-related AEP amplitude-versus-click source level to a function of external (in free field) click-related AEP amplitude-versus-click sound pressure level. The results indicated that the dolphin's hearing sensation levels to her own biosonar clicks were equal to that of external clicks with sound pressure levels 16 to 36 dB lower than the biosonar click source levels, varying with target type, distance, and condition. These data may be assumed to indicate that the bottlenose dolphin possesses effective protection mechanisms to isolate the self-produced intense biosonar beam from the animal's ears during echolocation.
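
    The sorting-and-averaging step described in the two records above can be sketched as follows: group the AEP epochs by the source level of the click that evoked them into 5-dB bins and average within each bin to pull the responses out of the noise. The epochs and source levels below are synthetic.

    ```python
    # Sketch: bin AEP epochs by click source level (5-dB bins) and average within
    # each bin. Epochs and source levels are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(11)
    n_clicks, n_samples = 4000, 256
    source_level = rng.uniform(190, 225, n_clicks)            # click source levels, dB
    aeps = 0.02 * rng.standard_normal((n_clicks, n_samples))  # noisy single epochs

    bins = np.arange(190, 226, 5)                             # 5-dB bin edges
    bin_idx = np.digitize(source_level, bins) - 1

    for b in range(len(bins) - 1):
        sel = bin_idx == b
        if sel.any():
            mean_epoch = aeps[sel].mean(axis=0)
            pp = mean_epoch.max() - mean_epoch.min()
            print(f"{bins[b]}-{bins[b + 1]} dB: n={sel.sum()}, peak-to-peak={pp:.4f}")
    ```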

  18. Finite element modelling of sound transmission from outer to inner ear.

    PubMed

    Areias, Bruno; Santos, Carla; Natal Jorge, Renato M; Gentil, Fernanda; Parente, Marco Pl

    2016-11-01

    The ear is one of the most complex organs in the human body. Sound is a sequence of pressure waves, which propagates through a compressible media such as air. The pinna concentrates the sound waves into the external auditory meatus. In this canal, the sound is conducted to the tympanic membrane. The tympanic membrane transforms the pressure variations into mechanical displacements, which are then transmitted to the ossicles. The vibration of the stapes footplate creates pressure waves in the fluid inside the cochlea; these pressure waves stimulate the hair cells, generating electrical signals which are sent to the brain through the cochlear nerve, where they are decoded. In this work, a three-dimensional finite element model of the human ear is developed. The model incorporates the tympanic membrane, ossicular bones, part of temporal bone (external auditory meatus and tympanic cavity), middle ear ligaments and tendons, cochlear fluid, skin, ear cartilage, jaw and the air in external auditory meatus and tympanic cavity. Using the finite element method, the magnitude and the phase angle of the umbo and stapes footplate displacement are calculated. Two slightly different models are used: one model takes into consideration the presence of air in the external auditory meatus while the other does not. The middle ear sound transfer function is determined for a stimulus of 60 dB SPL, applied to the outer surface of the air in the external auditory meatus. The obtained results are compared with previously published data in the literature. This study highlights the importance of external auditory meatus in the sound transmission. The pressure gain is calculated for the external auditory meatus.

  19. Attention to memory: orienting attention to sound object representations.

    PubMed

    Backer, Kristina C; Alain, Claude

    2014-01-01

    Despite a growing acceptance that attention and memory interact, and that attention can be focused on an active internal mental representation (i.e., reflective attention), there has been a paucity of work focusing on reflective attention to 'sound objects' (i.e., mental representations of actual sound sources in the environment). Further research on the dynamic interactions between auditory attention and memory, as well as its degree of neuroplasticity, is important for understanding how sound objects are represented, maintained, and accessed in the brain. This knowledge can then guide the development of training programs to help individuals with attention and memory problems. This review article focuses on attention to memory with an emphasis on behavioral and neuroimaging studies that have begun to explore the mechanisms that mediate reflective attentional orienting in vision and more recently, in audition. Reflective attention refers to situations in which attention is oriented toward internal representations rather than focused on external stimuli. We propose four general principles underlying attention to short-term memory. Furthermore, we suggest that mechanisms involved in orienting attention to visual object representations may also apply for orienting attention to sound object representations.

  20. On noninvasive assessment of acoustic fields acting on the fetus

    NASA Astrophysics Data System (ADS)

    Antonets, V. A.; Kazakov, V. V.

    2014-05-01

    The aim of this study is to verify a noninvasive technique for assessing the characteristics of audible-range acoustic fields arising in the uterus under the action of the maternal voice, external sounds, and vibrations. The problem is important in view of actively developed methods for delivering external sounds to the uterus (music, maternal voice recordings, sounds from outside the mother's body, etc.) that are supposed to support the psychological and cognitive development of the fetus at the prenatal stage. However, the parameters of these acoustic signals have been neither measured nor standardized, which may be dangerous for the fetus and hinders a realistic assessment of their impact on fetal development. The authors show that at frequencies below 1 kHz, acoustic pressure in the uterus may be measured noninvasively using a hydrophone placed in a soft liquid-filled capsule. It was found that the acoustic field at frequencies up to 1 kHz arising in the uterus under the action of an external sound field has amplitude-frequency parameters close to those of the external field; i.e., the external field penetrates the uterus almost unimpeded.

  1. Space Shuttle Crawler Transporter Sound Attenuation Study

    NASA Technical Reports Server (NTRS)

    Margasahayam, Ravi N.; MacDonald, Rod; Faszer, Clifford

    2004-01-01

    The crawler transporter (CT) is the world's largest known tracked vehicle, weighing 6 million pounds with a length of 131 feet and a width of 113 feet. The Kennedy Space Center (KSC) has two CTs that were designed and built for the Apollo program in the 1960s and have been maintained and retrofitted for use in the Space Shuttle program. As a key element of the Space Shuttle ground systems, the crawler transports the entire 12-million-pound stack comprising the orbiter, the mobile launch platform (MLP), the external tank (ET), and the solid rocket boosters (SRB) from the Vehicle Assembly Building (VAB) to the launch pad. This rollout, a 3.5- to 5.0-mile journey at a top speed of 0.9 miles per hour, requires over 8 hours to reach either Launch Complex 39A or 39B, and is only a prelude to the sound and fury of the Space Shuttle launch, which reaches orbit in less than 10 minutes at orbital velocities of about Mach 24. This paper summarizes preliminary results from the Crawler Transporter Sound Attenuation Study, encompassing testing and engineering analysis of the significant sound sources: the full frequency spectrum and intensity of the various noise sources were measured and recorded, and the vibration conditions were analyzed. Additionally, ventilation criteria and operational procedures were considered in order to provide a comprehensive noise suppression design for implementation. To date, the sound attenuation study on Crawler 2 has shown significant noise reductions ranging from 5 to 24 dBA.

  2. External mean flow influence on sound transmission through finite clamped double-wall sandwich panels

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Catalan, Jean-Cédric

    2017-09-01

    This paper studies the influence of an external mean flow on sound transmission through finite clamped double-wall sandwich panels lined with poroelastic materials. Biot's theory is employed to describe wave propagation in the poroelastic materials, and various configurations of coupling the poroelastic layer to the facing plates are considered. The clamped boundaries of the finite panels are dealt with by modal superposition theory and the weighted residual (Galerkin) method, leading to a matrix equation for the sound transmission loss (STL) through the structure. The theoretical model is validated against existing theories of infinite sandwich panels with and without an external flow. The numerical results for a single incident wave show that the external mean flow has significant effects on the STL, which are coupled with the clamped boundary effect that dominates in the low-frequency range. The external mean flow also considerably influences the limiting incidence angle of the panel system and the effect of the incidence angle on the STL. However, the influences of the azimuthal angle and the external flow orientation are negligible.
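
    For reference, the sound transmission loss reported in studies of this kind is conventionally defined from the ratio of incident to transmitted sound power (a standard definition, not specific to this paper):

      \mathrm{STL} = 10\log_{10}\frac{\Pi_{\mathrm{inc}}}{\Pi_{\mathrm{trans}}} = 10\log_{10}\frac{1}{\tau},

    where \tau is the power transmission coefficient of the panel.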

  3. Investigation of the effects of a moving acoustic medium on jet noise measurements

    NASA Technical Reports Server (NTRS)

    Cole, J. E., III; Palmer, D. W.

    1976-01-01

    Noise from an unheated sonic jet in the presence of an external flow is measured in a free-jet wind tunnel using microphones located both inside and outside the flow. Comparison of the data is made with results of similar studies. The results are also compared with theoretical predictions of the source strength for jet noise in the presence of flow and of the effects of sound propagation through a shear layer.

  4. 21 CFR 870.2860 - Heart sound transducer.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...

  5. 21 CFR 870.2860 - Heart sound transducer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...

  6. 21 CFR 870.2860 - Heart sound transducer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...

  7. 21 CFR 870.2860 - Heart sound transducer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...

  8. 21 CFR 870.2860 - Heart sound transducer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Heart sound transducer. 870.2860 Section 870.2860...) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices § 870.2860 Heart sound transducer. (a) Identification. A heart sound transducer is an external transducer that exhibits a change in...

  9. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments listeners were either rotated or stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change; yet in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with the general hypothesis that the world-centric location of sound sources requires the auditory system to have information about the auditory cues used for sound source location as well as cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and perhaps vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based on acoustics alone; it is a multisystem process.

  10. Noise Attenuation Performance Assessment of the Joint Helmet Mounted Cueing System (JHMCS)

    DTIC Science & Technology

    2010-08-01

    Flash Drive (CFD) memory (Figure 9) and Sound Professionals SP-TFB-2 Miniature Binaural Microphones with the Sound Professionals SP-SPSB-1 Slim-line...flight noise. Sound Professionals binaural microphones were placed to record both internal and external sounds. One microphone was attached to the

  11. Mathematically trivial control of sound using a parametric beam focusing source.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2011-01-01

    By exploiting a case usually regarded as trivial, this paper presents global active noise control using a parametric beam focusing source (PBFS). In a dipole-like arrangement, where one monopole serves as the primary sound source and the other as the control source, the control effect for minimizing the total acoustic power depends on the distance between the two. When the distance becomes zero, the total acoustic power becomes null, hence the trivial case. Because of practical constraints, it is difficult to place a control source close enough to a primary source. However, by projecting the sound beam of a parametric array loudspeaker onto the target (primary) sound source, a virtual sound source may be created on the target, thereby enabling collocation of the sources. To further ensure the feasibility of the trivial case, a PBFS is introduced so that the size of the virtual source matches that of the primary source. The reflected sound wave of the PBFS, which is tantamount to the output of the virtual sound source, is used to suppress the primary sound. Finally, a numerical analysis as well as an experiment is conducted, verifying the validity of the proposed methodology.
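
    The "trivial case" argument can be made concrete with the standard two-monopole result from active noise control theory (a textbook relation, not taken from this paper): when a secondary monopole of optimally chosen strength is placed a distance d from a primary monopole radiating power W_p, the minimum total radiated power is

      W_{\min} = W_p\left[1 - \left(\frac{\sin kd}{kd}\right)^{2}\right],

    so W_min tends to zero as kd tends to zero, which is exactly the collocated (trivial) limit that the PBFS tries to approximate.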

  12. Self-generated sounds of locomotion and ventilation and the evolution of human rhythmic abilities.

    PubMed

    Larsson, Matz

    2014-01-01

    It has been suggested that the basic building blocks of music mimic sounds of moving humans, and because the brain was primed to exploit such sounds, they eventually became incorporated in human culture. However, that raises further questions. Why do genetically close, culturally well-developed apes lack musical abilities? Did our switch to bipedalism influence the origins of music? Four hypotheses are raised: (1) Human locomotion and ventilation can mask critical sounds in the environment. (2) Synchronization of locomotion reduces that problem. (3) Predictable sounds of locomotion may stimulate the evolution of synchronized behavior. (4) Bipedal gait and the associated sounds of locomotion influenced the evolution of human rhythmic abilities. Theoretical models and research data suggest that noise of locomotion and ventilation may mask critical auditory information. People often synchronize steps subconsciously. Human locomotion is likely to produce more predictable sounds than those of non-human primates. Predictable locomotion sounds may have improved our capacity of entrainment to external rhythms and to feel the beat in music. A sense of rhythm could aid the brain in distinguishing among sounds arising from discrete sources and also help individuals to synchronize their movements with one another. Synchronization of group movement may improve perception by providing periods of relative silence and by facilitating auditory processing. The adaptive value of such skills to early ancestors may have been keener detection of prey or stalkers and enhanced communication. Bipedal walking may have influenced the development of entrainment in humans and thereby the evolution of rhythmic abilities.

  13. Understanding the intentional acoustic behavior of humpback whales: a production-based approach.

    PubMed

    Cazau, Dorian; Adam, Olivier; Laitman, Jeffrey T; Reidenberg, Joy S

    2013-09-01

    Following a production-based approach, this paper deals with the acoustic behavior of humpback whales. This approach investigates various physical factors, either internal (e.g., physiological mechanisms) or external (e.g., environmental constraints) to the whale's respiratory tract, for their implications in sound production. The paper aims to describe a functional scenario of this tract for the generation of vocal sounds. To do so, a division of the tract into three different configurations is proposed, based on the air recirculation process that determines air sources and laryngeal valves. Then, assuming a vocal function (in sound generation or modification) for several specific anatomical components, an acoustic characterization of each of these configurations is proposed to link different spectral features, namely fundamental frequencies and formant structures, to specific vocal production mechanisms. Finally, the question of whether the whale is able to fully exploit the acoustic potential of its respiratory tract is discussed.

  14. Sound waves and flexural mode dynamics in two-dimensional crystals

    NASA Astrophysics Data System (ADS)

    Michel, K. H.; Scuracchio, P.; Peeters, F. M.

    2017-09-01

    Starting from a Hamiltonian with anharmonic coupling between in-plane acoustic displacements and out-of-plane (flexural) modes, we derived coupled equations of motion for in-plane displacements correlations and flexural mode density fluctuations. Linear response theory and time-dependent thermal Green's functions techniques are applied in order to obtain different response functions. As external perturbations we allow for stresses and thermal heat sources. The displacement correlations are described by a Dyson equation where the flexural density distribution enters as an additional perturbation. The flexural density distribution satisfies a kinetic equation where the in-plane lattice displacements act as a perturbation. In the hydrodynamic limit this system of coupled equations is at the basis of a unified description of elastic and thermal phenomena, such as isothermal versus adiabatic sound motion and thermal conductivity versus second sound. The general theory is formulated in view of application to graphene, two-dimensional h-BN, and 2H-transition metal dichalcogenides and oxides.

  15. A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene

    PubMed Central

    Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling

    2016-01-01

    A flexible sound source is essential in a fully flexible system, yet it is hard to integrate a conventional sound source based on a piezoelectric element into such a system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser-induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound over a full 360 degrees. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source can be widely applied in consumer electronics, multimedia systems, and ultrasonic detection and imaging. PMID:28335239

  16. Direct comparison of the impact of head tracking, reverberation, and individualized head-related transfer functions on the spatial perception of a virtual speech source

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Wenzel, E. M.; Anderson, M. R.

    2001-01-01

    A study of sound localization performance was conducted using headphone-delivered virtual speech stimuli, rendered via HRTF-based acoustic auralization software and hardware, and blocked-meatus HRTF measurements. The independent variables were chosen to evaluate commonly held assumptions in the literature regarding improved localization: inclusion of head tracking, individualized HRTFs, and early and diffuse reflections. Significant effects were found for azimuth and elevation error, reversal rates, and externalization.

  17. ERP correlates of processing the auditory consequences of own versus observed actions.

    PubMed

    Ghio, Marta; Scharmach, Katrin; Bellebaum, Christian

    2018-06-01

    Research has so far focused on neural mechanisms that allow us to predict the sensory consequences of our own actions, thus also contributing to ascribing them to ourselves as agents. Less attention has been devoted to processing the sensory consequences of observed actions ascribed to another human agent. Focusing on audition, there is consistent evidence of a reduction of the auditory N1 ERP for self- versus externally generated sounds, while ERP correlates of processing sensory consequences of observed actions are mainly unexplored. In a between-groups ERP study, we compared sounds generated by self-performed (self group) or observed (observation group) button presses with externally generated sounds, which were presented either intermixed with action-generated sounds or in a separate condition. Results revealed an overall reduction of the N1 amplitude for processing action- versus externally generated sounds in both the intermixed and the separate condition, with no difference between the groups. Further analyses, however, suggested that an N1 attenuation effect relative to the intermixed condition at frontal electrode sites might exist only for the self but not for the observation group. For both groups, we found a reduction of the P2 amplitude for processing action- versus all externally generated sounds. We discuss whether the N1 and the P2 reduction can be interpreted in terms of predictive mechanisms for both action execution and observation, and to what extent these components might reflect also the feeling of (self) agency and the judgment of agency (i.e., ascribing agency either to the self or to others). © 2017 Society for Psychophysiological Research.

  18. Characterizing, synthesizing, and/or canceling out acoustic signals from sound sources

    DOEpatents

    Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA

    2007-03-13

    A system for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate and animate sound sources. Electromagnetic sensors monitor excitation sources in sound producing systems, such as animate sound sources such as the human voice, or from machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The systems disclosed enable accurate calculation of transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  19. Source-to-sensation level ratio of transmitted biosonar pulses in an echolocating false killer whale.

    PubMed

    Supin, Alexander Ya; Nachtigall, Paul E; Breese, Marlee

    2006-07-01

    Transmitted biosonar pulses, and the brain auditory evoked potentials (AEPs) associated with those pulses, were synchronously recorded in a false killer whale Pseudorca crassidens trained to accept suction-cup EEG electrodes and to detect targets by echolocation. AEP amplitude was investigated as a function of the transmitted biosonar pulse source level. For that, a few thousand of the individual AEP records were sorted according to the spontaneously varied amplitude of synchronously recorded biosonar pulses. In each of the sorting bins (in 5-dB steps) AEP records were averaged to extract AEP from noise; AEP amplitude was plotted as a function of the biosonar pulse source level. For comparison, AEPs were recorded to external (in free field) sound pulses of a waveform and spectrum similar to those of the biosonar pulses; amplitude of these AEPs was plotted as a function of sound pressure level. A comparison of these two functions has shown that, depending on the presence or absence of a target, the sensitivity of the whale's hearing to its own transmitted biosonar pulses was 30 to 45 dB lower than might be expected in a free acoustic field.

  20. The Relative Contribution of Interaural Time and Magnitude Cues to Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    This paper presents preliminary data from a study examining the relative contribution of interaural time differences (ITDs) and interaural level differences (ILDs) to the localization of virtual sound sources both with and without head motion. The listeners' task was to estimate the apparent direction and distance of virtual sources (broadband noise) presented over headphones. Stimuli were synthesized from minimum phase representations of nonindividualized directional transfer functions; binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; the position of the listener's head was tracked and the stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. ILDs and ITDs were either correctly or incorrectly correlated with head motion: (1) both ILDs and ITDs correctly correlated, (2) ILDs correct, ITD fixed at 0 deg azimuth and 0 deg elevation, (3) ITDs correct, ILDs fixed at 0 deg, 0 deg. Similar conditions were run for static conditions except that none of the cues changed with head motion. The data indicated that, compared to static conditions, head movements helped listeners to resolve confusions primarily when ILDs were correctly correlated, although a smaller effect was also seen for correct ITDs. Together with the results for static conditions, the data suggest that localization tends to be dominated by the cue that is most reliable or consistent, when reliability is defined by consistency over time as well as across frequency bands.
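
    The minimum-phase-plus-pure-delay representation used for these stimuli can be sketched as follows (a generic Python illustration, not the original Convolvotron code; the HRTF magnitude arrays and the ITD value are hypothetical inputs):

      import numpy as np

      def minimum_phase_ir(magnitude):
          """Minimum-phase impulse response from an even-length, Hermitian-symmetric
          magnitude spectrum (e.g., np.abs(np.fft.fft(h)))."""
          n = len(magnitude)
          log_mag = np.log(np.maximum(magnitude, 1e-12))
          cep = np.real(np.fft.ifft(log_mag))            # real cepstrum of the log magnitude
          fold = np.zeros(n)
          fold[0] = cep[0]
          fold[1:n // 2] = 2.0 * cep[1:n // 2]           # fold anticausal part onto causal part
          fold[n // 2] = cep[n // 2]
          return np.real(np.fft.ifft(np.exp(np.fft.fft(fold))))

      def render_binaural(x, mag_left, mag_right, itd_samples):
          """Filter x with minimum-phase HRIRs and apply the ITD as a pure delay."""
          yl = np.convolve(x, minimum_phase_ir(mag_left))
          yr = np.convolve(x, minimum_phase_ir(mag_right))
          # Positive itd_samples delays the right ear (source toward the left).
          if itd_samples >= 0:
              yr = np.concatenate([np.zeros(itd_samples), yr])
              yl = np.concatenate([yl, np.zeros(itd_samples)])
          else:
              yl = np.concatenate([np.zeros(-itd_samples), yl])
              yr = np.concatenate([yr, np.zeros(-itd_samples)])
          return yl, yr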

  1. The Schultz MIDI Benchmarking Toolbox for MIDI interfaces, percussion pads, and sound cards.

    PubMed

    Schultz, Benjamin G

    2018-04-17

    The Musical Instrument Digital Interface (MIDI) was readily adopted for auditory sensorimotor synchronization experiments. These experiments typically use MIDI percussion pads to collect responses, a MIDI-USB converter (or MIDI-PCI interface) to record responses on a PC and manipulate feedback, and an external MIDI sound module to generate auditory feedback. Previous studies have suggested that auditory feedback latencies can be introduced by these devices. The Schultz MIDI Benchmarking Toolbox (SMIDIBT) is an open-source, Arduino-based package designed to measure the point-to-point latencies incurred by several devices used in the generation of response-triggered auditory feedback. Experiment 1 showed that MIDI messages are sent and received within 1 ms (on average) in the absence of any external MIDI device. Latencies decreased when the baud rate increased above the MIDI protocol default (31,250 bps). Experiment 2 benchmarked the latencies introduced by different MIDI-USB and MIDI-PCI interfaces. MIDI-PCI was superior to MIDI-USB, primarily because MIDI-USB is subject to USB polling. Experiment 3 tested three MIDI percussion pads. Both the audio and MIDI message latencies were significantly greater than 1 ms for all devices, and there were significant differences between percussion pads and instrument patches. Experiment 4 benchmarked four MIDI sound modules. Audio latencies were significantly greater than 1 ms, and there were significant differences between sound modules and instrument patches. These experiments suggest that millisecond accuracy might not be achievable with MIDI devices. The SMIDIBT can be used to benchmark a range of MIDI devices, thus allowing researchers to make informed decisions when choosing testing materials and to arrive at an acceptable latency at their discretion.
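
    The point-to-point latency measurement idea can also be illustrated in software. The sketch below uses the third-party mido library and time.perf_counter() to time a MIDI round trip through a loopback connection; it only illustrates the measurement concept, is not the Arduino-based SMIDIBT, and the port names are hypothetical placeholders:

      import time
      import mido  # third-party MIDI library; assumes a loopback from OUT_PORT to IN_PORT

      OUT_PORT = "Loopback Out"   # hypothetical names; see mido.get_output_names()
      IN_PORT = "Loopback In"

      def roundtrip_latencies(n_trials=100):
          latencies_ms = []
          with mido.open_output(OUT_PORT) as out, mido.open_input(IN_PORT) as inp:
              for _ in range(n_trials):
                  msg = mido.Message("note_on", note=60, velocity=100)
                  t0 = time.perf_counter()
                  out.send(msg)
                  inp.receive()                     # blocks until the message comes back
                  latencies_ms.append((time.perf_counter() - t0) * 1000.0)
                  time.sleep(0.05)                  # space the trials out slightly
          return latencies_ms

      if __name__ == "__main__":
          lats = roundtrip_latencies()
          print(f"mean {sum(lats) / len(lats):.2f} ms, max {max(lats):.2f} ms")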

  2. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., by combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.

  3. Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization

    PubMed Central

    2018-01-01

    Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it. PMID:29440556

  4. Effect of additional warning sounds on pedestrians' detection of electric vehicles: An ecological approach.

    PubMed

    Fleury, Sylvain; Jamet, Éric; Roussarie, Vincent; Bosc, Laure; Chamard, Jean-Christophe

    2016-12-01

    Virtually silent electric vehicles (EVs) may pose a risk to pedestrians. This paper describes two studies that were conducted to assess the influence of different types of external sounds on EV detectability. In the first study, blindfolded participants had to detect an approaching EV with either no warning sound at all or one of three types of sound tested. In the second study, designed to replicate the results of the first in an ecological setting, the EV was driven along a road and the experimenters counted the number of people who turned their heads in its direction. Results of the first study showed that adding external sounds improves EV detection, and that modulating the frequency and increasing the pitch of these sounds makes them more effective. This improvement was confirmed in the ecological context. Consequently, pitch variation and frequency modulation should both be taken into account in future acoustic vehicle alerting system (AVAS) design. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In this method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved for iteratively using the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability of the iterative solving process. Since the solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
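
    The core inverse step of an equivalent source method can be sketched generically as follows. The propagation matrix G is a placeholder (in the paper it would be built from the convective time-domain Green's function), and the simple Landweber-type iteration with averaging of the late iterates only loosely mirrors the iterative solution with time averaging described above; this is a conceptual sketch, not the authors' algorithm:

      import numpy as np

      def solve_equivalent_sources(G, p, n_iter=200, step=None, average_last=50):
          """Iteratively estimate equivalent source strengths q from p ≈ G q.

          G : (n_mics, n_sources) discretized propagation (Green's function) matrix
          p : (n_mics,) measured pressures
          The last average_last iterates are averaged to damp the instability of
          the iteration, in the spirit of the time averaging described above.
          """
          G = np.asarray(G, dtype=float)
          p = np.asarray(p, dtype=float)
          if step is None:
              step = 1.0 / (np.linalg.norm(G, 2) ** 2)   # keeps the iteration stable
          q = np.zeros(G.shape[1])
          late_iterates = []
          for k in range(n_iter):
              q = q + step * G.T @ (p - G @ q)           # gradient step on ||p - G q||^2
              if k >= n_iter - average_last:
                  late_iterates.append(q.copy())
          return np.mean(late_iterates, axis=0)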

  6. A study of the variable impedance surface concept as a means for reducing noise from jet interaction with deployed lift-augmenting flaps

    NASA Technical Reports Server (NTRS)

    Hayden, R. E.; Kadman, Y.; Chanaud, R. C.

    1972-01-01

    The feasibility of quieting the externally blown flap (EBF) noise sources that are due to interaction of the jet exhaust flow with deployed flaps was demonstrated on a 1/15-scale three-flap EBF model. Sound field characteristics were measured and noise reduction fundamentals were reviewed in terms of source models. Tests of the 1/15-scale model showed broadband noise reductions of up to 20 dB resulting from a combination of variable-impedance flap treatment and mesh grids placed in the jet flow upstream of the flaps. Steady-state lift, drag, and pitching moment were measured with and without the noise reduction treatment.

  7. Nonreciprocal Linear Transmission of Sound in a Viscous Environment with Broken P Symmetry.

    PubMed

    Walker, E; Neogi, A; Bozhko, A; Zubov, Yu; Arriaga, J; Heo, H; Ju, J; Krokhin, A A

    2018-05-18

    Reciprocity is a fundamental property of the wave equation in a linear medium that originates from time-reversal symmetry, or T symmetry. For electromagnetic waves, reciprocity can be violated by an external magnetic field. It is much harder to realize nonreciprocity for acoustic waves. Here we report the first experimental observation of linear nonreciprocal transmission of ultrasound through a water-submerged phononic crystal consisting of asymmetric rods. Viscosity of water is the factor that breaks the T symmetry. Asymmetry, or broken P symmetry along the direction of sound propagation, is the second necessary factor for nonreciprocity. Experimental results are in agreement with numerical simulations based on the Navier-Stokes equation. Our study demonstrates that a medium with broken PT symmetry is acoustically nonreciprocal. The proposed passive nonreciprocal device is cheap, robust, and does not require an energy source.

  8. Nonreciprocal Linear Transmission of Sound in a Viscous Environment with Broken P Symmetry

    NASA Astrophysics Data System (ADS)

    Walker, E.; Neogi, A.; Bozhko, A.; Zubov, Yu.; Arriaga, J.; Heo, H.; Ju, J.; Krokhin, A. A.

    2018-05-01

    Reciprocity is a fundamental property of the wave equation in a linear medium that originates from time-reversal symmetry, or T symmetry. For electromagnetic waves, reciprocity can be violated by an external magnetic field. It is much harder to realize nonreciprocity for acoustic waves. Here we report the first experimental observation of linear nonreciprocal transmission of ultrasound through a water-submerged phononic crystal consisting of asymmetric rods. Viscosity of water is the factor that breaks the T symmetry. Asymmetry, or broken P symmetry along the direction of sound propagation, is the second necessary factor for nonreciprocity. Experimental results are in agreement with numerical simulations based on the Navier-Stokes equation. Our study demonstrates that a medium with broken PT symmetry is acoustically nonreciprocal. The proposed passive nonreciprocal device is cheap, robust, and does not require an energy source.

  9. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks

    PubMed Central

    Vanneste, Sven; De Ridder, Dirk

    2012-01-01

    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components, such as the perceived loudness, the lateralization, and the tinnitus type (pure tone, noise-like), and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrates the involvement of auditory brain areas as well as several non-auditory brain areas, such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsolateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex, and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept, increases the explanatory power of their involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel, dynamically changing, and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375

  10. Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology

    NASA Astrophysics Data System (ADS)

    Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya

    A sound field reproduction method is proposed that uses blind source separation and head-related transfer functions. In the proposed system, multichannel acoustic signals captured by distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After the locations and signals of the virtual sources are estimated, the spatial sound at the selected listening point is constructed by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field produced by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as the virtual source representation is confirmed.
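
    The reconstruction step, rendering each estimated virtual source at the selected listening point through a pair of head-related impulse responses, can be sketched in a few lines (a simplified illustration with hypothetical HRIR arrays, not the authors' implementation):

      import numpy as np
      from scipy.signal import fftconvolve

      def render_listening_point(virtual_sources, hrirs):
          """Mix separated virtual sources into a binaural signal.

          virtual_sources : list of 1-D source signals from the separation stage
          hrirs           : list of (hrir_left, hrir_right) pairs, one per source,
                            chosen for each source direction at the listening point
          """
          length = max(len(s) + max(len(h[0]), len(h[1])) - 1
                       for s, h in zip(virtual_sources, hrirs))
          left = np.zeros(length)
          right = np.zeros(length)
          for s, (hl, hr) in zip(virtual_sources, hrirs):
              yl = fftconvolve(s, hl)
              yr = fftconvolve(s, hr)
              left[:len(yl)] += yl                  # superpose the rendered sources
              right[:len(yr)] += yr
          return left, right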

  11. Flyover noise characteristics of a tilt-wing V/STOL aircraft (XC-142A)

    NASA Technical Reports Server (NTRS)

    Pegg, R. J.; Henderson, H. R.; Hilton, D. A.

    1974-01-01

    A field noise measurement investigation was conducted during the flight testing of an XC-142A tilt-wing V/STOL aircraft to define its external noise characteristics. Measured time histories of overall sound pressure level show that noise levels are higher at lower airspeeds and decrease with increased speed up to approximately 160 knots. The primary noise sources were the four high-speed, main propellers. Flyover-noise time histories calculated by existing techniques for propeller noise prediction are in reasonable agreement with the experimental data.

  12. A method for evaluating the relation between sound source segregation and masking

    PubMed Central

    Lutfi, Robert A.; Liu, Ching-Ju

    2011-01-01

    Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979

  13. Reduction of external noise of mobile energy facilities by using active noise control system in muffler

    NASA Astrophysics Data System (ADS)

    Polivaev, O. I.; Kuznetsov, A. N.; Larionov, A. N.; Beliansky, R. G.

    2018-03-01

    The paper describes a method for reducing the emission of low-frequency noise from modern automotive vehicles into the environment. The importance of reducing the external noise of modern mobile energy facilities made in Russia is substantiated. Standard methods for controlling external noise are of low efficiency against low-frequency sound waves, yet it is in the low-frequency part of the audible range that most of the power of the noise emitted by such machinery lies. The most effective way to attenuate such sound waves is to use active noise control systems. A muffler design using such a system is presented. This muffler reduced the noise emitted into the environment by 7-11 dB and increased acoustic comfort at the operator's workplace by 3-5 dB.
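
    Active noise control systems of this kind are commonly built around the filtered-x LMS (FxLMS) adaptive algorithm. The paper does not specify its controller, so the single-channel FxLMS loop below is only a generic sketch of the approach; the reference signal, disturbance, and secondary-path estimate are hypothetical inputs:

      import numpy as np

      def fxlms(x, d, s_hat, n_taps=64, mu=1e-3):
          """Single-channel filtered-x LMS.

          x     : reference signal (e.g., from a sensor near the noise source)
          d     : disturbance at the error microphone without control (same length as x)
          s_hat : estimated secondary-path (loudspeaker-to-error-mic) impulse response
          Returns the residual error signal at the error microphone.
          """
          assert len(s_hat) <= n_taps
          w = np.zeros(n_taps)                       # adaptive control filter
          x_buf = np.zeros(n_taps)                   # recent reference samples
          xf_buf = np.zeros(n_taps)                  # recent filtered-reference samples
          y_buf = np.zeros(len(s_hat))               # recent control outputs
          e = np.zeros(len(x))
          for n in range(len(x)):
              x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
              xf = s_hat @ x_buf[:len(s_hat)]        # reference filtered by secondary path
              xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf
              y = w @ x_buf                          # anti-noise sent to the loudspeaker
              y_buf = np.roll(y_buf, 1); y_buf[0] = y
              e[n] = d[n] + s_hat @ y_buf            # residual at the error microphone
              w = w - mu * e[n] * xf_buf             # FxLMS weight update
          return e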

  14. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  15. System and method for characterizing synthesizing and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.

    2003-01-01

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  16. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C

    2013-05-21

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  17. System and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources

    DOEpatents

    Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.

    2007-10-16

    A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.

  18. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
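
    Estimating sound field directionality with such a microphone array is typically done by steering the array over candidate directions and picking the direction of maximum power. The sketch below implements a basic frequency-domain delay-and-sum scan for a planar array; it is a generic illustration, not the SoundCompass FPGA firmware, and the array geometry is a hypothetical input:

      import numpy as np

      def steered_response_power(frames, mic_xy, fs, c=343.0, n_az=360):
          """Delay-and-sum power over candidate azimuths (planar array, far field).

          frames : (n_mics, n_samples) simultaneous microphone snapshot
          mic_xy : (n_mics, 2) microphone positions in metres
          Returns (azimuths_deg, power); the loudest direction is the argmax of power.
          """
          n_mics, n_samples = frames.shape
          spectra = np.fft.rfft(frames, axis=1)                  # (n_mics, n_bins)
          freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
          azimuths_deg = np.linspace(0.0, 360.0, n_az, endpoint=False)
          power = np.zeros(n_az)
          for i, az in enumerate(np.radians(azimuths_deg)):
              u = np.array([np.cos(az), np.sin(az)])             # unit vector toward the source
              delays = mic_xy @ u / c                            # arrival-time advance per mic
              align = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
              beam = (spectra * align).sum(axis=0) / n_mics      # align and sum
              power[i] = np.sum(np.abs(beam) ** 2)
          return azimuths_deg, power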

  19. Tinnitus retraining therapy: a different view on tinnitus.

    PubMed

    Jastreboff, Pawel J; Jastreboff, Margaret M

    2006-01-01

    Tinnitus retraining therapy (TRT) is a method for treating tinnitus and decreased sound tolerance, based on the neurophysiological model of tinnitus. This model postulates involvement of the limbic and autonomic nervous systems in all cases of clinically significant tinnitus and points out the importance of both conscious and subconscious connections, which are governed by principles of conditioned reflexes. The treatments for tinnitus and misophonia are based on the concept of extinction of these reflexes, labeled as habituation. TRT aims at inducing changes in the mechanisms responsible for transferring signal (i.e., tinnitus, or external sound in the case of misophonia) from the auditory system to the limbic and autonomic nervous systems, and through this, remove signal-induced reactions without attempting to directly attenuate the tinnitus source or tinnitus/misophonia-evoked reactions. As such, TRT is effective for any type of tinnitus regardless of its etiology. TRT consists of: (1) counseling based on the neurophysiological model of tinnitus, and (2) sound therapy (with or without instrumentation). The main role of counseling is to reclassify tinnitus into the category of neutral stimuli. The role of sound therapy is to decrease the strength of the tinnitus signal. It is crucial to assess and treat tinnitus, decreased sound tolerance, and hearing loss simultaneously. Results from various groups have shown that TRT can be an effective method of treatment. Copyright (c) 2006 S. Karger AG, Basel.

  20. The effect of spatial distribution on the annoyance caused by simultaneous sounds

    NASA Astrophysics Data System (ADS)

    Vos, Joos; Bronkhorst, Adelbert W.; Fedtke, Thomas

    2004-05-01

    A considerable part of the population is exposed to simultaneous and/or successive environmental sounds from different sources. In many cases, these sources also differ with respect to their locations. In a laboratory study, it was investigated whether the annoyance caused by multiple sounds is affected by the spatial distribution of the sources. There were four independent variables: (1) sound category (stationary or moving), (2) sound type (stationary: lawn-mower, leaf-blower, and chain saw; moving: road traffic, railway, and motorbike), (3) spatial location (left, right, and combinations), and (4) A-weighted sound exposure level (ASEL of single sources equal to 50, 60, or 70 dB). In addition to the individual sounds in isolation, various combinations of two or three different sources within each sound category and sound level were presented for rating. The annoyance was mainly determined by sound level and sound source type. In most cases there were neither significant main effects of spatial distribution nor significant interaction effects between spatial distribution and the other variables. It was concluded that for rating the spatially distributed sounds investigated, the noise dose can simply be determined by a summation of the levels for the left and right channels. [Work supported by CEU.]
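
    The summation the authors refer to is the usual energetic addition of levels (a standard relation, stated here for clarity): two channels at levels L_1 and L_2 combine as

      L_{\mathrm{tot}} = 10\log_{10}\left(10^{L_{1}/10} + 10^{L_{2}/10}\right),

    so, for example, two channels at 70 dB each combine to approximately 73 dB.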

  1. Sound Transmission through a Cylindrical Sandwich Shell with Honeycomb Core

    NASA Technical Reports Server (NTRS)

    Tang, Yvette Y.; Robinson, Jay H.; Silcox, Richard J.

    1996-01-01

    Sound transmission through an infinite cylindrical sandwich shell is studied in the context of the transmission of airborne sound into aircraft interiors. The cylindrical shell is immersed in fluid media and excited by an oblique incident plane sound wave. The internal and external fluids are different and there is uniform airflow in the external fluid medium. An explicit expression of transmission loss is derived in terms of modal impedance of the fluids and the shell. The results show the effects of (a) the incident angles of the plane wave; (b) the flight conditions of Mach number and altitude of the aircraft; (c) the ratios between the core thickness and the total thickness of the shell; and (d) the structural loss factors on the transmission loss. Comparisons of the transmission loss are made among different shell constructions and different shell theories.

  2. Spherical loudspeaker array for local active control of sound.

    PubMed

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around a listener's head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources capable of generating sound fields of high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.

  3. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597

  4. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.

  5. The effects of alterations in the osseous external auditory canal on perceived sound quality.

    PubMed

    van Spronsen, Erik; Brienesse, Patrick; Ebbens, Fenna A; Waterval, Jerome J; Dreschler, Wouter A

    2015-10-01

    To evaluate the perceptual effect of the altered shape of the osseous external auditory canal (OEAC) on sound quality. Prospective study. Twenty subjects with normal hearing were presented with six simulated sound conditions representing the acoustic properties of six different ear canals (three normal ears and three cavities). The six different real-ear unaided responses of these ear canals were used to filter Dutch sentences, resulting in six simulated sound conditions. A seventh, unfiltered reference condition was used for comparison. Sound quality was evaluated using paired-comparison ratings and a visual analog scale (VAS). Significant differences in sound quality were found between the normal and cavity conditions (all P < .001) using both the seven-point paired-comparison rating and the VAS. No significant differences were found between the reference and normal conditions. Sound quality deteriorates when the OEAC is altered into a cavity. This proof-of-concept study shows that the altered acoustic quality of the OEAC after radical cavity surgery may lead to a clearly perceived deterioration in sound quality. Nevertheless, some questions remain about the extent to which these changes are affected by habituation and by other changes in middle ear anatomy and functionality. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  6. The Effects of Ambient Conditions on Helicopter Rotor Source Noise Modeling

    NASA Technical Reports Server (NTRS)

    Schmitz, Frederic H.; Greenwood, Eric

    2011-01-01

    A new physics-based method called Fundamental Rotorcraft Acoustic Modeling from Experiments (FRAME) is used to demonstrate the change in rotor harmonic noise of a helicopter operating at different ambient conditions. FRAME is based upon a non-dimensional representation of the governing acoustic and performance equations of a single rotor helicopter. Measured external noise is used together with parameter identification techniques to develop a model of helicopter external noise that is a hybrid between theory and experiment. The FRAME method is used to evaluate the main rotor harmonic noise of a Bell 206B3 helicopter operating at different altitudes. The variation with altitude of Blade-Vortex Interaction (BVI) noise, known to be a strong function of the helicopter's advance ratio, is dependent upon which definition of airspeed is flown by the pilot. If normal flight procedures are followed and indicated airspeed (IAS) is held constant, the true airspeed (TAS) of the helicopter increases with altitude. This causes an increase in advance ratio and a decrease in the speed of sound which results in large changes to BVI noise levels. Results also show that thickness noise on this helicopter becomes more intense at high altitudes where advancing tip Mach number increases because the speed of sound is decreasing and advance ratio increasing for the same indicated airspeed. These results suggest that existing measurement-based empirically derived helicopter rotor noise source models may give incorrect noise estimates when they are used at conditions where data were not measured and may need to be corrected for mission land-use planning purposes.
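
    As a hedged numerical illustration of the airspeed effect described above (values assumed, not taken from the FRAME study), the short script below converts a constant indicated airspeed to true airspeed in a standard atmosphere and shows how the advance ratio and advancing-tip Mach number grow with altitude.

```python
import math

# Illustrative only: assumed tip speed, assumed IAS ~ EAS, ISA troposphere model.
def isa(alt_m):
    """Return density ratio and speed of sound for the ISA troposphere."""
    T = 288.15 - 0.0065 * alt_m                 # temperature, K
    sigma = (T / 288.15) ** 4.256               # density ratio rho / rho0
    a = math.sqrt(1.4 * 287.05 * T)             # speed of sound, m/s
    return sigma, a

v_ias = 52.0    # indicated airspeed, m/s (~100 kt), assumed
v_tip = 213.0   # hover tip speed, m/s, assumed for a light helicopter

for alt in (0.0, 1500.0, 3000.0):
    sigma, a = isa(alt)
    v_tas = v_ias / math.sqrt(sigma)            # TAS rises as density drops
    mu = v_tas / v_tip                          # advance ratio
    m_adv = (v_tip + v_tas) / a                 # advancing-tip Mach number
    print(f"alt {alt:6.0f} m: TAS {v_tas:5.1f} m/s, mu {mu:.3f}, M_adv {m_adv:.3f}")
```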

  7. An integrated analysis-synthesis array system for spatial sound fields.

    PubMed

    Bai, Mingsian R; Hua, Yi-Hsin; Kuo, Chia-Hao; Hsieh, Yu-Hao

    2015-03-01

    An integrated recording and reproduction array system for spatial audio is presented within a generic framework akin to the analysis-synthesis filterbanks in discrete time signal processing. In the analysis stage, a microphone array "encodes" the sound field by using the plane-wave decomposition. Directions of arrival of the plane-wave components that comprise the sound field of interest are estimated by multiple signal classification (MUSIC). Next, the source signals are extracted by using a deconvolution procedure. In the synthesis stage, a loudspeaker array "decodes" the sound field by reconstructing the plane-wave components obtained in the analysis stage. This synthesis stage is carried out by pressure matching in the interior domain of the loudspeaker array. The deconvolution problem is solved by truncated singular value decomposition or convex optimization algorithms. For high-frequency reproduction that suffers from the spatial aliasing problem, vector panning is utilized. Listening tests are undertaken to evaluate the deconvolution method, vector panning, and a hybrid approach that combines both methods to cover frequency ranges below and above the spatial aliasing frequency. Localization and timbral attributes are considered in the subjective evaluation. The results show that the hybrid approach performs the best in overall preference. In addition, there is a trade-off between reproduction performance and the external radiation.
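
    As a companion to the analysis stage described above, here is a minimal narrowband MUSIC sketch for a uniform circular microphone array in the free field. The geometry, frequency and synthetic data are assumptions for illustration and are not the array or processing chain of the paper.

```python
import numpy as np

# Narrowband MUSIC on a synthetic 16-mic circular array (illustrative only).
c, f = 343.0, 1000.0
k = 2 * np.pi * f / c
M, r = 16, 0.10                                      # mics on a 10 cm radius circle
phi = 2 * np.pi * np.arange(M) / M
pos = r * np.c_[np.cos(phi), np.sin(phi)]            # (M, 2) sensor positions

def steer(theta):
    """Plane-wave steering vectors for azimuth(s) theta in radians."""
    d = np.c_[np.cos(theta), np.sin(theta)]
    return np.exp(1j * k * pos @ d.T)                # (M, len(theta))

rng = np.random.default_rng(1)
doas, snap = np.deg2rad([40.0, 110.0]), 200
S = rng.standard_normal((2, snap)) + 1j * rng.standard_normal((2, snap))
X = steer(doas) @ S
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / snap                            # spatial covariance
_, V = np.linalg.eigh(R)
En = V[:, :-2]                                       # noise subspace (2 sources assumed)

grid = np.deg2rad(np.arange(0.0, 360.0, 1.0))
A = steer(grid)
P = 1.0 / np.einsum("mg,mn,ng->g", A.conj(), En @ En.conj().T, A).real

peaks = [i for i in range(len(grid)) if P[i] >= P[i - 1] and P[i] >= P[(i + 1) % len(grid)]]
top = sorted(peaks, key=lambda i: P[i], reverse=True)[:2]
print("MUSIC estimates (deg):", sorted(np.round(np.rad2deg(grid[top]), 1)))
```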

  8. The effect of external mean flow on sound transmission through double-walled cylindrical shells lined with poroelastic material

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Bhaskar, Atul; Zhang, Xin

    2014-03-01

    Sound transmission through a system of double shells, lined with poroelastic material in the presence of external mean flow, is studied. The porous material is modeled as an equivalent fluid because shear wave contributions are known to be insignificant. This is achieved by accounting for the energetically most dominant wave types in the calculations. The transmission characteristics of the sandwich construction are presented for different incidence angles and Mach numbers over a wide frequency range. It is noted that the transmission loss exhibits three dips on the frequency axis as opposed to flat panels where there are only two such frequencies—results are discussed in the light of these observations. Flow is shown to decrease the transmission loss below the ring frequency, but increase it above the ring frequency due to the negative stiffness and the damping effect added by the flow. In the absence of external mean flow, porous material provides superior insulation for most of the frequency band of interest. However, in the presence of external flow, this is true only below the ring frequency—above this frequency, the presence of an air gap in sandwich constructions is the dominant factor that determines the acoustic performance. In the absence of external flow, an air gap always improves sound insulation.

  9. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given the multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning microphones orthogonal to the sound source and by capturing the possible sound sources by vision. However, such an active head movement inevitably creates motor noises. The system adaptively cancels motor noises using motor control signals and the cover acoustics. The experimental result demonstrates that active audition by integration of audition, vision, and motor control attains sound source tracking in a variety of conditions.

  10. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze the sound source in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, which is a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment exhibits the advantage of Amiet-IMACS in localizing a more accurate sound source position compared with implementing IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  11. On sound transmission through double-walled cylindrical shells lined with poroelastic material: Comparison with Zhou's results and further effect of external mean flow

    NASA Astrophysics Data System (ADS)

    Liu, Yu; He, Chuanbo

    2015-12-01

    In this discussion, corrections to the errors found in the derivations and the numerical code of a recent analytical study (Zhou et al., Journal of Sound and Vibration 333 (7) (2014) 1972-1990) on sound transmission through double-walled cylindrical shells lined with poroelastic material are presented and discussed, along with the further effect of the external mean flow on the transmission loss (TL). After applying the corrections, the locations of the characteristic frequencies of thin shells remain unchanged, as do the TL results above the ring frequency, where the BU and UU configurations remain the best in sound insulation performance. In the low-frequency region below the ring frequency, however, the corrections attenuate the TL amplitude significantly for BU and UU, and hence the BB configuration exhibits the best performance, which is consistent with previous observations for flat sandwich panels.

  12. Electromagnetic sounding of the Earth's crust in the region of superdeep boreholes of Yamal-Nenets autonomous district using the fields of natural and controlled sources

    NASA Astrophysics Data System (ADS)

    Zhamaletdinov, A. A.; Petrishchev, M. S.; Shevtsov, A. N.; Kolobov, V. V.; Selivanov, V. N.; Barannik, M. B.; Tereshchenko, E. D.; Grigoriev, V. F.; Sergushin, P. A.; Kopytenko, E. A.; Biryulya, M. A.; Skorokhodov, A. A.; Esipko, O. A.; Damaskin, R. V.

    2013-11-01

    Electromagnetic soundings with the fields of natural (magnetotelluric (MT) and audio magnetotelluric (AMT)) and high-power controlled sources have been carried out in the region of the SG-6 (Tyumen) and SG-7 (En-Yakhin) superdeep boreholes in the Yamal-Nenets autonomous district (YaNAD). In the controlled-source soundings, the electromagnetic field was generated by the VL Urengoi-Pangody 220-kV industrial power transmission line (PTL), which has a length of 114 km, and the ultralow-frequency (ULF) Zevs radiating antenna located at a distance of 2000 km from the signal recording sites. In the soundings with the Urengoi-Pangody PTL, the Energiya-2 generator capable of supplying up to 200 kW of power and the Energiya-3 portable generator with a power of 2 kW were used as the sources. These generators were designed and manufactured at the Kola Science Center of the Russian Academy of Sciences. The soundings with the Energiya-2 generator were conducted in the frequency range from 0.38 to 175 Hz. The external generator was connected to the PTL upon agreement with the Yamal-Nenets Enterprise of Main Electric Networks, a branch of OAO FSK ES of Western Siberia. The connection was carried out by the wire-ground scheme during the routine maintenance of the PTL in the nighttime. The highest-quality signals were recorded in the region of the SG-7 (En-Yakhin) superdeep borehole, where the industrial noise is lowest. The results of the inversion of the soundings with the PTL and the Zevs ULF transmitter completely agree with each other and with the data of electric logging. The MT-AMT data provide additional information about the deep structure of the region in the low-frequency range (below 1 Hz). It is established that the section of the SG-6 and SG-7 boreholes contains conductive layers in the depth intervals from 0.15 to 0.3 km and from 1 to 1.5 km. These layers are associated with variations in the lithological composition, porosity, and fluid saturation of the rocks. The top of the poorly conductive Permian-Triassic complex is identified at a depth of about 7 km. On the basis of the MT data in the lowest frequency band (hourly and longer periods), with the observations at the Novosibirsk observatory taken into account, the distribution of electric resistivity down to a depth of 800 km is reconstructed. This distribution can be used as additional information when calculating the temperature and rheology of the lithosphere and upper mantle in West Siberia. The results of our studies demonstrate the high potential of complex electromagnetic soundings with natural and controlled sources in the study of the deep structure of the lithosphere and in tracing deep oil-and-gas-bearing horizons in the sedimentary cover of the West Siberian Platform within the Yamal-Nenets autonomous district.

  13. Active room compensation for sound reinforcement using sound field separation techniques.

    PubMed

    Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena

    2018-03-01

    This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
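
    To make the control step above concrete, here is a minimal per-frequency sketch (an assumption-laden illustration, not the paper's implementation): given the separated reflected pressure at the control microphones and the transfer functions of the secondary sources, the driving signals are obtained by regularized least squares so that the reflected component is cancelled.

```python
import numpy as np

# Per-frequency cancellation sketch. G: secondary-source-to-microphone transfer
# functions; p_refl: separated reflected pressure at the microphones. Both are
# random placeholders here. q minimizes ||p_refl + G q||^2 + beta ||q||^2.
rng = np.random.default_rng(2)
n_mic, n_src = 12, 4
G = rng.standard_normal((n_mic, n_src)) + 1j * rng.standard_normal((n_mic, n_src))
p_refl = rng.standard_normal(n_mic) + 1j * rng.standard_normal(n_mic)

beta = 1e-2                                     # regularization limits source effort
q = -np.linalg.solve(G.conj().T @ G + beta * np.eye(n_src), G.conj().T @ p_refl)

residual = p_refl + G @ q
print(f"reflected energy before / after control: "
      f"{np.linalg.norm(p_refl)**2:.2f} / {np.linalg.norm(residual)**2:.2f}")
```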

  14. Active sources in the cutoff of centrifugal fans to reduce the blade tones at higher-order duct mode frequencies

    NASA Astrophysics Data System (ADS)

    Neise, W.; Koopmann, G. H.

    1991-01-01

    A previously developed (e.g., Neise and Koopmann, 1984; Koopmann et al., 1988) active noise control technique, in which the unwanted acoustic signals from centrifugal fans are suppressed by placing two externally driven sources near the cutoff of the casing, was applied to the frequency region where not only plane sound waves but also higher-order acoustic modes propagate in the fan ducts. Using a specially designed fan noise testing facility, the performance of two fans (280-mm and 508-mm impeller diameters) was monitored with static pressure taps mounted peripherally around the inlet nozzle. Experimental results show that the aerodynamically generated source pressure field around the cutoff is too complex to be successfully counterimaged by only two active sources introduced in this region. It is suggested that, for an efficient application of this noise control technique in the higher-order mode frequency regime, it is necessary to use an active source involving a larger number of individually driven loudspeakers.

  15. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be identified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
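
    The CSP coefficient used in the DOA stage is, for one microphone pair, the phase transform of the cross-power spectrum; a minimal sketch with synthetic data is shown below. The microphone spacing, sampling rate and simulated delay are assumptions for illustration only.

```python
import numpy as np

# GCC-PHAT / CSP sketch for one microphone pair: whiten the cross-power
# spectrum, inverse-transform it and read the peak lag as the TDOA, which
# maps to a direction of arrival relative to broadside.
fs, c, d_mic = 16000, 343.0, 0.2
rng = np.random.default_rng(3)
s = rng.standard_normal(fs)                          # 1 s of noise as a stand-in for speech
true_delay = 5                                       # samples by which mic 2 lags mic 1
x1 = s + 0.05 * rng.standard_normal(fs)
x2 = np.roll(s, true_delay) + 0.05 * rng.standard_normal(fs)

n = 2 * fs
X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
cross = X1 * np.conj(X2)
csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)   # phase transform

lag = int(np.argmax(np.abs(csp)))
if lag > n // 2:
    lag -= n                                          # wrap to a signed lag
tdoa = -lag / fs                                      # time by which x2 lags x1
theta = np.degrees(np.arcsin(np.clip(tdoa * c / d_mic, -1.0, 1.0)))
print(f"TDOA estimate: {tdoa*1e3:.3f} ms, DOA re broadside: {theta:.1f} deg")
```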

  16. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  17. Application of acoustic radiosity methods to noise propagation within buildings

    NASA Astrophysics Data System (ADS)

    Muehleisen, Ralph T.; Beamer, C. Walter

    2005-09-01

    The prediction of sound pressure levels in rooms from transmitted sound is a difficult problem. The sound energy in the source room incident on the common wall must be accurately predicted. In the receiving room, the propagation of sound from the planar wall source must also be accurately predicted. The radiosity method naturally computes the spatial distribution of sound energy incident on a wall and also naturally predicts the propagation of sound from a planar area source. In this paper, the application of the radiosity method to sound transmission problems is introduced and explained.
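
    As an illustration of the energy-exchange formulation behind the radiosity method, the sketch below solves a toy steady-state radiosity system B = E + (1 - alpha) F B by fixed-point iteration; the patch count, form factors and absorption values are made-up placeholders, not results from the paper.

```python
import numpy as np

# Toy steady-state acoustic radiosity: B[i] is the energy leaving patch i,
# E[i] the direct source contribution, alpha[i] the absorption coefficient,
# F[i, j] the form factor from patch i to patch j (rows sum to 1 here).
def radiosity(E, alpha, F, n_iter=200):
    B = E.copy()
    for _ in range(n_iter):                     # Jacobi-style fixed point
        B = E + (1.0 - alpha) * (F @ B)
    return B

rng = np.random.default_rng(4)
n = 6                                           # six wall patches of a toy room
F = rng.random((n, n))
np.fill_diagonal(F, 0.0)                        # flat patches do not see themselves
F /= F.sum(axis=1, keepdims=True)
alpha = np.full(n, 0.2)                         # 20 % absorption everywhere
E = np.zeros(n)
E[0] = 1.0                                      # the source irradiates patch 0

B = radiosity(E, alpha, F)
print("patch radiosities:", np.round(B, 3))
```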

  18. Inter-noise 89 - Engineering for environmental noise control; Proceedings of the International Conference on Noise Control Engineering, Newport Beach, CA, Dec. 4-6, 1989. Vols. 1 & 2

    NASA Astrophysics Data System (ADS)

    Maling, George C., Jr.

    Recent advances in noise analysis and control theory and technology are discussed in reviews and reports. Topics addressed include noise generation; sound-wave propagation; noise control by external treatments; vibration and shock generation, transmission, isolation, and reduction; multiple sources and paths of environmental noise; noise perception and the physiological and psychological effects of noise; instrumentation, signal processing, and analysis techniques; and noise standards and legal aspects. Diagrams, drawings, graphs, photographs, and tables of numerical data are provided.

  19. Ejectable underwater sound source recovery assembly

    NASA Technical Reports Server (NTRS)

    Irick, S. C. (Inventor)

    1974-01-01

    An underwater sound source is described that may be ejectably mounted on any mobile device that travels over water, to facilitate the location and recovery of the device when submerged. A length of flexible line maintains a connection between the mobile device and the sound source. During recovery, the submerged device is located by detecting the ejected sound source. The assembly may be particularly useful in the recovery of spent rocket motors that bury themselves in the ocean floor upon impact.

  20. Theoretical Tinnitus Framework: A Neurofunctional Model.

    PubMed

    Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C B; Sani, Siamak S; Ekhtiari, Hamed; Sanchez, Tanit G

    2016-01-01

    Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception as unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the "sourceless" sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be associated with aversive stimuli similar to abnormal neural activity in generating the phantom sound. Cognitive and emotional reactions depend on general personality biases toward evaluative conditioning combined with a cognitive-emotional negative appraisal of stimuli such as the case of people with present hypochondria. We acknowledge that the projected Neurofunctional Tinnitus Model does not cover all tinnitus variations and patients. To support our model, we present evidence from several studies using neuroimaging, electrophysiology, brain lesion, and behavioral techniques.

  1. Theoretical Tinnitus Framework: A Neurofunctional Model

    PubMed Central

    Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C. B.; Sani, Siamak S.; Ekhtiari, Hamed; Sanchez, Tanit G.

    2016-01-01

    Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception as unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the “sourceless” sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be associated with aversive stimuli similar to abnormal neural activity in generating the phantom sound. Cognitive and emotional reactions depend on general personality biases toward evaluative conditioning combined with a cognitive-emotional negative appraisal of stimuli such as the case of people with present hypochondria. We acknowledge that the projected Neurofunctional Tinnitus Model does not cover all tinnitus variations and patients. To support our model, we present evidence from several studies using neuroimaging, electrophysiology, brain lesion, and behavioral techniques. PMID:27594822

  2. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  3. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  4. Phase Synchronization and Desynchronization of Structural Response Induced by Turbulent and External Sound

    NASA Technical Reports Server (NTRS)

    Maestrello, Lucio

    2002-01-01

    Acoustic and turbulent boundary layer flow loadings over a flexible structure are used to study the spatial-temporal dynamics of the response of the structure. The stability of the spatial synchronization and desynchronization by an active external force is investigated with an array of coupled transducers on the structure. In the synchronous state, the structural phase is locked, which leads to the formation of spatial patterns while the amplitude peaks exhibit chaotic behaviors. Large amplitude, spatially symmetric loading is superimposed on broadband, but in the desynchronized state, the spectrum broadens and the phase space is lost. The resulting pattern bears a striking resemblance to phase turbulence. The transition is achieved by using a low power external actuator to trigger broadband behaviors from the knowledge of the external acoustic load inducing synchronization. The changes are made favorably and efficiently to alter the frequency distribution of power, not the total power level. Before synchronization effects are seen, the panel response to the turbulent boundary layer loading is discontinuously spatio-temporally correlated. The stability develops from different competing wavelengths; the spatial scale is significantly shorter than when forced with the superimposed external sound. When the external sound level decreases and the synchronized phases are lost, changes in the character of the spectra can be linked to the occurrence of spatial phase transition. These changes can develop broadband response. Synchronized responses of fuselage structure panels have been observed in subsonic and supersonic aircraft; results from two flights tests are discussed.

  5. Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).

    PubMed

    Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M

    2013-07-01

    The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide baseline five hydrophone array and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels compared to other group compositions.
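
    As a back-of-envelope companion to the abstract (illustrative only: the study localized sources with a hydrophone array and measured site-specific transmission loss, whereas the sketch below falls back on simple spherical spreading when no measured value is supplied):

```python
import math

# SL = RL + TL. If a measured transmission loss is not provided, assume
# spherical spreading, TL = 20 * log10(range); this is only a stand-in for
# the site-specific propagation measurements used in the study.
def source_level(received_level_db, range_m, measured_tl_db=None):
    tl = measured_tl_db if measured_tl_db is not None else 20.0 * math.log10(range_m)
    return received_level_db + tl

# e.g. a vocalization received at 110 dB re 1 uPa from a whale localized 300 m away
print(f"estimated source level: {source_level(110.0, 300.0):.1f} dB re 1 uPa @ 1 m")
```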

  6. Pitch matching accuracy of trained singers, untrained subjects with talented singing voices, and untrained subjects with nontalented singing voices in conditions of varying feedback.

    PubMed

    Watts, Christopher; Murphy, Jessica; Barnes-Burroughs, Kathryn

    2003-06-01

    At a physiological level, the act of singing involves control and coordination of several systems involved in the production of sound, including respiration, phonation, resonance, and afferent systems used to monitor production. The ability to produce a melodious singing voice (eg, in tune with accurate pitch) is dependent on control over these motor and sensory systems. To test this position, trained singers and untrained subjects with and without expressed singing talent were asked to match pitches of target pure tones. The ability to match pitch reflected the ability to accurately integrate sensory perception with motor planning and execution. Pitch-matching accuracy was measured at the onset of phonation (prephonatory set) before external feedback could be utilized to adjust the voiced source, during phonation when external auditory feedback could be utilized, and during phonation when external auditory feedback was masked. Results revealed trained singers and untrained subjects with singing talent were no different in their pitch-matching abilities when measured before or after external feedback could be utilized. The untrained subjects with singing talent were also significantly more accurate than the trained singers when external auditory feedback was masked. Both groups were significantly more accurate than the untrained subjects without singing talent.

  7. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delay. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds when the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The other experiments were conducted with the aid of an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
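
    For the Kalman-filter experiment mentioned above, a minimal single-source sketch is given below. It is not the study's filter: the state is the world-frame source azimuth, the measurement is an ITD produced by a Woodworth-style head model, and the head orientation from the rotating chair is treated as known; head radius, noise levels and the rotation profile are all assumptions.

```python
import numpy as np

# One-state EKF sketch: track a static source azimuth from noisy ITDs while
# the head (dummy head on a rotating chair) turns through known angles.
a, c = 0.09, 343.0                                   # head radius (m), speed of sound (m/s)
f_itd = lambda rel: (a / c) * (rel + np.sin(rel))    # Woodworth-style ITD model
df_itd = lambda rel: (a / c) * (1.0 + np.cos(rel))   # its derivative w.r.t. azimuth

rng = np.random.default_rng(5)
true_az = np.deg2rad(60.0)                           # world-frame source azimuth
heads = np.deg2rad(np.linspace(0.0, 180.0, 60))      # known chair orientations over time

x, P = 0.0, np.deg2rad(90.0) ** 2                    # initial estimate and variance
Q, R = np.deg2rad(0.5) ** 2, (20e-6) ** 2            # process and measurement noise

for head in heads:
    z = f_itd(true_az - head) + rng.normal(0.0, np.sqrt(R))   # simulated ITD measurement
    P = P + Q                                        # predict (source assumed static)
    H = df_itd(x - head)                             # linearized measurement model
    K = P * H / (H * P * H + R)                      # Kalman gain
    x = x + K * (z - f_itd(x - head))                # update state
    P = (1.0 - K * H) * P                            # update variance

print(f"true azimuth: {np.degrees(true_az):.1f} deg, EKF estimate: {np.degrees(x):.1f} deg")
```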

  8. Wave field synthesis of moving virtual sound sources with complex radiation properties.

    PubMed

    Ahrens, Jens; Spors, Sascha

    2011-11-01

    An approach to the synthesis of moving virtual sound sources with complex radiation properties in wave field synthesis is presented. The approach exploits the fact that any stationary sound source of finite spatial extent radiates spherical waves at sufficient distance. The angular dependency of the radiation properties of the source under consideration is reflected by the amplitude and phase distribution on the spherical wave fronts. The sound field emitted by a uniformly moving monopole source is derived and the far-field radiation properties of the complex virtual source under consideration are incorporated in order to derive a closed-form expression for the loudspeaker driving signal. The results are illustrated via numerical simulations of the synthesis of the sound field of a sample moving complex virtual source.

  9. Forced sound transmission through a finite-sized single leaf panel subject to a point source excitation.

    PubMed

    Wang, Chong

    2018-03-01

    In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite sized panel. The forced sound transmission performance that predominates in the frequency range below the coincidence frequency is the focus. Given the point source located along the centerline of the panel, the forced sound transmission coefficient is derived through introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal incidence sound transmission loss expression for plane incident waves can be used if the distance d between the source and the panel and the panel surface area S satisfy d/√S > 0.5. When d/√S ≈ 0.1, the diffuse field sound transmission loss expression may be a good approximation. An empirical expression for d/√S = 0 is also given.
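
    A small worked sketch of applying the distance criterion quoted above is given below; it is purely illustrative. The textbook normal-incidence mass law and a common field-incidence approximation stand in for the paper's derived expressions, and the thresholds simply mirror the abstract.

```python
import math

# Illustrative only: pick a transmission-loss estimate from the ratio of the
# source-panel distance d to the panel's characteristic dimension sqrt(S).
# The mass-law formulas are standard approximations, not the paper's results.
rho0_c = 415.0                                  # characteristic impedance of air, Pa s/m

def tl_normal(m, f):
    """Normal-incidence mass law, dB (m in kg/m^2, f in Hz)."""
    return 20.0 * math.log10(math.pi * f * m / rho0_c)

def tl_field(m, f):
    """Common field-incidence approximation, dB."""
    return tl_normal(m, f) - 5.0

def forced_tl(m, f, d, S):
    ratio = d / math.sqrt(S)
    if ratio > 0.5:
        return tl_normal(m, f), "normal-incidence estimate"
    if ratio <= 0.15:
        return tl_field(m, f), "diffuse/field-incidence estimate"
    return None, "intermediate range: use the full point-source expression"

m, f = 10.0, 500.0                              # 10 kg/m^2 panel at 500 Hz (assumed)
for d, S in [(2.0, 1.5), (0.3, 4.0)]:
    tl, label = forced_tl(m, f, d, S)
    extra = f", TL ~ {tl:.1f} dB" if tl is not None else ""
    print(f"d = {d} m, S = {S} m^2 -> {label}{extra}")
```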

  10. External Acoustic Liners for Multi-Functional Aircraft Noise Reduction

    NASA Technical Reports Server (NTRS)

    Jones, Michael G. (Inventor); Czech, Michael J. (Inventor); Howerton, Brian M. (Inventor); Thomas, Russell H. (Inventor); Nark, Douglas M. (Inventor)

    2017-01-01

    Acoustic liners for aircraft noise reduction include one or more chambers that are configured to provide a pressure-release surface such that the engine noise generation process is inhibited and/or absorb sound by converting the sound into heat energy. The size and shape of the chambers can be selected to inhibit the noise generation process and/or absorb sound at selected frequencies.
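
    As a hedged illustration of how chamber dimensions map to target frequencies (textbook resonator relations, not formulas from the patent): a quarter-wave chamber of depth L resonates near c/(4L), and a Helmholtz-type chamber with neck area A, effective neck length L_eff and cavity volume V resonates near (c/2π)·sqrt(A/(V·L_eff)).

```python
import math

# Textbook resonance estimates only; chamber dimensions below are made up.
c = 343.0   # speed of sound, m/s

def quarter_wave(L):
    return c / (4.0 * L)

def helmholtz(A, V, L_eff):
    return (c / (2.0 * math.pi)) * math.sqrt(A / (V * L_eff))

print(f"30 mm deep quarter-wave chamber: ~{quarter_wave(0.030):.0f} Hz")
print(f"Helmholtz chamber (A = 50 mm^2, V = 10 cm^3, L_eff = 6 mm): "
      f"~{helmholtz(50e-6, 10e-6, 6e-3):.0f} Hz")
```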

  11. Effects of external and gap mean flows on sound transmission through a double-wall sandwich panel

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Sebastian, Alexis

    2015-05-01

    This paper studies analytically the effects of an external mean flow and an internal gap mean flow on sound transmission through a double-wall sandwich panel lined with poroelastic materials. Biot's theory is employed to describe wave propagation in poroelastic materials, and the transfer matrix method with three types of boundary conditions is applied to solve the system simultaneously. The random incidence transmission loss in a diffuse field is calculated numerically, and the limiting angle of incidence due to total internal reflection is discussed in detail. The numerical predictions suggest that the sound insulation performance of such a double-wall panel is enhanced considerably by both external and gap mean flows particularly in the high-frequency range. Similar effects on transmission loss are observed for the two mean flows. It is shown that the effect of the gap mean flow depends on flow velocity, flow direction, gap depth and fluid properties and also that the fluid properties within the gap appear to influence the transmission loss more effectively than the gap flow. Despite the implementation difficulty in practice, an internal gap flow provides more design space for tuning the sound insulation performance of a double-wall sandwich panel and has great potential for active/passive noise control.

  12. Noise Source Identification in a Reverberant Field Using Spherical Beamforming

    NASA Astrophysics Data System (ADS)

    Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang

    Identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of that coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to the exterior sound field, reflections are added to the sound field. Therefore, the source locations estimated by conventional methods may have unacceptable errors. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in a reverberant field.

  13. A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)

    1996-01-01

    The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic analogy based calculation which regards the surface shear as an acoustically compact dipole source of sound.

  14. CLIVAR Mode Water Dynamics Experiment (CLIMODE), Fall 2006 R/V Oceanus Voyage 434, November 16, 2006-December 3, 2006

    DTIC Science & Technology

    2007-12-01

    except for the dive zero time which needed to be programmed during the cruise when the deployment schedule dates were confirmed. ACM - Aanderaa ACM...guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally...battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November

  15. Active implant for optoacoustic natural sound enhancement

    NASA Astrophysics Data System (ADS)

    Mohrdiek, S.; Fretz, M.; Jose James, R.; Spinola Durante, G.; Burch, T.; Kral, A.; Rettenmaier, A.; Milani, R.; Putkonen, M.; Noell, W.; Ortsiefer, M.; Daly, A.; Vinciguerra, V.; Garnham, C.; Shah, D.

    2017-02-01

    This paper summarizes the results of an EU project called ACTION: ACTive Implant for Optoacoustic Natural sound enhancement. The project is based on a recent discovery that relatively low levels of pulsed infrared laser light are capable of triggering activity in hair cells of the partially hearing (hearing impaired) cochlea and vestibule. The aim here is the development of a self-contained, smart, highly miniaturized system to provide optoacoustic stimuli directly from an array of miniature light sources in the cochlea. Optoacoustic compound action potentials (oaCAP) are generated by the light source fully inserted into the unmodified cochlea. Previously, the same could only be achieved with external light sources connected to a fiber optic light guide. This feat is achieved by integrating custom made VCSEL arrays at a wavelength of about 1550 nm onto small flexible substrates. The laser light is collimated by a specially designed silicon-based ultra-thin lens (165 µm thick) to obtain the energy density required for the generation of oaCAP signals. A dramatic miniaturization of the packaging technology is also required. A long-term biocompatible and hermetic sapphire housing with a size of less than 1 cubic millimeter and miniature Pt/PtIr feedthroughs is developed, using a low-temperature laser-assisted process for sealing. A thin-film biofouling-protection layer is developed to avoid fibrinogen and cell growth on the system.

  16. Statistics of natural reverberation enable perceptual separation of sound and space

    PubMed Central

    Traer, James; McDermott, Josh H.

    2016-01-01

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730

  17. Statistics of natural reverberation enable perceptual separation of sound and space.

    PubMed

    Traer, James; McDermott, Josh H

    2016-11-29

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
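
    The decay regularity described above is easy to mimic in a toy synthesis, which is how typical and atypical IRs can be contrasted in simulation. The sketch below (band edges and decay times are illustrative assumptions, not the measured statistics) builds a synthetic IR from octave-band noise with exponentially decaying envelopes, slowest at mid frequencies.

```python
import numpy as np

# Synthetic impulse response with frequency-dependent exponential decay.
# Band edges and RT60 values are assumptions chosen to resemble the trend
# reported above (mid bands reverberate longest).
fs, dur = 16000, 1.5
n = int(fs * dur)
t = np.arange(n) / fs
rng = np.random.default_rng(6)

bands = [(125, 250), (250, 500), (500, 1000), (1000, 2000), (2000, 4000), (4000, 7900)]
rt60s = [0.5, 0.8, 1.0, 1.0, 0.7, 0.4]                 # seconds

freqs = np.fft.rfftfreq(n, 1.0 / fs)
ir = np.zeros(n)
for (lo, hi), rt60 in zip(bands, rt60s):
    spec = np.fft.rfft(rng.standard_normal(n))
    spec[(freqs < lo) | (freqs >= hi)] = 0.0           # crude brick-wall band limiting
    band_noise = np.fft.irfft(spec, n)
    envelope = 10.0 ** (-3.0 * t / rt60)               # -60 dB after rt60 seconds
    ir += band_noise * envelope

ir /= np.max(np.abs(ir))
print(f"synthetic IR: {len(ir)} samples, peak normalized to 1.0")
```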

  18. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.

  19. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

    A fundamental role in vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment was used (active versus quiet speaker) to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of the sound source. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of the gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed and mean path angle to the speaker will also be reported. Results suggest strong localization of the round goby to a sound source, with some differential sound specificity.

  20. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, needs for invisible sound sources and very specific acoustical environment make the use of open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.
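
    The reproduction step above amounts, at each frequency, to a regularized least-squares pressure-matching problem; the sketch below illustrates that step with random placeholders standing in for the measured transfer paths (it is not the mock-up's actual data or code).

```python
import numpy as np

# Per-frequency pressure matching: H maps source signals to control-microphone
# pressures, p_t is the target field at the microphones, q the regularized
# least-squares source vector. H and p_t are random placeholders here.
rng = np.random.default_rng(7)
n_mic, n_src = 80, 32
H = rng.standard_normal((n_mic, n_src)) + 1j * rng.standard_normal((n_mic, n_src))
p_t = rng.standard_normal(n_mic) + 1j * rng.standard_normal(n_mic)

beta = 1e-1                                      # Tikhonov regularization
q = np.linalg.solve(H.conj().T @ H + beta * np.eye(n_src), H.conj().T @ p_t)

err = np.linalg.norm(H @ q - p_t) / np.linalg.norm(p_t)
print(f"normalized reproduction error at this frequency: {20*np.log10(err):.1f} dB")
```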

  1. Investigation of noise sources and propagation in external gear pumps

    NASA Astrophysics Data System (ADS)

    Opperwall, Timothy J.

    Oil hydraulics is widely accepted as the best technology for transmitting power in many engineering applications due to its advantages in power density, control, layout flexibility, and efficiency. Due to these advantages, hydraulic systems are present in many different applications including construction, agriculture, aerospace, automotive, forestry, medical, and manufacturing, just to name a few. Many of these applications place the systems in close proximity to human operators and passengers, where noise is one of the main constraints to the acceptance and spread of this technology. As a key component in power transfer, displacement machines can be major sources of noise in hydraulic systems. Thus, investigating the sources of noise and discovering strategies to reduce noise is a key part of applying fluid power systems to a wider range of applications, as well as improving the performance of current hydraulic systems. The present research aims to leverage previous efforts and develop new models and experimental techniques on the topic of noise generation caused by hydrostatic units. This requires challenging and surpassing currently accepted methods in the understanding of noise in fluid power systems. This research seeks to expand on the previous experimental and modeling efforts by directly considering the effect that system and component design changes have on the total sound power and the sound frequency components emitted from displacement machines and the attached lines. The case of external gear pumps is taken as reference for a new model to understand the generation and transmission of noise from the sources out to the environment. The lumped parameter model HYGESim (HYdraulic GEar machine Simulator) was expanded to investigate the dynamic forces on the solid bodies caused by the pump operation and to predict interactions with the attached system. Vibration and sound radiation were then predicted using a combined finite element and boundary element vibro-acoustic model, together with additional models for system components, to better understand the essential problems of noise generation in hydraulic systems. This model is a step forward for the field due to the coupling of an advanced internal model of pump operation to a detailed vibro-acoustic model. Several experimental studies were also completed in order to advance the current science. The first study validated the pump model in terms of outlet pressure ripple prediction through comparison to experimentally measured results for the reference pump as well as prototype pumps designed for low outlet pressure ripple. The second study focused on the air-borne noise through sound pressure and intensity measurements on reference and prototype pumps at steady-state operating conditions. A third study over a wide range of operating speeds and pressures was completed to explore the impact of operating condition and system design in greater detail through measuring noise and vibration in the working fluid, the system structures, and the air. Applying the knowledge gained through experimental and simulation studies has brought new advances in the understanding of the physics of noise generation and propagation in hydraulic components and systems. The combined simulation and modeling approach aims to clearly understand the different contributions from noise sources and surpasses previous methods that focus on the outlet pressure ripple alone as a source of noise. The application of the new modeling and experimental approach allows for new advances which directly contribute to advancing the science of noise in hydraulic applications and the design of new, quieter hydrostatic units and hydraulic systems.

  2. Measuring the Speed of Sound Using Only a Computer

    ERIC Educational Resources Information Center

    Bin, Mo

    2013-01-01

    corresponding time to cover that distance. But sound travels rapidly, covering about one meter in three milliseconds. This challenge can be met by using only a computer and an external microphone. A fixed frequency (1000 Hz) is fed into the computer's speaker and the…
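
    The abstract is truncated here, but the underlying measurement reduces to timing how long a test signal takes to travel a known distance. As a rough illustration (not the article's exact procedure), the sketch below, with an assumed sample rate and distance, estimates the delay between an emitted test burst and the microphone recording by cross-correlation and converts it to a speed.

```python
# Hedged sketch: estimate the speed of sound from the arrival delay of a test
# signal at a microphone a known distance from the loudspeaker. The article's
# exact procedure is not reproduced; names and numbers are illustrative.
import numpy as np

fs = 48_000                 # sample rate (Hz), assumed
distance_m = 1.0            # speaker-to-microphone distance, assumed known
c_true = 343.0              # used only to synthesize fake data for the demo

# Synthesize a short click-like test signal and a delayed, noisy "recording".
n = fs // 2
t = np.arange(n) / fs
test = np.sin(2 * np.pi * 1000 * t) * np.exp(-t * 200)   # damped 1 kHz burst
delay_samples = int(round(distance_m / c_true * fs))
recorded = np.zeros(n)
recorded[delay_samples:] = test[:n - delay_samples]
recorded += 0.01 * np.random.randn(n)                     # measurement noise

# Estimate the delay as the lag that maximizes the cross-correlation.
corr = np.correlate(recorded, test, mode="full")
lag = np.argmax(corr) - (n - 1)
c_est = distance_m / (lag / fs)
print(f"estimated speed of sound: {c_est:.1f} m/s")
```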

  3. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is used to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two ways. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, in which a minimum power distortionless response beamformer localizes the sources and Tikhonov regularization extracts the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model required in the pressure-matching step, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
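
    The pressure-matching step described above amounts to a regularized least-squares problem. A minimal sketch under stated assumptions: G stands in for the room transfer matrix the paper builds from its SFA-based room model (random placeholder data here), and lambda_reg is an illustrative Tikhonov weight.

```python
# Minimal sketch of Tikhonov-regularized pressure matching for one frequency.
# G (matching points x loudspeakers) is the room/plant transfer matrix that the
# paper estimates with its SFA-based room response model; here it is random
# placeholder data, and lambda_reg is an illustrative value.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_speakers = 64, 32
G = rng.standard_normal((n_points, n_speakers)) + 1j * rng.standard_normal((n_points, n_speakers))
p_target = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)

lambda_reg = 1e-2 * np.linalg.norm(G, 2) ** 2     # regularization weight (assumed heuristic)

# w = (G^H G + lambda I)^(-1) G^H p_target
A = G.conj().T @ G + lambda_reg * np.eye(n_speakers)
w = np.linalg.solve(A, G.conj().T @ p_target)

reproduced = G @ w
err = np.linalg.norm(reproduced - p_target) / np.linalg.norm(p_target)
print(f"relative reproduction error: {err:.2f}")
```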

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballachey, B.E.; Kloecker, K.A.

    Ten moderately to heavily oiled sea otters were collected in Prince William Sound during the Exxon Valdez oil spill, and up to seven tissues from each were analyzed for hydrocarbons. Aliphatic and aromatic hydrocarbons were detected in all tissues. Concentrations of aromatic hydrocarbons in fat samples were an order of magnitude higher than in other tissues. The patterns of distribution of these hydrocarbons suggested crude oil as the source of contamination. However, there was variation among oiled otters in the concentrations of individual hydrocarbons, which may be due to differing proximate causes of mortality and varying lengths of time the sea otters survived following oil exposure. The concentrations of both aliphatic and aromatic hydrocarbons in the tissues of the ten oiled sea otters generally were higher than in tissues from 7 sea otters with no external oiling that were collected from Prince William Sound in 1989 and 1990, or from 12 sea otters collected from an area in southeast Alaska which had not experienced an oil spill.

  5. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

    To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
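
    For readers unfamiliar with the localization metric, a brief hedged sketch of the root-mean-square error computation follows; the azimuth values are invented for illustration and are not the study's data.

```python
# Hedged sketch: root-mean-square (RMS) localization error over a block of
# trials, as commonly used for loudspeaker-array localization tests. The
# degree values below are made up for illustration.
import numpy as np

target_az = np.array([-90, -75, -60, -45, -30, -15, 0, 15, 30, 45, 60, 75, 90])    # deg
response_az = np.array([-80, -75, -45, -45, -15, -15, 0, 30, 30, 60, 60, 90, 90])  # deg

rms_error = np.sqrt(np.mean((response_az - target_az) ** 2))
print(f"RMS localization error: {rms_error:.1f} deg")
```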

  6. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)

    PubMed Central

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-01-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574

  7. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera).

    PubMed

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-06-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear.

  8. Computational study of the interaction between a shock and a near-wall vortex using a weighted compact nonlinear scheme

    NASA Astrophysics Data System (ADS)

    Zuo, Zhifeng; Maekawa, Hiroshi

    2014-02-01

    The interaction between a moderate-strength shock wave and a near-wall vortex is studied numerically by solving the two-dimensional, unsteady compressible Navier-Stokes equations using a weighted compact nonlinear scheme with a simple low-dissipation advection upstream splitting method for flux splitting. Our main purpose is to clarify the development of the flow field and the generation of sound waves resulting from the interaction. The effects of the vortex-wall distance on the sound generation associated with variations in the flow structures are also examined. The computational results show that three sound sources are involved in this problem: (i) a quadrupolar sound source due to the shock-vortex interaction; (ii) a dipolar sound source due to the vortex-wall interaction; and (iii) a dipolar sound source due to unsteady wall shear stress. The sound field is the combination of the sound waves produced by all three sound sources. In addition to the interaction of the incident shock with the vortex, a secondary shock-vortex interaction is caused by the reflection of the reflected shock (MR2) from the wall. The flow field is dominated by the primary and secondary shock-vortex interactions. The generation mechanism of the newly discovered third sound, due to the MR2-vortex interaction, is presented. The pressure variations generated by (ii) become significant with decreasing vortex-wall distance. The sound waves caused by (iii) are extremely weak compared with those caused by (i) and (ii) and are negligible in the computed sound field.

  9. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding a corrected acoustic propagation time delay and path. The corrected time delay and path, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way, alternative to numerical TR, to reconstruct sound source signals in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming for reconstructing the sound source signal is also discussed.

  10. Localization of sound sources in a room with one microphone

    NASA Astrophysics Data System (ADS)

    Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre

    2017-08-01

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the difference between the signals received at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in a room that are occupied by a source. What is especially interesting about our solution is that we provide localization of the sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
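
    As a loose illustration of the idea (not the paper's actual compressive-sensing formulation), the sketch below matches an observed room response against a precomputed dictionary of per-voxel responses and reports the best-matching voxel; all transfer functions are random placeholders.

```python
# Toy stand-in for dictionary-based localization with a single microphone:
# each candidate voxel has a precomputed room transfer function (here random
# placeholders); the observed response is matched against the dictionary and
# the best-correlated voxel is reported. The paper's actual method solves a
# sparse recovery problem over the voxel grid; this is only a simplified proxy.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_freq = 500, 256
dictionary = rng.standard_normal((n_voxels, n_freq))   # simulated |RTF| per voxel (placeholder)
true_voxel = 123
observed = dictionary[true_voxel] + 0.1 * rng.standard_normal(n_freq)

# Normalized correlation between the observation and every dictionary atom.
D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
o = observed / np.linalg.norm(observed)
scores = D @ o
estimate = int(np.argmax(scores))
print(f"true voxel {true_voxel}, estimated voxel {estimate}")
```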

  11. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Abstract Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630

  12. Converting a Monopole Emission into a Dipole Using a Subwavelength Structure

    NASA Astrophysics Data System (ADS)

    Fan, Xu-Dong; Zhu, Yi-Fan; Liang, Bin; Cheng, Jian-chun; Zhang, Likun

    2018-03-01

    High-efficiency emission of multipoles is unachievable by a source much smaller than the wavelength, preventing compact acoustic devices for generating directional sound beams. Here, we present a primary scheme towards solving this problem by numerically and experimentally enclosing a monopole sound source in a structure with a dimension of around 1/10 of the sound wavelength to emit a dipolar field. The radiated sound power is found to be more than twice that of a bare dipole. Our study of efficient emission of directional low-frequency sound from a monopole source in a subwavelength space may have applications such as focused ultrasound for imaging, directional underwater sound beams, miniaturized sonar, etc.

  13. Effect of the spectrum of a high-intensity sound source on the sound-absorbing properties of a resonance-type acoustic lining

    NASA Astrophysics Data System (ADS)

    Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.

    2012-07-01

    Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from the results of interferometric measurements, the total sound pressure level of white noise or the maximal sound pressure level of a pure tone (at every oscillation frequency) needs to be identical to the total sound pressure level of the actual source at the site of acoustic lining on the channel wall.

  14. 3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment

    NASA Astrophysics Data System (ADS)

    Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil

    In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant ambience. In addition, spectral notch filtering and directional band boosting techniques are included to increase the perception of elevation. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources such as white noise, sound effects, speech, and music samples. The tests show that the degree of elevation perceived with the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
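
    One ingredient of the proposed method, spectral notch filtering, can be illustrated with a short hedged sketch; the 8 kHz centre frequency and Q factor below are assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch of the spectral-notch ingredient of elevation rendering: apply
# an IIR notch around a pinna-like frequency to a stereo signal. The 8 kHz
# centre frequency and Q are illustrative only and are not taken from the paper.
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 44_100
f_notch = 8_000.0          # assumed pinna-cue notch frequency (Hz)
q = 5.0                    # assumed notch quality factor

b, a = iirnotch(f_notch, q, fs=fs)

rng = np.random.default_rng(0)
stereo = rng.standard_normal((2, fs))          # 1 s of white noise, L/R channels
elevated = lfilter(b, a, stereo, axis=1)       # notch applied to both channels
print(elevated.shape)
```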

  15. Techniques and instrumentation for the measurement of transient sound energy flux

    NASA Astrophysics Data System (ADS)

    Watkinson, P. S.; Fahy, F. J.

    1983-12-01

    Techniques for evaluating the sound intensity distributions, and sound powers, of essentially continuous sources such as automotive engines, electric motors, production line machinery, furnaces, earth moving machinery and various types of process plants were studied. Although such systems are important sources of community disturbance and, to a lesser extent, of industrial health hazard, the most serious sources of hearing hazard in industry are machines operating on an impact principle, such as drop forges, hammers and punches. Controlled experiments to identify major noise source regions and mechanisms are difficult because it is normally impossible to install such machines in quiet, anechoic environments. The potential for sound intensity measurement to provide a means of overcoming these difficulties has given promising results, indicating the possibility of separating directly radiated and reverberant sound fields. However, because of the complexity of transient sound fields, a fundamental investigation is necessary to establish the practicability of intensity field decomposition, which is basic to source characterization techniques.

  16. Perceptual constancy in auditory perception of distance to railway tracks.

    PubMed

    De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L

    2013-07-01

    Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative, because prior information or experience of the sound source (its source power, its spectrum, and the typical speed at which it moves) is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners, namely whether they are indoors or outdoors. It is shown that, when asked to estimate the distance to the track of a railway, listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.

  17. Recent paleoseismicity record in Prince William Sound, Alaska, USA

    NASA Astrophysics Data System (ADS)

    Kuehl, Steven A.; Miller, Eric J.; Marshall, Nicole R.; Dellapenna, Timothy M.

    2017-12-01

    Sedimentological and geochemical investigation of sediment cores collected in the deep (>400 m) central basin of Prince William Sound, along with geochemical fingerprinting of sediment source areas, is used to identify earthquake-generated sediment gravity flows. Prince William Sound receives sediment from two distinct sources: from offshore (primarily the Copper River) through Hinchinbrook Inlet, and from sources within the Sound (primarily Columbia Glacier). These sources are found to have diagnostic elemental ratios indicative of provenance; Copper River Basin sediments were significantly higher in Sr/Pb and Cu/Pb, whereas Prince William Sound sediments were significantly higher in K/Ca and Rb/Sr. Within the past century, sediment gravity flows deposited within the deep central channel of Prince William Sound have robust geochemical (provenance) signatures that can be correlated with known moderate to large earthquakes in the region. Given the thick Holocene sequence in the Sound (~200 m) and correspondingly high sedimentation rates (>1 cm year^-1), this relationship suggests that sediments within the central basin of Prince William Sound may contain an extraordinarily high-resolution record of paleoseismicity in the region.

  18. Action planning and predictive coding when speaking

    PubMed Central

    Wang, Jun; Mathalon, Daniel H.; Roach, Brian J.; Reilly, James; Keedy, Sarah; Sweeney, John A.; Ford, Judith M.

    2014-01-01

    Across the animal kingdom, sensations resulting from an animal's own actions are processed differently from sensations resulting from external sources, with self-generated sensations being suppressed. A forward model has been proposed to explain this process across sensorimotor domains. During vocalization, reduced processing of one's own speech is believed to result from a comparison of speech sounds to corollary discharges of intended speech production generated from efference copies of commands to speak. Until now, anatomical and functional evidence validating this model in humans has been indirect. Using EEG with anatomical MRI to facilitate source localization, we demonstrate that inferior frontal gyrus activity during the 300 ms before speaking was associated with suppressed processing of speech sounds in auditory cortex around 100 ms after speech onset (N1). These findings indicate that an efference copy from speech areas in prefrontal cortex is transmitted to auditory cortex, where it is used to suppress processing of anticipated speech sounds. About 100 ms after N1, a subsequent auditory cortical component (P2) was not suppressed during talking. The combined N1 and P2 effects suggest that although sensory processing is suppressed as reflected in N1, perceptual gaps are filled as reflected in the lack of P2 suppression, explaining the discrepancy between sensory suppression and preserved sensory experiences. These findings, coupled with the coherence between relevant brain regions before and during speech, provide new mechanistic understanding of the complex interactions between action planning and sensory processing that provide for differentiated tagging and monitoring of one's own speech, processes disrupted in neuropsychiatric disorders. PMID:24423729

  19. Externalizing behavior from early childhood to adolescence: Prediction from inhibition, language, parenting, and attachment.

    PubMed

    Roskam, Isabelle

    2018-03-22

    The aim of the current research was to disentangle four theoretically sound models of externalizing behavior etiology (i.e., attachment, language, inhibition, and parenting) by testing their relation with behavioral trajectories from early childhood to adolescence. The aim was achieved through a 10-year prospective longitudinal study conducted over five waves with 111 referred children aged 3 to 5 years at the onset of the study. Clinical referral was primarily based on externalizing behavior. A multimethod (questionnaires, testing, and observations) approach was used to estimate the four predictors in early childhood. In line with previous studies, the results show a significant decrease of externalizing behavior from early childhood to adolescence. The decline was negatively related to mothers' coercive parenting and positively related to attachment security in early childhood, but not related to inhibition and language. The study has implications for research into the etiology of externalizing behavior, recommending that hypotheses from various theoretically sound models be gathered and put into competition with one another. The study also has implications for clinical practice by providing clear indications for prevention and early intervention.

  20. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    In order to investigate multiple sound source localization in the presence of room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm that uses gammatone auditory filtering with controlled selection of frequency components and detection of the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the frequency band of interest in the multichannel bandpass filtering stage. Detection of the direct sound component of the source is also proposed to suppress the interference of room reverberation; its merits are fast calculation and the avoidance of more complex de-reverberation processing algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes for every speech frame. In both simulations and experiments in a real reverberant room, the proposed method shows good performance. Dynamic multiple sound source localization experiments indicate that the average absolute azimuth error of the proposed algorithm is smaller and that the histogram result has higher angular resolution.
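
    For context, a minimal narrowband MUSIC sketch for a uniform linear array is given below; the paper's broadband variant forms such pseudo-spectra per gammatone channel and combines them with amplitude weights, which is not reproduced here. Array geometry, frequency, and noise level are illustrative assumptions.

```python
# Minimal narrowband MUSIC sketch for a uniform linear array (illustrative
# geometry and noise level; not the paper's broadband gammatone-weighted
# method, which sums such pseudo-spectra across auditory channels).
import numpy as np

rng = np.random.default_rng(2)
c, f = 343.0, 2000.0                     # speed of sound, analysis frequency (Hz)
m, d = 8, 0.04                           # number of mics, spacing (m)
mics = np.arange(m) * d
true_angles = np.deg2rad([-40.0, 25.0])  # two sources, assumed known in number
n_snap = 200

def steering(theta):
    return np.exp(-2j * np.pi * f * mics * np.sin(theta) / c)

# Simulate snapshots: random complex source amplitudes plus sensor noise.
A = np.column_stack([steering(t) for t in true_angles])
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
X = A @ S + 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))

R = X @ X.conj().T / n_snap
eigval, eigvec = np.linalg.eigh(R)       # eigenvalues in ascending order
En = eigvec[:, : m - 2]                  # noise subspace (2 sources assumed)

scan = np.deg2rad(np.linspace(-90.0, 90.0, 361))
p_music = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in scan])

# Pick the two largest local maxima of the pseudo-spectrum as DOA estimates.
peaks = [i for i in range(1, len(scan) - 1)
         if p_music[i] > p_music[i - 1] and p_music[i] > p_music[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: p_music[i])[-2:])
print("estimated DOAs (deg):", [round(float(np.rad2deg(scan[i])), 1) for i in top2])
```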

  1. Series expansions of rotating two and three dimensional sound fields.

    PubMed

    Poletti, M A

    2010-12-01

    The cylindrical and spherical harmonic expansions of oscillating sound fields rotating at a constant rate are derived. These expansions are a generalized form of the stationary sound field expansions. The derivations are based on the representation of interior and exterior sound fields using the simple source approach and determination of the simple source solutions with uniform rotation. Numerical simulations of rotating sound fields are presented to verify the theory.

  2. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate the neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are the neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  3. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines.

    PubMed

    Bevelhimer, Mark S; Deng, Z Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.
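
    The spherical versus cylindrical comparison rests on the standard geometric spreading laws, roughly 20*log10 and 10*log10 of the range ratio, respectively. A hedged illustration with made-up levels:

```python
# Illustration of the two geometric spreading models mentioned in the abstract:
# spherical spreading loses 20*log10(r2/r1) dB and cylindrical spreading loses
# 10*log10(r2/r1) dB between ranges r1 and r2. Levels and ranges are made up.
import math

def spl_at_range(spl_ref_db, r_ref_m, r_m, model="spherical"):
    """Project a reference SPL out to range r_m with a simple spreading law."""
    factor = 20.0 if model == "spherical" else 10.0
    return spl_ref_db - factor * math.log10(r_m / r_ref_m)

spl_ref = 140.0   # dB re 1 uPa at 10 m (illustrative vessel level)
for r in (10, 50, 100, 500):
    print(r, round(spl_at_range(spl_ref, 10, r, "spherical"), 1),
             round(spl_at_range(spl_ref, 10, r, "cylindrical"), 1))
```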

  4. Békésy's contributions to our present understanding of sound conduction to the inner ear.

    PubMed

    Puria, Sunil; Rosowski, John J

    2012-11-01

    In our daily lives we hear airborne sounds that travel primarily through the external and middle ear to the cochlear sensory epithelium. We also hear sounds that travel to the cochlea via a second sound-conduction route, bone conduction. This second pathway is excited by vibrations of the head and body that result from substrate vibrations, direct application of vibrational stimuli to the head or body, or vibrations induced by airborne sound. The sensation of bone-conducted sound is affected by the presence of the external and middle ear, but is not completely dependent upon their function. Measurements of the differential sensitivity of patients to airborne sound and direct vibration of the head are part of the routine battery of clinical tests used to separate conductive and sensorineural hearing losses. Georg von Békésy designed a careful set of experiments and pioneered many measurement techniques on human cadaver temporal bones, in physical models, and in human subjects to elucidate the basic mechanisms of air- and bone-conducted sound. Looking back one marvels at the sheer number of experiments he performed on sound conduction, mostly by himself without the aid of students or research associates. Békésy's work had a profound impact on the field of middle-ear mechanics and bone conduction fifty years ago when he received his Nobel Prize. Today many of Békésy's ideas continue to be investigated and extended, some have been supported by new evidence, some have been refuted, while others remain to be tested.

  5. Sound Source Localization Using Non-Conformal Surface Sound Field Transformation Based on Spherical Harmonic Wave Decomposition

    PubMed Central

    Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie

    2017-01-01

    Spherical microphone arrays have received increasing attention for their ability to locate a sound source at an arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located using spherical near-field acoustic holography. In the conventional sound field transformation based on the generalized Fourier transform, the reconstruction surface and holography surface are conformal surfaces. When the sound source lies on a cylindrical surface, it is difficult to locate using a spherical-surface conformal transform. A non-conformal sound field transformation is proposed in this paper that constructs a transfer matrix based on spherical harmonic wave decomposition, enabling the transformation from a spherical surface to a cylindrical surface using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, an experiment on sound source localization using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is thereby extended, and its localization ability is improved. PMID:28489065

  6. Sound Symbolism Facilitates Word Learning in 14-Month-Olds

    PubMed Central

    Imai, Mutsumi; Miyazaki, Michiko; Yeung, H. Henny; Hidaka, Shohei; Kantartzis, Katerina; Okada, Hiroyuki; Kita, Sotaro

    2015-01-01

    Sound symbolism, or the nonarbitrary link between linguistic sound and meaning, has often been discussed in connection with language evolution, where the oral imitation of external events links phonetic forms with their referents (e.g., Ramachandran & Hubbard, 2001). In this research, we explore whether sound symbolism may also facilitate synchronic language learning in human infants. Sound symbolism may be a useful cue particularly at the earliest developmental stages of word learning, because it potentially provides a way of bootstrapping word meaning from perceptual information. Using an associative word learning paradigm, we demonstrated that 14-month-old infants could detect Köhler-type (1947) shape-sound symbolism, and could use this sensitivity in their effort to establish a word-referent association. PMID:25695741

  7. Brain Areas Controlling Heart Rate Variability in Tinnitus and Tinnitus-Related Distress

    PubMed Central

    Vanneste, Sven; De Ridder, Dirk

    2013-01-01

    Background: Tinnitus is defined as an intrinsic sound perception that cannot be attributed to an external sound source. Distress in tinnitus patients is related to increased beta activity in the dorsal part of the anterior cingulate, and the amount of distress correlates with the activity of a network consisting of the amygdala, anterior cingulate cortex, insula, and parahippocampus. Previous research also revealed that distress is associated with a higher sympathetic (OS) tone in tinnitus patients and tinnitus suppression with increased parasympathetic (PS) tone. Methodology: The aim of the present study is to investigate the relationship between tinnitus distress and the autonomic nervous system and to find out which cortical areas mediate autonomic nervous system influences on tinnitus distress, using source-localized resting-state electroencephalogram (EEG) recordings and the electrocardiogram (ECG). Twenty-one tinnitus patients were included in this study. Conclusions: The results indicate that the dorsal and subgenual anterior cingulate, as well as the left and right insula, are important in the central control of heart rate variability in tinnitus patients. Whereas the sympathovagal balance is controlled by the subgenual and pregenual anterior cingulate cortex, the right insula controls sympathetic activity and the left insula parasympathetic activity. The perceived distress in tinnitus patients seems to be sympathetically mediated. PMID:23533644

  8. Sound reduction by metamaterial-based acoustic enclosure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Shanshan; Li, Pei; Zhou, Xiaoming

    In many practical systems, acoustic radiation control of noise sources contained within a finite volume by an acoustic enclosure is of great importance, but it is difficult to accomplish at low frequencies due to the enhanced acoustic-structure interaction. In this work, we propose to use acoustic metamaterials as the enclosure to efficiently reduce sound radiation at their negative-mass frequencies. Based on a circularly-shaped metamaterial model, sound radiation properties for either central or eccentric sources are analyzed by numerical simulations for structured metamaterials. The parametric analyses demonstrate that the barrier thickness, the cavity size, the source type, and the eccentricity of the source have a profound effect on the sound reduction. It is found that increasing the thickness of the metamaterial barrier is an efficient approach to achieve large sound reduction over the negative-mass frequencies. These results are helpful in designing highly efficient acoustic enclosures for the blockage of sound at low frequencies.

  9. Cross-correlation, triangulation, and curved-wavefront focusing of coral reef sound using a bi-linear hydrophone array.

    PubMed

    Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L

    2015-01-01

    A seven element, bi-linear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
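
    The pair-wise cross-correlation step can be illustrated with a short hedged sketch that estimates the time difference of arrival of a synthetic snap between two channels; the array geometry, sample rate, and the focusing stage of the paper are not reproduced.

```python
# Hedged sketch of the pair-wise cross-correlation step: estimate the time
# difference of arrival (TDOA) of a snap-like transient between two hydrophone
# channels. The array geometry, sample rate, and focusing/triangulation steps
# of the paper are not reproduced here.
import numpy as np

fs = 96_000
rng = np.random.default_rng(3)

# Synthesize a broadband "snap" arriving 37 samples later on channel B.
snap = rng.standard_normal(256) * np.hanning(256)
n = 4096
cha = np.zeros(n); cha[1000:1256] = snap
chb = np.zeros(n); chb[1037:1293] = snap
cha += 0.05 * rng.standard_normal(n)
chb += 0.05 * rng.standard_normal(n)

corr = np.correlate(chb, cha, mode="full")
tdoa_samples = np.argmax(corr) - (n - 1)
print(f"estimated TDOA: {tdoa_samples} samples = {tdoa_samples / fs * 1e6:.0f} microseconds")
```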

  10. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers: Comparative study.

    PubMed

    Cambi, Jacopo; Livi, Ludovica; Livi, Walter

    2017-05-01

    Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regards to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions.

  11. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    PubMed

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  12. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing

    PubMed Central

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088

  13. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing.

    PubMed

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.

  14. Simulating cartilage conduction sound to estimate the sound pressure level in the external auditory canal

    NASA Astrophysics Data System (ADS)

    Shimokura, Ryota; Hosoi, Hiroshi; Nishimura, Tadashi; Iwakura, Takashi; Yamanaka, Toshiaki

    2015-01-01

    When the aural cartilage is made to vibrate, it generates sound directly into the external auditory canal, where it can be clearly heard. Although the concept of cartilage conduction can be applied to various speech communication and music industry devices (e.g. smartphones, music players and hearing aids), the conductive performance of such devices has not yet been defined because the calibration methods differ from those currently used for air and bone conduction. Thus, the aim of this study was to simulate the cartilage conduction sound (CCS) using a head and torso simulator (HATS) and a model of aural cartilage (a polyurethane resin pipe) and compare the results with experimental ones. Using the HATS, we found that the simulated CCS at frequencies above 2 kHz corresponded to the average CCS measured from seven subjects. Using a model of skull bone and aural cartilage, we found that the simulated CCS at frequencies lower than 1.5 kHz agreed with the measured CCS. Therefore, a combination of these two methods can be used to estimate the CCS with high accuracy.

  15. Method for noninvasive determination of acoustic properties of fluids inside pipes

    DOEpatents

    None

    2016-08-02

    A method for determining the composition of fluids flowing through pipes from noninvasive measurements of acoustic properties of the fluid is described. The method includes exciting a first transducer located on the external surface of the pipe through which the fluid under investigation is flowing, to generate an ultrasound chirp signal, as opposed to conventional pulses. The chirp signal is received by a second transducer disposed on the external surface of the pipe opposing the location of the first transducer, from which the transit time through the fluid is determined and the sound speed of the ultrasound in the fluid is calculated. The composition of a fluid is calculated from the sound speed therein. The fluid density may also be derived from measurements of sound attenuation. Several signal processing approaches are described for extracting the transit time information from the data with the effects of the pipe wall having been subtracted.
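
    A minimal sketch of the transit-time step follows; it assumes a one-way path roughly equal to the pipe's inner diameter and leaves out the chirp correlation processing, wall-delay subtraction, and composition calibration curves described in the patent. The numbers are illustrative.

```python
# Minimal sketch of the transit-time step: with transmit and receive
# transducers on opposite sides of the pipe, a one-way path roughly equal to
# the inner diameter gives c = D / t. The chirp correlation processing,
# pipe-wall delay subtraction, and composition/density calibration curves of
# the patent are not reproduced; the numbers below are illustrative.
inner_diameter_m = 0.10        # pipe inner diameter (assumed known)
transit_time_s = 68.0e-6       # measured chirp transit time through the fluid (illustrative)

sound_speed = inner_diameter_m / transit_time_s
print(f"sound speed in fluid: {sound_speed:.0f} m/s")

# Composition would then be read off a calibration curve of sound speed versus
# mixture fraction for the fluid pair of interest (not included here).
```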

  16. Effects of End CAP and Aspect Ratio on Transmission of Sound across a Truss-Like Periodic Double Panel

    NASA Astrophysics Data System (ADS)

    EL-RAHEB, M.; WAGNER, P.

    2002-02-01

    Transmission of sound across 2-D truss-like periodic double panels separated by an air gap and in contact with an acoustic fluid on the external faces is analyzed. Each panel is made of repeated cells. Combining the transfer matrices of the unit cell forms a set of equations for the overall elastic frequency response. The acoustic pressure in the fluids is expressed using a source boundary element method. Adding rigid reflecting end caps confines the air in the gap between panels which influences sound transmission. Measured values of transmission loss differ from the 2-D model by the wide low-frequency dip of the mass-spring-mass or “msm” resonance also termed the “air gap resonance”. In this case, the panels act as rigid masses and the air gap acts as an adiabatic air spring. Results from the idealized 3-D and 2-D models, incorporating rigid cavities and elastic plates, reveal that the “msm” dip is absent in 2-D models radiating into a semi-infinite medium. The dip strengthens as aspect ratio approaches unity. Even when the dip disappears in 2-D, TL rises more steeply for frequencies above the “msm” frequency.
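
    For orientation, the “msm” dip discussed above is often estimated with the textbook mass-air-mass resonance formula; the sketch below uses that formula with illustrative panel masses and gap, and is not taken from the paper.

```python
# Commonly quoted textbook estimate of the mass-spring-mass ("air gap")
# resonance of a double panel, f = (1/2*pi)*sqrt(rho0*c0^2*(m1+m2)/(d*m1*m2)),
# included only to illustrate the mechanism; it is not taken from the paper,
# and the panel surface masses and gap below are illustrative.
import math

rho0, c0 = 1.21, 343.0       # air density (kg/m^3) and speed of sound (m/s)
m1, m2 = 3.0, 3.0            # panel surface masses (kg/m^2), assumed
d = 0.05                     # air gap (m), assumed

f_msm = math.sqrt(rho0 * c0 ** 2 * (m1 + m2) / (d * m1 * m2)) / (2 * math.pi)
print(f"estimated msm resonance: {f_msm:.0f} Hz")
```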

  17. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines

    DOE PAGES

    Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin C.

    2016-01-06

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels and natural underwater sound sources in static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where sounds created by flowing water are part of all measurements, both natural ambient and anthropogenic sources. Vessel sizes ranged from a small fishing boat with 60 hp outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, which were, on average, 40 dB greater than the sound created by an operating hydrokinetic turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed sound attenuation.

  18. Underwater auditory localization by a swimming harbor seal (Phoca vitulina).

    PubMed

    Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido

    2006-09-01

    The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half-circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound, the seal localized the sound sources with a mean deviation of 2.8 degrees, and in trials with the single sound, with a mean deviation of 4.5 degrees. In a second experiment, minimum audible angles of the stationary animal were found to be 9.8 degrees in front of and 9.7 degrees behind the seal's head.

  19. Personal sound zone reproduction with room reflections

    NASA Astrophysics Data System (ADS)

    Olik, Marek

    Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
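
    As background to the acoustic contrast metric used throughout the thesis, the sketch below computes the contrast and array effort for the classic acoustic contrast control (ACC) weight solution; the transfer matrices are random placeholders and the thesis's source-position optimization is not reproduced.

```python
# Sketch of the acoustic contrast metric and the standard acoustic contrast
# control (ACC) solution: the source weight vector is the dominant eigenvector
# of (Gd^H Gd + beta I)^(-1) Gb^H Gb, where Gb/Gd map source weights to
# pressures in the bright/dark zones. Transfer matrices here are random
# placeholders; the thesis's source-position optimization is not reproduced.
import numpy as np

rng = np.random.default_rng(4)
n_src, n_bright, n_dark = 8, 30, 30
Gb = rng.standard_normal((n_bright, n_src)) + 1j * rng.standard_normal((n_bright, n_src))
Gd = rng.standard_normal((n_dark, n_src)) + 1j * rng.standard_normal((n_dark, n_src))
beta = 1e-3                                           # regularization (assumed)

M = np.linalg.solve(Gd.conj().T @ Gd + beta * np.eye(n_src), Gb.conj().T @ Gb)
eigval, eigvec = np.linalg.eig(M)
w = eigvec[:, np.argmax(eigval.real)]                 # ACC source weights

contrast_db = 10 * np.log10(np.mean(np.abs(Gb @ w) ** 2) / np.mean(np.abs(Gd @ w) ** 2))
effort_db = 10 * np.log10(np.real(w.conj() @ w))      # array effort relative to unit input
print(f"acoustic contrast: {contrast_db:.1f} dB, array effort: {effort_db:.1f} dB")
```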

  20. Marine mammal audibility of selected shallow-water survey sources.

    PubMed

    MacGillivray, Alexander O; Racca, Roberto; Li, Zizheng

    2014-01-01

    Most attention about the acoustic effects of marine survey sound sources on marine mammals has focused on airgun arrays, with other common sources receiving less scrutiny. Sound levels above hearing threshold (sensation levels) were modeled for six marine mammal species and seven different survey sources in shallow water. The model indicated that odontocetes were most likely to hear sounds from mid-frequency sources (fishery, communication, and hydrographic systems), mysticetes from low-frequency sources (sub-bottom profiler and airguns), and pinnipeds from both mid- and low-frequency sources. High-frequency sources (side-scan and multibeam) generated the lowest estimated sensation levels for all marine mammal species groups.
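
    The sensation-level concept can be illustrated with a short hedged sketch: received band level minus the species' hearing threshold in that band; both the levels and the audiogram values below are placeholders, not the paper's modeled data.

```python
# Illustration of the sensation-level concept used in the paper: received
# band level minus the species' hearing threshold in the same band, floored
# at 0 dB. Both the received levels and the audiogram values below are
# placeholders, not the paper's data.
import numpy as np

bands_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
received_db = np.array([120, 118, 115, 110, 104, 98])        # dB re 1 uPa (placeholder)
threshold_db = np.array([115, 105, 96, 88, 80, 78])          # audiogram (placeholder)

sensation_level = np.maximum(received_db - threshold_db, 0.0)
for f, sl in zip(bands_hz, sensation_level):
    print(f"{f:5d} Hz: {sl:4.1f} dB above threshold")
```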

  1. Performance of active feedforward control systems in non-ideal, synthesized diffuse sound fields.

    PubMed

    Misol, Malte; Bloch, Christian; Monner, Hans Peter; Sinapius, Michael

    2014-04-01

    The acoustic performance of passive or active panel structures is usually tested in sound transmission loss facilities. A reverberant sending room, equipped with one or a number of independent sound sources, is used to generate a diffuse sound field excitation which acts as a disturbance source on the structure under investigation. The spatial correlation and coherence of such a synthesized non-ideal diffuse-sound-field excitation, however, might deviate significantly from the ideal case. This has consequences for the operation of an active feedforward control system which heavily relies on the acquisition of coherent disturbance source information. This work, therefore, evaluates the spatial correlation and coherence of ideal and non-ideal diffuse sound fields and considers the implications on the performance of a feedforward control system. The system under consideration is an aircraft-typical double panel system, equipped with an active sidewall panel (lining), which is realized in a transmission loss facility. Experimental results for different numbers of sound sources in the reverberation room are compared to simulation results of a comparable generic double panel system excited by an ideal diffuse sound field. It is shown that the number of statistically independent noise sources acting on the primary structure of the double panel system depends not only on the type of diffuse sound field but also on the sample lengths of the processed signals. The experimental results show that the number of reference sensors required for a defined control performance exhibits an inverse relationship to control filter length.
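
    For reference, the spatial correlation of an ideal diffuse pressure field between two points a distance r apart is sin(kr)/(kr), and measured sending-room coherence can be compared against this curve. The minimal sketch below evaluates that reference; the frequencies and spacing are arbitrary illustration values, not the facility's geometry.

```python
import numpy as np

def ideal_diffuse_correlation(f, r, c=343.0):
    """Spatial correlation of an ideal diffuse pressure field between two
    points a distance r apart: sin(kr)/(kr) with k = 2*pi*f/c."""
    kr = 2.0 * np.pi * np.asarray(f, dtype=float) * r / c
    return np.sinc(kr / np.pi)  # np.sinc(x) = sin(pi*x)/(pi*x)

# Illustrative values: sensor spacing of 0.2 m, 100 Hz to 2 kHz
freqs = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0])
print(ideal_diffuse_correlation(freqs, r=0.2).round(3))
```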

  2. The influence of underwater data transmission sounds on the displacement behaviour of captive harbour seals (Phoca vitulina).

    PubMed

    Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V

    2006-02-01

    To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 microPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 microPa) of the sounds were calculated, using a guideline sound propagation model for shallow water. The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.

  3. A description of externally recorded womb sounds in human subjects during gestation

    PubMed Central

    Daland, Robert; Kesavan, Kalpashri; Macey, Paul M.; Zeltzer, Lonnie; Harper, Ronald M.

    2018-01-01

Objective Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Study design Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Results Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500–5,000 Hz) and mid-frequency (100–500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10–100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra-abdominal space. Compared to commercially-marketed sounds, womb signals were dominated by bowel sounds, were of lower frequency, and showed more variation in intensity. Conclusions High-fidelity intra-abdominal or womb sounds during pregnancy can be recorded non-invasively. Recordings vary with gestational age, and show a predominance of low frequency noise and bowel sounds which are distinct from popular commercial products. Such recordings may be utilized to determine whether sounds influence preterm infant development in the NICU. PMID:29746604

  4. A description of externally recorded womb sounds in human subjects during gestation.

    PubMed

    Parga, Joanna J; Daland, Robert; Kesavan, Kalpashri; Macey, Paul M; Zeltzer, Lonnie; Harper, Ronald M

    2018-01-01

Reducing environmental noise benefits premature infants in neonatal intensive care units (NICU), but excessive reduction may lead to sensory deprivation, compromising development. Instead of minimal noise levels, environments that mimic intrauterine soundscapes may facilitate infant development by providing a sound environment reflecting fetal life. This soundscape may support autonomic and emotional development in preterm infants. We aimed to assess the efficacy and feasibility of external non-invasive recordings in pregnant women, endeavoring to capture intra-abdominal or womb sounds during pregnancy with electronic stethoscopes and build a womb sound library to assess sound trends with gestational development. We also compared these sounds to popular commercial womb sounds marketed to new parents. Intra-abdominal sounds from 50 mothers in their second and third trimester (13 to 40 weeks) of pregnancy were recorded for 6 minutes in a quiet clinic room with 4 electronic stethoscopes, placed in the right upper and lower quadrants, and left upper and lower quadrants of the abdomen. These recordings were partitioned into 2-minute intervals in three different positions: standing, sitting and lying supine. Maternal and gestational age, Body Mass Index (BMI) and time since last meal were collected during recordings. Recordings were analyzed using long-term average spectral and waveform analysis, and compared to sounds from non-pregnant abdomens and commercially-marketed womb sounds selected for their availability, popularity, and claims they mimic the intrauterine environment. Maternal sounds shared certain common characteristics, but varied with gestational age. With fetal development, the maternal abdomen filtered high (500-5,000 Hz) and mid-frequency (100-500 Hz) energy bands, but no change appeared in contributions from low-frequency signals (10-100 Hz) with gestational age. Variation appeared between mothers, suggesting a resonant chamber role for intra-abdominal space. Compared to commercially-marketed sounds, womb signals were dominated by bowel sounds, were of lower frequency, and showed more variation in intensity. High-fidelity intra-abdominal or womb sounds during pregnancy can be recorded non-invasively. Recordings vary with gestational age, and show a predominance of low frequency noise and bowel sounds which are distinct from popular commercial products. Such recordings may be utilized to determine whether sounds influence preterm infant development in the NICU.
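
    The exact analysis pipeline is not given in the abstract, but a long-term average spectrum of the kind described can be sketched with a Welch estimate; the sampling rate, segment length, band edges, and the white-noise stand-in signal below are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch

def long_term_average_spectrum(x, fs, seg_seconds=1.0):
    """Long-term average spectrum: Welch power spectral density in dB,
    averaged over seg_seconds-long segments of the recording."""
    f, pxx = welch(x, fs=fs, nperseg=int(seg_seconds * fs))
    return f, 10.0 * np.log10(pxx + 1e-20)

# Stand-in for a 2-minute abdominal recording (white noise placeholder)
fs = 16000
x = np.random.default_rng(0).standard_normal(120 * fs)

f, ltas = long_term_average_spectrum(x, fs)
bands = {"low 10-100 Hz": (10, 100),
         "mid 100-500 Hz": (100, 500),
         "high 500-5000 Hz": (500, 5000)}
for name, (lo, hi) in bands.items():
    print(name, round(float(ltas[(f >= lo) & (f < hi)].mean()), 1), "dB")
```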

  5. A new model of sensorimotor coupling in the development of speech.

    PubMed

    Westermann, Gert; Reck Miranda, Eduardo

    2004-05-01

    We present a computational model that learns a coupling between motor parameters and their sensory consequences in vocal production during a babbling phase. Based on the coupling, preferred motor parameters and prototypically perceived sounds develop concurrently. Exposure to an ambient language modifies perception to coincide with the sounds from the language. The model develops motor mirror neurons that are active when an external sound is perceived. An extension to visual mirror neurons for oral gestures is suggested.

  6. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach.

    PubMed

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-03-22

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.

  7. Monitoring Anthropogenic Ocean Sound from Shipping Using an Acoustic Sensor Network and a Compressive Sensing Approach †

    PubMed Central

    Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian

    2016-01-01

    Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches. PMID:27011187
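
    The paper's own algorithm and propagation model are not reproduced here; the sketch below only illustrates the general compressive-sensing idea of recovering a sparse set of source levels on a candidate grid from a handful of sensors, using a simple geometric-spreading dictionary and a lasso solver. All geometry, levels, and solver settings are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Candidate source positions along a hypothetical 10 km shipping lane (1-D grid)
grid = np.linspace(0.0, 10_000.0, 200)                    # metres
sensors = np.array([1500.0, 4200.0, 6800.0, 9100.0])      # four hydrophones

# Placeholder propagation dictionary: simple geometric spreading ~ 1/range
A = 1.0 / (np.abs(sensors[:, None] - grid[None, :]) + 50.0)

# Two true sources -> the noise map is sparse in the grid representation
s_true = np.zeros(grid.size)
s_true[[40, 150]] = [3.0, 5.0]
y = A @ s_true + 1e-4 * rng.standard_normal(sensors.size)  # simulated readings

# Sparse recovery of source levels; with only four sensors the localisation is
# coarse, mirroring the limited spatial resolution noted in the paper
est = Lasso(alpha=1e-5, positive=True, max_iter=100_000).fit(A, y)
print("grid cells with estimated level > 0.1:", np.flatnonzero(est.coef_ > 0.1))
```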

  8. Underwater Acoustic Source Localisation Among Blind and Sighted Scuba Divers

    PubMed Central

    Cambi, Jacopo; Livi, Ludovica; Livi, Walter

    2017-01-01

Objectives Many blind individuals demonstrate enhanced auditory spatial discrimination or localisation of sound sources in comparison to sighted subjects. However, this hypothesis has not yet been confirmed with regard to underwater spatial localisation. This study therefore aimed to investigate underwater acoustic source localisation among blind and sighted scuba divers. Methods This study took place between February and June 2015 in Elba, Italy, and involved two experimental groups of divers with either acquired (n = 20) or congenital (n = 10) blindness and a control group of 30 sighted divers. Each subject took part in five attempts at an underwater acoustic source localisation task, in which the divers were requested to swim to the source of a sound originating from one of 24 potential locations. The control group had their sight obscured during the task. Results The congenitally blind divers demonstrated significantly better underwater sound localisation compared to the control group or those with acquired blindness (P = 0.0007). In addition, there was a significant correlation between years of blindness and underwater sound localisation (P < 0.0001). Conclusion Congenital blindness was found to positively affect the ability of a diver to recognise the source of a sound in an underwater environment. As the correct localisation of sounds underwater may help individuals to avoid imminent danger, divers should perform sound localisation tests during training sessions. PMID:28690888

  9. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl

    PubMed Central

    Baxter, Caitlin S.; Takahashi, Terry T.

    2013-01-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801

  10. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    NASA Technical Reports Server (NTRS)

    Bargar, Robin

    1995-01-01

The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages and message interplay that we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.

  11. Comparison of sound reproduction using higher order loudspeakers and equivalent line arrays in free-field conditions.

    PubMed

    Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D

    2014-07-01

Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.

  12. Structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.

    1994-01-01

The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.
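
    For context, the far-field step described above rests on Lighthill's acoustic analogy, which in its standard form (not quoted from the paper) reads:

```latex
\frac{\partial^{2}\rho'}{\partial t^{2}} - c_{0}^{2}\,\nabla^{2}\rho'
  = \frac{\partial^{2} T_{ij}}{\partial x_{i}\,\partial x_{j}},
\qquad
T_{ij} = \rho u_{i}u_{j} + \left(p' - c_{0}^{2}\rho'\right)\delta_{ij} - \tau_{ij},
```

    where ρ′ is the density perturbation, c₀ the ambient sound speed, and T_ij the Lighthill stress tensor; the simulated time-dependent near field supplies T_ij, from which the far-field sound is then evaluated.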

  13. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

Chu, Shao-sheng R.; Allen, Christopher S.

    2009-01-01

Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields an estimate of the sound power of the source. The reverberation time T60 of the mockup interior had been modified to match the reverberation levels of the ISS US Lab interior for the speech frequency bands, i.e., 0.5, 1, 2, and 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of the Thinsulate was modeled with three methods: the Sabine equation with the measured mockup interior reverberation time T60, a layup model based on past impedance tube testing, and the layup model plus an air absorption correction. The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by our SEA model match well with measurements for our CM mockup, despite its more complicated shape. Additionally in FY09, a background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and the associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
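
    The Sabine-based method mentioned above ties the measured reverberation time to the total absorption; in its standard form (with the optional air-absorption term corresponding to the third modeling method) it reads:

```latex
T_{60} = \frac{0.161\,V}{\sum_{i} S_{i}\,\alpha_{i} + 4mV},
```

    where V is the mockup volume in cubic metres, S_i and α_i are the surface areas and absorption coefficients (e.g., of the Thinsulate layup), and m is the air attenuation coefficient; dropping the 4mV term recovers the plain Sabine equation used with the measured T60.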

  14. Underwater sound of rigid-hulled inflatable boats.

    PubMed

    Erbe, Christine; Liong, Syafrin; Koessler, Matthew Walter; Duncan, Alec J; Gourlay, Tim

    2016-06-01

    Underwater sound of rigid-hulled inflatable boats was recorded 142 times in total, over 3 sites: 2 in southern British Columbia, Canada, and 1 off Western Australia. Underwater sound peaked between 70 and 400 Hz, exhibiting strong tones in this frequency range related to engine and propeller rotation. Sound propagation models were applied to compute monopole source levels, with the source assumed 1 m below the sea surface. Broadband source levels (10-48 000 Hz) increased from 134 to 171 dB re 1 μPa @ 1 m with speed from 3 to 16 m/s (10-56 km/h). Source power spectral density percentile levels and 1/3 octave band levels are given for use in predictive modeling of underwater sound of these boats as part of environmental impact assessments.

  15. Control of Toxic Chemicals in Puget Sound, Phase 3: Study Of Atmospheric Deposition of Air Toxics to the Surface of Puget Sound

    DTIC Science & Technology

    2007-01-01

deposition directly to Puget Sound was an important source of PAHs, polybrominated diphenyl ethers (PBDEs), and heavy metals. In most cases, atmospheric ... versus Atmospheric Fluxes ... PAH Source Apportionment ... temperature inversions) on air quality during the wet season. A semi-quantitative apportionment study permitted a first-order characterization of source

  16. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant (“Cochlear Implant Performance in Realistic Listening Environments,” Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor). The other ...

  17. Effect of external pressure environment on the internal noise level due to a source inside a cylindrical tank

    NASA Technical Reports Server (NTRS)

    Clevenson, S. A.; Roussos, L. A.

    1984-01-01

A small cylindrical tank was used to study how an exterior environment of either atmospheric (sea level) pressure or vacuum affects the noise environment within the tank. Experimentally determined absorption coefficients were used to calculate the transmission loss, transmissibility coefficients, and interior sound pressure (noise) level differences. The noise level differences were also measured directly for the two exterior environments and compared to various analytical approximations, with limited agreement. Trend study curves indicated that if the tank transmission loss is above 25 dB, the difference in interior noise level between the vacuum and ambient pressure conditions is less than 2 dB.

  18. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays > 10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.

  19. Acoustic signatures of sound source-tract coupling.

    PubMed

    Arneodo, Ezequiel M; Perl, Yonatan Sanz; Mindlin, Gabriel B

    2011-04-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated to the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced "frequency jumps," enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. ©2011 American Physical Society

  20. Acoustic signatures of sound source-tract coupling

    PubMed Central

    Arneodo, Ezequiel M.; Perl, Yonatan Sanz; Mindlin, Gabriel B.

    2014-01-01

    Birdsong is a complex behavior, which results from the interaction between a nervous system and a biomechanical peripheral device. While much has been learned about how complex sounds are generated in the vocal organ, little has been learned about the signature on the vocalizations of the nonlinear effects introduced by the acoustic interactions between a sound source and the vocal tract. The variety of morphologies among bird species makes birdsong a most suitable model to study phenomena associated to the production of complex vocalizations. Inspired by the sound production mechanisms of songbirds, in this work we study a mathematical model of a vocal organ, in which a simple sound source interacts with a tract, leading to a delay differential equation. We explore the system numerically, and by taking it to the weakly nonlinear limit, we are able to examine its periodic solutions analytically. By these means we are able to explore the dynamics of oscillatory solutions of a sound source-tract coupled system, which are qualitatively different from those of a sound source-filter model of a vocal organ. Nonlinear features of the solutions are proposed as the underlying mechanisms of observed phenomena in birdsong, such as unilaterally produced “frequency jumps,” enhancement of resonances, and the shift of the fundamental frequency observed in heliox experiments. PMID:21599213
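
    The authors' equations are not reproduced in the abstract; the sketch below is only a generic illustration of how tract reflections turn a source model into a delay differential equation, using a van der Pol-style oscillator with a delayed feedback term and purely illustrative parameters.

```python
import numpy as np

# Generic sketch only (not the authors' model): a van der Pol-style source with
# a delayed feedback term standing in for the pressure reflected by the tract.
fs = 44100.0                        # integration rate, Hz
dt = 1.0 / fs
tau = 0.5e-3                        # hypothetical round-trip tract delay, s
delay_n = int(round(tau * fs))
mu, omega, g = 300.0, 2 * np.pi * 1500.0, 0.3   # illustrative parameters

n_steps = int(0.05 * fs)            # simulate 50 ms
x = np.zeros(n_steps + delay_n)     # leading zeros provide the delay history
v = 0.0
x[delay_n] = 1e-3                   # small initial displacement
for n in range(delay_n, n_steps + delay_n - 1):
    feedback = g * x[n - delay_n]                    # delayed tract reflection
    a = mu * (1.0 - x[n] ** 2) * v - omega ** 2 * (x[n] - feedback)
    v += a * dt                                      # semi-implicit Euler step
    x[n + 1] = x[n] + v * dt

print("amplitude over the last 10 ms:", round(float(np.abs(x[-int(0.01 * fs):]).max()), 3))
```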

  1. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

In a sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are obtained by an iterative solving process; then, the corresponding equivalent source strengths of the source of interest are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both the time and space domains. An experiment with two speakers in a semi-anechoic chamber further demonstrates the effectiveness of the proposed method.
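
    The interpolated time-domain method itself is not detailed in the abstract; as a much-reduced, frequency-domain stand-in, the sketch below solves for equivalent source strengths on two hypothetical sources from a mixed pressure measurement and then forward-propagates one source's contribution alone. The geometry, frequency, and regularization are assumptions for illustration.

```python
import numpy as np

def greens_free_field(r_field, r_src, k):
    """Free-field Green's function exp(-jkr)/(4*pi*r) between point sets."""
    d = np.linalg.norm(r_field[:, None, :] - r_src[None, :, :], axis=-1)
    return np.exp(-1j * k * d) / (4.0 * np.pi * d)

rng = np.random.default_rng(2)
k = 2.0 * np.pi * 500.0 / 343.0                 # wavenumber at 500 Hz

# Hypothetical geometry: equivalent sources just behind two "pistons", mics above
src_A = rng.uniform(-0.1, 0.1, (20, 3)) + np.array([-0.5, 0.0, -0.05])
src_B = rng.uniform(-0.1, 0.1, (20, 3)) + np.array([+0.5, 0.0, -0.05])
mics = np.c_[rng.uniform(-1.0, 1.0, (64, 2)), np.full(64, 0.3)]

G_A = greens_free_field(mics, src_A, k)
G_B = greens_free_field(mics, src_B, k)
G = np.hstack([G_A, G_B])

q_true = rng.standard_normal(40) + 1j * rng.standard_normal(40)
p_mix = G @ q_true                              # mixed field from both sources

# Regularised least squares for all equivalent source strengths at once
lam = 1e-3 * np.trace(G.conj().T @ G).real / G.shape[1]
q_est = np.linalg.solve(G.conj().T @ G + lam * np.eye(G.shape[1]), G.conj().T @ p_mix)

# Field attributable to source A alone (its 20 equivalent sources)
p_A_est, p_A_true = G_A @ q_est[:20], G_A @ q_true[:20]
print("relative separation error:",
      np.linalg.norm(p_A_est - p_A_true) / np.linalg.norm(p_A_true))
```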

  2. Intensity-invariant coding in the auditory system.

    PubMed

    Barbour, Dennis L

    2011-11-01

    The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Numerical Models for Sound Propagation in Long Spaces

    NASA Astrophysics Data System (ADS)

    Lai, Chenly Yuen Cheung

Both reverberation time and steady-state sound field are the key elements for assessing the acoustic condition in an enclosed space. They affect noise propagation, speech intelligibility, clarity index, and definition. Since the sound field in a long space is non-diffuse, classical room acoustics theory does not apply in this situation. The ray tracing technique and the image source method are two common models currently used to estimate both the reverberation time and the steady-state sound field in long enclosures. Although both models can give an accurate estimate of reverberation times and steady-state sound fields, directly or indirectly, they often involve time-consuming calculations. In order to simplify the acoustic consideration, a theoretical formulation has been developed for predicting both steady-state sound fields and reverberation times in street canyons. The prediction model is further developed to predict the steady-state sound field in a long enclosure. Apart from the straight long enclosure, there are other variations such as a cross junction, a long enclosure with a T-intersection, and a U-turn long enclosure. In the present study, theoretical and experimental investigations were conducted to develop formulae for predicting reverberation times and steady-state sound fields in a junction of a street canyon and in a long enclosure with a T-intersection. The theoretical models are validated by comparing the numerical predictions with published experimental results. The theoretical results are also compared with precise indoor measurements and large-scale outdoor experimental results. Most previous acoustical studies of long enclosures have focused on monopole sound sources. Besides non-directional sources, however, many noise sources in long enclosures are dipole-like, such as train noise and fan noise. In order to study the characteristics of directional noise sources, a review of available dipole sources was conducted, and a dipole source was constructed and subsequently used for experimental studies. In addition, a theoretical model was developed for predicting dipole sound fields. The theoretical model can be used to study the effect of a dipole source on speech intelligibility in long enclosures.
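
    As a concrete reminder of one of the two reference techniques named above, the sketch below is a toy Allen-Berkley-style image source model for a rectangular enclosure with a single frequency-independent wall reflection coefficient. A street canyon or long enclosure with open boundaries would restrict the images to the reflecting surfaces; all parameters here are illustrative, not the thesis's configurations.

```python
import numpy as np

def image_source_ir(room, src, rcv, order=2, fs=8000, c=343.0, beta=0.9):
    """Crude image-source impulse response for a rectangular room.
    room: (Lx, Ly, Lz); src, rcv: 3-D positions; beta: wall reflection coeff."""
    L = np.asarray(room, float)
    s, r = np.asarray(src, float), np.asarray(rcv, float)
    n = int(0.2 * fs)                       # 200 ms impulse response
    h = np.zeros(n)
    idx = range(-order, order + 1)
    for nx in idx:
        for ny in idx:
            for nz in idx:
                for px in (0, 1):
                    for py in (0, 1):
                        for pz in (0, 1):
                            p = np.array([px, py, pz])
                            img = (1 - 2 * p) * s + 2 * np.array([nx, ny, nz]) * L
                            d = np.linalg.norm(img - r)
                            kd = int(round(d / c * fs))
                            if kd < n:
                                refl = beta ** (abs(nx - px) + abs(nx) +
                                                abs(ny - py) + abs(ny) +
                                                abs(nz - pz) + abs(nz))
                                h[kd] += refl / (4.0 * np.pi * d)
    return h

h = image_source_ir((8.0, 3.0, 3.0), (1.0, 1.5, 1.2), (6.0, 1.5, 1.2))
print("direct-path sample index:", int(np.argmax(h > 0)), "IR energy:", float(np.sum(h ** 2)))
```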

  4. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of an array of directional shotgun microphones. These amplitude differences are used to locate multiple performers and to reproduce their voices, recorded at close distance with lavalier microphones, with the correct spatial placement using a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.
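
    The paper's actual mapping from channel amplitudes to position is not given in the abstract; the sketch below shows only the simplest version of the idea, a power-weighted circular mean of the shotgun microphones' aim directions computed from per-channel RMS. The aim angles, frame, and gains are hypothetical.

```python
import numpy as np

def estimate_bearing(frames, mic_angles_deg):
    """Estimate a source bearing from per-channel RMS of a directional
    (shotgun) microphone array, as a power-weighted mean of mic aim angles."""
    rms = np.sqrt(np.mean(np.asarray(frames, float) ** 2, axis=1))
    ang = np.radians(mic_angles_deg)
    w = rms ** 2  # weight each aim direction by received power, average on the circle
    return np.degrees(np.arctan2(np.sum(w * np.sin(ang)), np.sum(w * np.cos(ang))))

# Hypothetical frame: four shotgun mics aimed at -45, -15, +15, +45 degrees
rng = np.random.default_rng(3)
gains = np.array([0.2, 0.9, 1.0, 0.3])          # talker roughly straight ahead
frames = gains[:, None] * rng.standard_normal((4, 1024))
print(f"estimated bearing: {estimate_bearing(frames, [-45, -15, 15, 45]):.1f} deg")
```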

  5. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668

  6. Quantitative measurement of pass-by noise radiated by vehicles running at high speeds

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin

    2011-03-01

Accurately locating and quantifying the pass-by noise sources radiated by running vehicles has long been a challenge. A system based on a microphone array is developed in the current work to address it. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain. The effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. The method is also validated by simulation experiments and measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of a vehicle running at different speeds are successfully identified by this method.

  7. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.

  8. Evolutionary trends in directional hearing

    PubMed Central

    Carr, Catherine E.; Christensen-Dalsgaard, Jakob

    2016-01-01

Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850

  9. Noise measurements for various configurations of a model of a mixer nozzle externally blown flap system

    NASA Technical Reports Server (NTRS)

    Goodykoontz, J. H.; Wagner, J. M.; Sargent, N. B.

    1973-01-01

Noise data were taken for variations to a large scale model of an externally blown flap lift augmentation system. The variations included two different mixer nozzles (7 and 8 lobes), two different wing models (2 and 3 flaps), and different lateral distances between the wing chord line and the nozzle centerline. When the seven-lobe nozzle was used with the trailing flap in the 60 deg position, increasing the wing-to-nozzle distance had no effect on the sound level. When the eight-lobe nozzle was used, there was a decrease in sound level. With the 20 deg flap setting, the noise level decreased when the distance was increased using either nozzle.

  10. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.

    PubMed

    Kidd, Gerald

    2017-10-17

    Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.

  11. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    PubMed Central

    2017-01-01

    Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601621 PMID:29049603

  12. Relation of sound intensity and accuracy of localization.

    PubMed

    Farrimond, T

    1989-08-01

    Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

  13. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
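
    To make the sparsity trade-off concrete, the sketch below fits both penalties to a small, real-valued stand-in reproduction problem and reports the number of active loudspeakers and the reproduction error. The geometry, regularization weights, and the use of scikit-learn's solvers are assumptions, not the paper's formulation; a complex-valued transfer matrix would be handled by stacking real and imaginary parts.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(4)

# Hypothetical setup: 64 control microphones, 48 candidate loudspeakers
G = rng.standard_normal((64, 48))
q_true = np.zeros(48)
q_true[[5, 6, 7, 30]] = [1.0, 0.8, 0.6, -0.9]     # a few active, clustered sources
p_target = G @ q_true

lasso = Lasso(alpha=0.05, max_iter=50_000).fit(G, p_target)
enet = ElasticNet(alpha=0.05, l1_ratio=0.7, max_iter=50_000).fit(G, p_target)

for name, model in [("lasso", lasso), ("elastic-net", enet)]:
    q = model.coef_
    err = np.linalg.norm(G @ q - p_target) / np.linalg.norm(p_target)
    print(f"{name:11s}: active sources = {int(np.sum(np.abs(q) > 1e-3)):2d}, "
          f"reproduction error = {err:.3f}")
```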

  14. Ventilation noise and its effects on annoyance and performance

    NASA Astrophysics Data System (ADS)

    Landstrom, Ulf

    2004-05-01

In almost every room environment, ventilation acts as a more or less prominent part of the noise exposure. Its contribution to the overall sound environment depends not only on how the ventilation system itself functions, but also on the prominence of other concurrent sound sources such as speech, equipment, machines, and external noises. Hazardous effects due to ventilation noise are most prominent in offices, hospitals, control rooms, classrooms, conference rooms, and other types of silent areas. The effects evoked by ventilation noise have also been found to be related to the type of activity being conducted. Annoyance and performance effects thus seemed to be linked not only to the physical character of the exposure, i.e., noise level, frequency characteristics, and length of exposure, but also to the mental and manual activity, complexity, and monotony of the work. The effects can be described in terms of annoyance, discomfort, and fatigue, with consequences for performance and increased mental load. The silent areas where ventilation noise may be most frequently experienced are often synonymous with the areas and activities most sensitive to the exposure.

  15. The N1-suppression effect for self-initiated sounds is independent of attention

    PubMed Central

    2013-01-01

    Background If we initiate a sound by our own motor behavior, the N1 component of the auditory event-related brain potential (ERP) that the sound elicits is attenuated compared to the N1 elicited by the same sound when it is initiated externally. It has been suggested that this N1 suppression results from an internal predictive mechanism that is in the service of discriminating the sensory consequences of one’s own actions from other sensory input. As the N1-suppression effect is becoming a popular approach to investigate predictive processing in cognitive and social neuroscience, it is important to exclude an alternative interpretation not related to prediction. According to the attentional account, the N1 suppression is due to a difference in the allocation of attention between self- and externally-initiated sounds. To test this hypothesis, we manipulated the allocation of attention to the sounds in different blocks: Attention was directed either to the sounds, to the own motor acts or to visual stimuli. If attention causes the N1-suppression effect, then manipulating attention should affect the effect for self-initiated sounds. Results We found N1 suppression in all conditions. The N1 per se was affected by attention, but there was no interaction between attention and self-initiation effects. This implies that self-initiation N1 effects are not caused by attention. Conclusions The present results support the assumption that the N1-suppression effect for self-initiated sounds indicates the operation of an internal predictive mechanism. Furthermore, while attention had an influence on the N1a, N1b, and N1c components, the N1-suppression effect was confined to the N1b and N1c subcomponents suggesting that the major contribution to the auditory N1-suppression effect is circumscribed to late N1 components. PMID:23281832

  16. Sound quality indicators for urban places in Paris cross-validated by Milan data.

    PubMed

    Ricciardi, Paola; Delaitre, Pauline; Lavandier, Catherine; Torchia, Francesca; Aumond, Pierre

    2015-10-01

A specific smartphone application was developed to collect perceptive and acoustic data in Paris. About 3400 questionnaires were analyzed, regarding the global sound environment characterization, the perceived loudness of some emergent sources and the presence time ratio of sources that do not emerge from the background. Sound pressure level was recorded each second from the mobile phone's microphone during a 10-min period. The aim of this study is to propose indicators of urban sound quality based on linear regressions with perceptive variables. A cross validation of the quality models extracted from Paris data was carried out by conducting the same survey in Milan. The proposed general sound quality model is correlated with the actual perceived sound quality (72%). Another model without visual amenity and familiarity is 58% correlated with perceived sound quality. In order to improve the sound quality indicator, a site classification was performed by Kohonen's Artificial Neural Network algorithm, and seven specific class models were developed. These specific models attribute more importance to source events and are slightly closer to the individual data than the global model. In general, the Parisian models underestimate the sound quality of Milan environments assessed by Italian people.

  17. Development of an ICT-Based Air Column Resonance Learning Media

    NASA Astrophysics Data System (ADS)

    Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut

    2016-08-01

    Commonly, the sound source used in the air column resonance experiment is a tuning fork, which has the disadvantage of suboptimal resonance results because the sound it produces steadily weakens. In this study we generated tones of varying frequency using the Audacity software, which were then stored in a mobile phone as the sound source. One advantage of this sound source is the stability of the resulting sound, which remains equally strong throughout the experiment. The movement of water in a glass tube mounted on the resonance apparatus and the tone emitted by the mobile phone were recorded using a video camera. The recorded resonances were the first, second, and third resonance for each tone frequency. Because the resulting sound persists, it can be used for the first, second, third, and subsequent resonance experiments. This study aimed to (1) explain how to create tones that can substitute for the tuning-fork sound used in air column resonance experiments, (2) illustrate the sound wave that occurred in the first, second, and third resonance in the experiment, and (3) determine the speed of sound in air. This study used an experimental method. It was concluded that: (1) substitute tones for a tuning-fork sound can be made using the Audacity software; (2) the form of the sound waves that occurred in the first, second, and third resonance of the air column can be drawn based on the video recordings of the air column resonance; and (3) based on the experimental results, the speed of sound in air is 346.5 m/s, while based on the chart analysis with Logger Pro software, the speed of sound in air is 343.9 ± 0.3171 m/s.
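
    For orientation, the sketch below shows the arithmetic behind step (3): in a closed resonance tube, consecutive resonance lengths are half a wavelength apart, so the speed of sound follows from v = f * lambda = 2 f (L2 - L1). The frequency and lengths are illustrative values, not the study's measurements.

      # Speed of sound from air-column resonance in a closed tube (illustrative numbers)
      f = 512.0                   # tone frequency in Hz
      L1, L2 = 0.165, 0.503       # first and second resonance lengths in metres

      lam = 2.0 * (L2 - L1)       # consecutive resonances are half a wavelength apart
      v = f * lam
      print(f"wavelength = {lam:.3f} m, speed of sound = {v:.1f} m/s")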

  18. Shapes and sounds as self-objects in learning geography.

    PubMed

    Baum, E A

    1978-01-01

    The pleasure which some children find in maps and map reading is manifold in origin. Children cathect patterns of configuration and color and derive joy from the visual mastery of these. This gratification is enhanced by the child's knowledge that the map represents something bigger than and external to itself. Likewise, some children take pleasure in the pronunciation of names themselves. The phonetic transcription of multisyllabic names is often a pleasurable challenge. The vocalized name has its origin in the self, becomes barely external to self, and is self-monitored. Thus, in children both the configurations and the vocalizations associated with map reading have the properties of "self-objects" (Kohut, 1971). From the author's observation the delight which some children take in sounding out geographic names on a map may, in some instances, indicate pre-existing gratifying sound associations. Childish amusement in punning on cognomens may be an even greater stimulant for learning than visual configurations or artificial cognitive devices.

  19. Numerical simulations of a sounding rocket in ionospheric plasma: Effects of magnetic field on the wake formation and rocket potential

    NASA Astrophysics Data System (ADS)

    Darian, D.; Marholm, S.; Paulsson, J. J. P.; Miyake, Y.; Usui, H.; Mortensen, M.; Miloch, W. J.

    2017-09-01

    The charging of a sounding rocket in subsonic and supersonic plasma flows with an external magnetic field is studied with numerical particle-in-cell (PIC) simulations. A weakly magnetized plasma regime is considered that corresponds to the ionospheric F2 layer, with electrons being strongly magnetized, while the magnetization of ions is weak. It is demonstrated that the magnetic field orientation influences the floating potential of the rocket and that with increasing angle between the rocket axis and the magnetic field direction the rocket potential becomes less negative. The external magnetic field gives rise to an asymmetric wake downstream of the rocket. The simulated wake in the potential and density may extend as far as 30 electron Debye lengths; thus, it is important to account for these plasma perturbations when analyzing in situ measurements. A qualitative agreement between simulation results and actual measurements with a sounding rocket is also shown.

  20. [HYGIENIC ASSESSMENT OF NOISE FACTOR OF THE LARGE CITY].

    PubMed

    Chubirko, M L; Stepkin, Yu I; Seredenko, O V

    2015-01-01

    The article is devoted to the problem of the negative impact of traffic noise on the health and living conditions of the population of a large city. Every day more and more modes of transport appear on the streets, and to date almost the entire transportation network has reached its traffic capacity. The increase in traffic noise certainly has an impact on the human body. The most common and intense noise is caused by urban automobile and electric transport. This is explained by the heavy traffic (2-3 thousand vehicles/h) on almost all main roads in the historically developed parts of the city. In addition, sources of external noise in the city can include railways running through residential zones, access roads, industrial enterprises located in close proximity to residential areas and on the borders of residential zones, and military and civil aviation. For the evaluation of the different noise sources, sound levels were measured with sound level meters. The most common parameter for the assessment of the noise generated by motor vehicles in residential areas, and the one used to characterize traffic flows, is the A-weighted equivalent sound level (LAeq, dBA). This parameter is used in the majority of normative-technical documentation as the hygienic noise standard. For the assessment of noise exposure, 122 control points were selected at intersections of roads with different traffic volumes, where instrumental measurements of the equivalent sound level were made and compared with permissible levels.
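
    For reference, the A-weighted equivalent sound level used as the hygienic standard is the energy average of the short-term levels. A minimal sketch, with illustrative 1-s samples rather than the surveyed data:

      import numpy as np

      # Equivalent continuous sound level from N equal-duration A-weighted samples:
      # LAeq = 10*log10( (1/N) * sum(10**(Li/10)) )
      levels_dBA = np.array([68.0, 72.5, 75.0, 70.2, 66.8, 74.1])  # illustrative 1-s samples

      laeq = 10.0 * np.log10(np.mean(10.0 ** (levels_dBA / 10.0)))
      print(f"LAeq = {laeq:.1f} dBA")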

  1. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796

  2. How the owl tracks its prey – II

    PubMed Central

    Takahashi, Terry T.

    2010-01-01

    Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819

  3. Design of laser monitoring and sound localization system

    NASA Astrophysics Data System (ADS)

    Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang

    2013-08-01

    In this paper, a novel design of a laser monitoring and sound localization system is proposed. It utilizes a laser to monitor indoor conversation and locate its position. At present, most laser monitors in China, whether used in the laboratory or in instruments, employ a photodiode or phototransistor as the detector. At the laser receivers of those devices, the light beams are adjusted so that only part of the photodiode or phototransistor window receives the beam. The reflection deviates from its original path because of the vibration of the monitored window, which shifts the imaging spot on the photodiode or phototransistor. However, such a method is limited not only because it admits considerable stray light into the receiver but also because only a single photocurrent output can be obtained. Therefore a new method based on a quadrant detector is proposed. It utilizes the relation of the optical integrals among the quadrants to locate the position of the imaging spot. This method can eliminate background disturbance and specifically acquire two-dimensional spot-vibration data. The principle of the whole system can be described as follows. Collimated laser beams are reflected from a window vibrating in response to the sound source, so the reflected beams are modulated by the vibration source. These optical signals are collected by quadrant detectors and then processed by photoelectric converters and the corresponding circuits. Speech signals are eventually reconstructed. In addition, sound source localization is implemented by detecting three different reflected light beams simultaneously. Indoor mathematical models based on the principle of Time Difference Of Arrival (TDOA) are established to calculate the two-dimensional coordinates of the sound source. Experiments showed that this system is able to monitor an indoor sound source beyond 15 meters with high-quality speech reconstruction and to locate the sound source position accurately.
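
    As a hedged sketch of the TDOA principle mentioned above (not the authors' implementation), the snippet below estimates the delay between two reconstructed channels by cross-correlation and converts it to a bearing for an assumed sensor spacing; the signals and geometry are synthetic.

      import numpy as np

      def tdoa_bearing(x1, x2, fs, spacing, c=343.0):
          """Estimate the delay of x2 relative to x1 by cross-correlation and
          convert it to a bearing angle for a sensor pair 'spacing' metres apart."""
          corr = np.correlate(x1, x2, mode="full")
          lag = (len(x2) - 1) - np.argmax(corr)       # samples by which x2 lags x1
          tau = lag / fs                              # time difference of arrival (s)
          sin_theta = np.clip(c * tau / spacing, -1.0, 1.0)
          return np.degrees(np.arcsin(sin_theta))

      # Synthetic test: a noise burst arriving 0.5 ms later at the second sensor
      fs = 48000
      sig = np.random.default_rng(0).standard_normal(4800)
      delay = int(0.0005 * fs)
      x1 = np.concatenate([sig, np.zeros(delay)])
      x2 = np.concatenate([np.zeros(delay), sig])
      print(f"bearing estimate: {tdoa_bearing(x1, x2, fs, spacing=0.5):.1f} deg")  # ~20 deg here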

  4. Assessment of Hydroacoustic Propagation Using Autonomous Hydrophones in the Scotia Sea

    DTIC Science & Technology

    2010-09-01

    Award No. DE-AI52-08NA28654 Proposal No. BAA08-36 ABSTRACT The remote area of the Atlantic Ocean near the Antarctic Peninsula and the South...hydroacoustic blind spot. To investigate the sound propagation and interferences affected by these landmasses in the vicinity of the Antarctic polar...from large icebergs (near-surface sources) were utilized as natural sound sources. Surface sound sources, e.g., ice-related events, tend to suffer less

  5. Active control of noise on the source side of a partition to increase its sound isolation

    NASA Astrophysics Data System (ADS)

    Tarabini, Marco; Roure, Alain; Pinhede, Cedric

    2009-03-01

    This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performances and stability were tested with different array configurations, loudspeaker directivities and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with the factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.
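
    For orientation, a minimal single-channel filtered-x LMS (FxLMS) loop is sketched below; the paper's system is a decentralized multichannel version, and the secondary-path model and signals here are illustrative assumptions rather than the experimental setup.

      import numpy as np

      def fxlms(x, d, s_hat, n_taps=64, mu=0.005):
          """Single-channel FxLMS sketch: adapt the control filter w so that the
          secondary-source signal, after the (modelled) secondary path s_hat,
          cancels the disturbance d at the error microphone."""
          w = np.zeros(n_taps)                        # control filter
          xf = np.convolve(x, s_hat)[:len(x)]         # reference filtered by secondary-path model
          x_hist = np.zeros(n_taps)                   # recent reference samples
          xf_hist = np.zeros(n_taps)                  # recent filtered-reference samples
          y_hist = np.zeros(len(s_hat))               # recent control-output samples
          e = np.zeros(len(x))
          for n in range(len(x)):
              x_hist = np.concatenate(([x[n]], x_hist[:-1]))
              xf_hist = np.concatenate(([xf[n]], xf_hist[:-1]))
              y = w @ x_hist                          # secondary-source sample
              y_hist = np.concatenate(([y], y_hist[:-1]))
              e[n] = d[n] - s_hat @ y_hist            # residual at the error microphone
              w += mu * e[n] * xf_hist                # FxLMS weight update
          return e

      # Illustrative tonal disturbance: the residual should decay as w converges.
      fs, f0 = 2000, 120
      t = np.arange(2 * fs) / fs
      x = np.sin(2 * np.pi * f0 * t)                  # reference signal
      s_hat = np.array([0.0, 0.6, 0.3])               # assumed secondary-path impulse response
      d = 0.8 * np.sin(2 * np.pi * f0 * t - 0.7)      # disturbance at the error microphone
      e = fxlms(x, d, s_hat)
      print("mean |e|, first vs last 0.2 s:",
            round(float(np.mean(np.abs(e[:400]))), 3),
            round(float(np.mean(np.abs(e[-400:]))), 3))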

  6. The low-frequency sound power measuring technique for an underwater source in a non-anechoic tank

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Ming; Tang, Rui; Li, Qi; Shang, Da-Jing

    2018-03-01

    In order to determine the radiated sound power of an underwater source below the Schroeder cut-off frequency in a non-anechoic tank, a low-frequency extension measuring technique is proposed. This technique is based on a unique relationship between the transmission characteristics of the enclosed field and those of the free field, which can be obtained as a correction term based on previous measurements of a known simple source. The radiated sound power of an unknown underwater source in the free field can thereby be obtained accurately from measurements in a non-anechoic tank. To verify the validity of the proposed technique, a mathematical model of the enclosed field is established using normal-mode theory, and the relationship between the transmission characteristics of the enclosed and free fields is obtained. The radiated sound power of an underwater transducer source is tested in a glass tank using the proposed low-frequency extension measuring technique. Compared with the free field, the radiated sound power level of the narrowband spectrum deviation is found to be less than 3 dB, and the 1/3 octave spectrum deviation is found to be less than 1 dB. The proposed testing technique can be used not only to extend the low-frequency applications of non-anechoic tanks, but also for measurement of radiated sound power from complicated sources in non-anechoic tanks.
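
    The correction-term idea can be summarized in a few lines: the tank-minus-free-field difference measured with a known reference source is subtracted from the tank measurement of the unknown source. The band levels below are invented for illustration, not the glass-tank data.

      import numpy as np

      # Correction term from a known simple (reference) source:
      #   dL(f) = L_tank_ref(f) - L_free_ref(f)
      # Free-field sound power of an unknown source measured in the same tank:
      #   L_free_unknown(f) = L_tank_unknown(f) - dL(f)
      bands_hz       = np.array([100, 125, 160, 200, 250])           # illustrative 1/3-octave bands
      L_tank_ref     = np.array([112.0, 109.5, 114.2, 110.8, 111.3])
      L_free_ref     = np.array([104.0, 103.2, 105.1, 104.6, 104.9])
      L_tank_unknown = np.array([118.5, 115.0, 119.7, 116.2, 117.0])

      correction = L_tank_ref - L_free_ref
      L_free_unknown = L_tank_unknown - correction
      for f, L in zip(bands_hz, L_free_unknown):
          print(f"{f:4d} Hz : {L:.1f} dB")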

  7. Modeling and analysis of secondary sources coupling for active sound field reduction in confined spaces

    NASA Astrophysics Data System (ADS)

    Montazeri, Allahyar; Taylor, C. James

    2017-10-01

    This article addresses the coupling of acoustic secondary sources in a confined space in a sound field reduction framework. By considering the coupling of sources in a rectangular enclosure, the set of coupled equations governing its acoustical behavior are solved. The model obtained in this way is used to analyze the behavior of multi-input multi-output (MIMO) active sound field control (ASC) systems, where the coupling of sources cannot be neglected. In particular, the article develops the analytical results to analyze the effect of coupling of an array of secondary sources on the sound pressure levels inside an enclosure, when an array of microphones is used to capture the acoustic characteristics of the enclosure. The results are supported by extensive numerical simulations showing how coupling of loudspeakers through acoustic modes of the enclosure will change the strength and hence the driving voltage signal applied to the secondary loudspeakers. The practical significance of this model is to provide a better insight on the performance of the sound reproduction/reduction systems in confined spaces when an array of loudspeakers and microphones are placed in a fraction of wavelength of the excitation signal to reduce/reproduce the sound field. This is of particular importance because the interaction of different sources affects their radiation impedance depending on the electromechanical properties of the loudspeakers.

  8. Mechanisms for Adjusting Interaural Time Differences to Achieve Binaural Coincidence Detection

    PubMed Central

    Seidl, Armin H.; Rubel, Edwin W; Harris, David M.

    2010-01-01

    Understanding binaural perception requires detailed analyses of the neural circuitry responsible for the computation of interaural time differences (ITDs). In the avian brainstem, this circuit consists of internal axonal delay lines innervating an array of coincidence detector neurons that encode external ITDs. Nucleus magnocellularis (NM) neurons project to the dorsal dendritic field of the ipsilateral nucleus laminaris (NL) and to the ventral field of the contralateral NL. Contralateral-projecting axons form a delay line system along a band of NL neurons. Binaural acoustic signals in the form of phase-locked action potentials from NM cells arrive at NL and establish a topographic map of sound source location along the azimuth. These pathways are assumed to represent a circuit similar to the Jeffress model of sound localization, establishing a place code along an isofrequency contour of NL. Three-dimensional measurements of axon lengths reveal major discrepancies with the current model; the temporal offset based on conduction length alone makes encoding of physiological ITDs impossible. However, axon diameter and distances between Nodes of Ranvier also influence signal propagation times along an axon. Our measurements of these parameters reveal that diameter and internode distance can compensate for the temporal offset inferred from axon lengths alone. Together with other recent studies these unexpected results should inspire new thinking on the cellular biology, evolution and plasticity of the circuitry underlying low frequency sound localization in both birds and mammals. PMID:20053889

  9. The self in action effects: selective attenuation of self-generated sounds.

    PubMed

    Weiss, Carmen; Herwig, Arvid; Schütz-Bosbach, Simone

    2011-11-01

    The immediate experience of self-agency, that is, the experience of generating and controlling our actions, is thought to be a key aspect of selfhood. It has been suggested that this experience is intimately linked to internal motor signals associated with the ongoing actions. These signals should lead to an attenuation of the sensory consequences of one's own actions and thereby allow classifying them as self-generated. The discovery of shared representations of actions between self and other, however, challenges this idea and suggests similar attenuation of one's own and other's sensory action effects. Here, we tested these assumptions by comparing sensory attenuation of self-generated and observed sensory effects. More specifically, we compared the loudness perception of sounds that were either self-generated, generated by another person or a computer. In two experiments, we found a reduced perception of loudness intensity specifically related to self-generation. Furthermore, the perception of sounds generated by another person and a computer did not differ from each other. These findings indicate that one's own agentive influence upon the outside world has a special perceptual quality which distinguishes it from any sort of external influence, including human and non-human sources. This suggests that a real sense of self-agency is not a socially shared but rather a unique and private experience. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Consistent modelling of wind turbine noise propagation from source to receiver.

    PubMed

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  11. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE PAGES

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; ...

    2017-11-28

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  12. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  13. Binaural room simulation

    NASA Technical Reports Server (NTRS)

    Lehnert, H.; Blauert, Jens; Pompetzki, W.

    1991-01-01

    In every-day listening the auditory event perceived by a listener is determined not only by the sound signal that a sound source emits but also by a variety of environmental parameters. These parameters are the position, orientation and directional characteristics of the sound source, the listener's position and orientation, the geometrical and acoustical properties of surfaces which affect the sound field and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving himself physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of the virtual sound sources. Each of the virtual sources emits a certain signal which is correlated but not necessarily identical with the signal emitted by the direct sound source. If source and receiver are non-moving, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
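
    A minimal auralization sketch along the lines of the last sentence, assuming a dry mono signal and a two-channel binaural impulse response (here synthesized as decaying noise rather than a measured BRIR):

      import numpy as np
      from scipy.signal import fftconvolve

      def auralize(dry, brir_left, brir_right):
          """Convolve a dry mono signal with a binaural impulse response and
          return a normalized two-channel (left, right) signal for headphones."""
          left = fftconvolve(dry, brir_left)
          right = fftconvolve(dry, brir_right)
          peak = max(np.max(np.abs(left)), np.max(np.abs(right)))
          return np.stack([left, right], axis=1) / peak

      # Illustrative use with a synthetic BRIR (exponentially decaying noise tails)
      fs = 44100
      dry = np.random.randn(fs)                           # stand-in for an anechoic recording
      decay = np.exp(-np.arange(int(0.3 * fs)) / (0.05 * fs))
      brir_l = decay * np.random.randn(decay.size)
      brir_r = decay * np.random.randn(decay.size)
      binaural = auralize(dry, brir_l, brir_r)
      print(binaural.shape)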

  14. Broad band sound from wind turbine generators

    NASA Technical Reports Server (NTRS)

    Hubbard, H. H.; Shepherd, K. P.; Grosveld, F. W.

    1981-01-01

    Brief descriptions are given of the various types of large wind turbines and their sound characteristics. Candidate sources of broadband sound are identified and are rank ordered for a large upwind configuration wind turbine generator for which data are available. The rotor is noted to be the main source of broadband sound which arises from inflow turbulence and from the interactions of the turbulent boundary layer on the blade with its trailing edge. Sound is radiated about equally in all directions but the refraction effects of the wind produce an elongated contour pattern in the downwind direction.

  15. Multidisciplinary conceptual design optimization of aircraft using a sound-matching-based objective function

    NASA Astrophysics Data System (ADS)

    Diez, Matteo; Iemma, Umberto

    2012-05-01

    The article presents a novel approach to include community noise considerations based on sound quality in the Multidisciplinary Conceptual Design Optimization (MCDO) of civil transportation aircraft. The novelty stems from the use of an unconventional objective function, defined as a measure of the difference between the noise emission of the aircraft under analysis and a reference 'weakly annoying' noise, the target sound. The minimization of such a merit factor yields an aircraft concept with a noise signature as close as possible to the given target. The reference sound is one of the outcomes of the European Research Project SEFA (Sound Engineering For Aircraft, VI Framework Programme, 2004-2007) and is used here as an external input. The aim of the present work is to address the definition and the inclusion of the sound-matching-based objective function in the MCDO of aircraft.
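
    As a hedged sketch of such a merit factor (the article's exact metric is not reproduced here), one can take a weighted squared distance between the predicted noise spectrum and the target spectrum; the band levels and weights below are illustrative.

      import numpy as np

      def sound_matching_objective(L_pred, L_target, weights=None):
          """Weighted squared distance between a predicted noise spectrum and a
          'weakly annoying' target spectrum (both given as band levels in dB)."""
          L_pred, L_target = np.asarray(L_pred), np.asarray(L_target)
          w = np.ones_like(L_pred) if weights is None else np.asarray(weights)
          return float(np.sum(w * (L_pred - L_target) ** 2))

      # Illustrative band levels (dB) for a candidate design and the target sound
      L_candidate = [78.0, 81.5, 79.2, 74.0, 70.5]
      L_target    = [75.0, 77.0, 76.5, 73.0, 70.0]
      print(sound_matching_objective(L_candidate, L_target))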

  16. Essays on Environmental Economics and Policy

    NASA Astrophysics Data System (ADS)

    Walker, W. Reed

    A central feature of modern government is its role in designing welfare-improving policies to address and correct market failures stemming from externalities and public goods. The rationale for most modern environmental regulations stems from the failure of markets to efficiently allocate goods and services. Yet, as with any policy, distributional effects are important: there exist clear winners and losers. Despite the clear theoretical justification for environmental and energy policy, empirical work credibly identifying both the source and consequences of these externalities as well as the distributional effects of existing policies remains in its infancy. My dissertation focuses on the development of empirical methods to investigate the role of environmental and energy policy in addressing market failures as well as exploring the distributional implications of these policies. These questions are important not only as a justification for government intervention into markets but also for understanding how distributional consequences may shape the design and implementation of these policies. My dissertation investigates these questions in the context of programs and policies that are important in their own right. Chapters 1 and 2 of my dissertation explore the economic costs and distributional implications associated with the largest environmental regulatory program in the United States, the Clean Air Act. Chapters 3 and 4 examine the social costs of air pollution in the context of transportation externalities, showing how effective transportation policy has additional co-benefits in the form of environmental policy. My dissertation remains unified in both its subject matter and methodological approach -- using unique sources of data and sound research designs to understand important issues in environmental policy.

  17. Effects of sound source directivity on auralizations

    NASA Astrophysics Data System (ADS)

    Sheets, Nathan W.; Wang, Lily M.

    2002-05-01

    Auralization, the process of rendering audible the sound field in a simulated space, is a useful tool in the design of acoustically sensitive spaces. The auralization depends on the calculation of an impulse response between a source and a receiver which have certain directional behavior. Many auralizations created to date have used omnidirectional sources; the effect of source directivity on auralizations is a relatively unexplored area. To examine if and how the directivity of a sound source affects the acoustical results obtained from a room, we used directivity data for three sources in a room acoustic modeling program called Odeon. The three sources are: violin, piano, and human voice. The results from using directional data are compared to those obtained using omnidirectional source behavior, both through objective measure calculations and subjective listening tests.

  18. Development of a directivity-controlled piezoelectric transducer for sound reproduction

    NASA Astrophysics Data System (ADS)

    Bédard, Magella; Berry, Alain

    2008-04-01

    Present sound reproduction systems do not attempt to simulate the spatial radiation of musical instruments, or sound sources in general, even though the spatial directivity has a strong impact on the psychoacoustic experience. A transducer consisting of 4 piezoelectric elemental sources made from curved PVDF films is used to generate a target directivity pattern in the horizontal plane, in the frequency range of 5-20 kHz. The vibratory and acoustical response of an elemental source is addressed, both theoretically and experimentally. Two approaches to synthesize the input signals to apply to each elemental source are developed in order to create a prescribed, frequency-dependent acoustic directivity. The circumferential Fourier decomposition of the target directivity provides a compromise between the magnitude and the phase reconstruction, whereas the minimization of a quadratic error criterion provides a best magnitude reconstruction. This transducer can improve sound reproduction by introducing the spatial radiation aspect of the original source at high frequency.
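
    The circumferential Fourier decomposition mentioned above can be sketched as follows: sample the target directivity over azimuth, take its discrete Fourier coefficients, and reconstruct with a truncated set of circumferential orders. The pattern below is illustrative, not the transducer's measured directivity.

      import numpy as np

      # Target directivity sampled at equally spaced azimuth angles
      n_angles = 72
      theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
      target = 1.0 + 0.6 * np.cos(theta) + 0.3 * np.cos(2 * theta + 0.4)  # illustrative pattern

      # Circumferential Fourier coefficients c_m = (1/N) * sum target(theta_n) * exp(-i m theta_n)
      coeffs = np.fft.fft(target) / n_angles

      # Truncated reconstruction with orders |m| <= M
      M = 2
      recon = np.zeros_like(theta, dtype=complex)
      for m in range(-M, M + 1):
          recon += coeffs[m] * np.exp(1j * m * theta)   # negative index wraps to bin N+m
      print("max reconstruction error:", round(float(np.max(np.abs(recon.real - target))), 6))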

  19. Callback response of dugongs to conspecific chirp playbacks.

    PubMed

    Ichikawa, Kotaro; Akamatsu, Tomonari; Shinke, Tomio; Adulyanukosol, Kanjana; Arai, Nobuaki

    2011-06-01

    Dugongs (Dugong dugon) produce bird-like calls such as chirps and trills. The vocal responses of dugongs to playbacks of several acoustic stimuli were investigated. Animals were exposed to four different playback stimuli: a recorded chirp from a wild dugong, a synthesized down-sweep sound, a synthesized constant-frequency sound, and silence. Wild dugongs vocalized more frequently after playback of broadcast chirps than after constant-frequency sounds or silence. The down-sweep sound also elicited more vocal responses than did silence. No significant difference was found between the broadcast chirps and the down-sweep sound. The ratio of wild dugong chirps to all calls and the dominant frequencies of the wild dugong calls were significantly higher during playbacks of broadcast chirps, down-sweep sounds, and constant-frequency sounds than during those of silence. The source level and duration of dugong chirps increased significantly as signaling distance increased. No significant correlation was found between signaling distance and the source level of trills. These results show that dugongs vocalize in response to playbacks of frequency-modulated signals and suggest that the source level of dugong chirps may be manipulated to compensate for transmission loss between the source and receiver. This study provides the first behavioral observations revealing the function of dugong chirps. © 2011 Acoustical Society of America

  20. Optical measurement of the weak non-linearity in the eardrum vibration response to auditory stimuli

    NASA Astrophysics Data System (ADS)

    Aerts, Johan

    The mammalian hearing organ consists of the external ear (auricle and ear canal) followed by the middle ear (eardrum and ossicles) and the inner ear (cochlea). Its function is to capture incoming sound waves and convert them into nerve pulses which are processed in the final stage by the brain. The main task of the external and middle ear is to concentrate the incoming sound waves on a smaller surface to reduce the loss that would normally occur in transmission from air to inner ear fluid. In the past it has been shown that this is a linear process, thus without serious distortions, for sound waves at pressures up to 130 dB SPL (~90 Pa). However, at large pressure changes up to several kPa, the middle ear movement clearly shows non-linear behaviour. Thus, it is possible that some small non-linear distortions are also present in the middle ear vibration at lower sound pressures. In this thesis a sensitive measurement set-up is presented to detect this weak non-linear behaviour. Essentially, this set-up consists of a loudspeaker which excites the middle ear, and the resulting vibration is measured with a heterodyne vibrometer. The use of specially designed acoustic excitation signals (odd random phase multisines) enables the separation of the linear and non-linear response. The application of this technique to the middle ear demonstrates that there are already non-linear distortions present in the vibration of the middle ear at a sound pressure of 93 dB SPL. This non-linear component also grows strongly with increasing sound pressure. Knowledge of this non-linear component can contribute to the improvement of modern hearing aids, which operate at higher sound pressures where the non-linearities could distort the signal considerably. It is also important to know the contribution of middle ear non-linearity to otoacoustic emissions. These are non-linearities caused by the active feedback amplifier in the inner ear, and can be detected in the external and middle ear. These signals are used for diagnostic purposes, and therefore it is important to have an estimate of the non-linear middle-ear contribution to these emissions.
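
    A minimal sketch of an odd random-phase multisine of the type mentioned (parameters are illustrative, not the thesis settings): only odd harmonics of a base frequency are excited, each with a random phase, so that even- and odd-order non-linear distortions fall on separate, detectable frequency lines.

      import numpy as np

      def odd_random_phase_multisine(f0, n_harmonics, fs, duration, rng=None):
          """Sum of odd harmonics k*f0 (k = 1, 3, 5, ...) with random phases."""
          rng = np.random.default_rng() if rng is None else rng
          t = np.arange(int(fs * duration)) / fs
          x = np.zeros_like(t)
          for k in range(1, 2 * n_harmonics, 2):          # odd multiples of f0
              phase = rng.uniform(0.0, 2.0 * np.pi)
              x += np.cos(2.0 * np.pi * k * f0 * t + phase)
          return t, x / np.max(np.abs(x))                 # normalized excitation

      t, x = odd_random_phase_multisine(f0=10.0, n_harmonics=20, fs=8000, duration=1.0)
      print(x.shape, float(x.max()), float(x.min()))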

  1. Atmospheric Propagation

    NASA Technical Reports Server (NTRS)

    Embleton, Tony F. W.; Daigle, Gilles A.

    1991-01-01

    Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
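
    As a back-of-envelope illustration of the first two mechanisms, the sketch below combines spherical spreading (20 log10 of the distance ratio, frequency independent) with a linear molecular absorption term; the absorption coefficients are illustrative placeholders, not standard-atmosphere values.

      import numpy as np

      def received_level(L_ref, r_ref, r, alpha_db_per_m):
          """Level at range r given a reference level at r_ref:
          spherical spreading plus linear molecular absorption."""
          spreading = 20.0 * np.log10(r / r_ref)
          absorption = alpha_db_per_m * (r - r_ref)
          return L_ref - spreading - absorption

      # Illustrative: lower vs higher frequency absorption (dB/m) at a fixed range
      for f, alpha in [(1000, 0.005), (4000, 0.03)]:
          L = received_level(L_ref=90.0, r_ref=1.0, r=200.0, alpha_db_per_m=alpha)
          print(f, "Hz:", round(float(L), 1), "dB")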

  2. How effectively do horizontal and vertical response strategies of long-finned pilot whales reduce sound exposure from naval sonar?

    PubMed

    Wensveen, Paul J; von Benda-Beckmann, Alexander M; Ainslie, Michael A; Lam, Frans-Peter A; Kvadsheim, Petter H; Tyack, Peter L; Miller, Patrick J O

    2015-05-01

    The behaviour of a marine mammal near a noise source can modulate the sound exposure it receives. We demonstrate that two long-finned pilot whales both surfaced in synchrony with consecutive arrivals of multiple sonar pulses. We then assess the effect of surfacing and other behavioural response strategies on the received cumulative sound exposure levels and maximum sound pressure levels (SPLs) by modelling realistic spatiotemporal interactions of a pilot whale with an approaching source. Under the propagation conditions of our model, some response strategies observed in the wild were effective in reducing received levels (e.g. movement perpendicular to the source's line of approach), but others were not (e.g. switching from deep to shallow diving; synchronous surfacing after maximum SPLs). Our study exemplifies how simulations of source-whale interactions guided by detailed observational data can improve our understanding of the motivations behind behavioural responses observed in the wild (e.g., reducing sound exposure, prey movement). Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Implementation of the Vehicle Black Box Using External Sensor and Networks

    NASA Astrophysics Data System (ADS)

    Back, Sung-Hyun; Kim, Jang-Ju; Kim, Mi-Jin; Kim, Hwa-Sun; Park, You-Sin; Jang, Jong-Wook

    With the increasing use of black boxes in vehicles, they are being widely studied and developed. Existing black boxes store only video and sound and have limitations in accurately identifying accident contexts. In addition, data are lost if the black box in the vehicle is damaged. In this study, a smart black box was built that stores additional data, including tire pressure, in-vehicle data (e.g., head lamp operation), current location, travel path and speed, together with video and sound, using OBD-II and GPS to improve the efficiency and accuracy of accident analysis. An external storage device was used for data backup via wireless LAN to allow checking of the data even when the black box is damaged.

  4. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    PubMed

    Hausmann, Laura; von Campenhausen, Mark; Endler, Frank; Singheiser, Martin; Wagner, Hermann

    2009-11-05

    When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. These data provide new insights into the function of external hearing structures and open up the possibility of applying the results to autonomous agents, to the creation of virtual auditory environments for humans, or to hearing aids.

  5. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  6. Simulated seal scarer sounds scare porpoises, but not seals: species-specific responses to 12 kHz deterrence sounds

    PubMed Central

    Hermannsen, Line; Beedholm, Kristian

    2017-01-01

    Acoustic harassment devices (AHD) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating that of an AHD (Lofitech), but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted to porpoises and 13 exposures to seals. Porpoises were found to exhibit avoidance reactions out to ranges of 525 m from the sound source. Contrary to this, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for application of AHDs in multi-species habitats, as sound levels required to deter less sensitive species (seals) can lead to excessive and unwanted large deterrence ranges on more sensitive species (porpoises). PMID:28791155

  7. Characterizing large river sounds: Providing context for understanding the environmental effects of noise produced by hydrokinetic turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevelhimer, Mark S.; Deng, Z. Daniel; Scherelis, Constantin

    2016-01-01

    Underwater noise associated with the installation and operation of hydrokinetic turbines in rivers and tidal zones presents a potential environmental concern for fish and marine mammals. Comparing the spectral quality of sounds emitted by hydrokinetic turbines to natural and other anthropogenic sound sources is an initial step at understanding potential environmental impacts. Underwater recordings were obtained from passing vessels of different sizes and other underwater sound sources in both static and flowing waters. Static water measurements were taken in a lake with minimal background noise. Flowing water measurements were taken at a previously proposed deployment site for hydrokinetic turbines on the Mississippi River, where the sound of flowing water is included in background measurements. The size of vessels measured ranged from a small fishing boat with a 60 HP outboard motor to an 18-unit barge train being pushed upstream by tugboat. As expected, large vessels with large engines created the highest sound levels, many times greater than the sound created by an operating hydrokinetic (HK) turbine. A comparison of sound levels from the same sources at different distances using both spherical and cylindrical sound attenuation functions suggests that spherical model results more closely approximate observed values.
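
    The spherical-versus-cylindrical comparison in the last sentence reduces to the choice of spreading law when back-propagating a received level to a nominal source level; a minimal sketch with illustrative numbers:

      import numpy as np

      def source_level(received_db, range_m, model="spherical"):
          """Back-propagate a received level to a nominal 1-m source level under a
          simple geometric spreading assumption."""
          if model == "spherical":
              return received_db + 20.0 * np.log10(range_m)
          if model == "cylindrical":
              return received_db + 10.0 * np.log10(range_m)
          raise ValueError("model must be 'spherical' or 'cylindrical'")

      # Illustrative: the same 120 dB measurement interpreted at two distances
      for r in (50.0, 500.0):
          print(f"r = {r:5.0f} m  spherical: {source_level(120.0, r):.1f} dB"
                f"  cylindrical: {source_level(120.0, r, 'cylindrical'):.1f} dB")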

  8. Feasibility of making sound power measurements in the NASA Langley V/STOL tunnel test section

    NASA Technical Reports Server (NTRS)

    Brooks, T. F.; Scheiman, J.; Silcox, R. J.

    1976-01-01

    Based on exploratory acoustic measurements in Langley's V/STOL wind tunnel, recommendations are made on the methodology for making sound power measurements of aircraft components in the closed tunnel test section. During airflow, tunnel self-noise and microphone flow-induced noise place restrictions on the amplitude and spectrum of the sound source to be measured. Models of aircraft components with high sound level sources, such as thrust engines and powered lift systems, seem likely candidates for acoustic testing.

  9. Varying sediment sources (Hudson Strait, Cumberland Sound, Baffin Bay) to the NW Labrador Sea slope between and during Heinrich events 0 to 4

    USGS Publications Warehouse

    Andrews, John T.; Barber, D.C.; Jennings, A.E.; Eberl, D.D.; Maclean, B.; Kirby, M.E.; Stoner, J.S.

    2012-01-01

    Core HU97048-007PC was recovered from the continental Labrador Sea slope at a water depth of 945 m, 250 km seaward from the mouth of Cumberland Sound, and 400 km north of Hudson Strait. Cumberland Sound is a structural trough partly floored by Cretaceous mudstones and Paleozoic carbonates. The record extends from ∼10 to 58 ka. On-board logging revealed a complex series of lithofacies, including buff-colored detrital carbonate-rich sediments [Heinrich (H)-events] frequently bracketed by black facies. We investigate the provenance of these facies using quantitative X-ray diffraction on drill-core samples from Paleozoic and Cretaceous bedrock from the SE Baffin Island Shelf, and on the < 2-mm sediment fraction in a transect of five cores from Cumberland Sound to the NW Labrador Sea. A sediment unmixing program was used to discriminate between sediment sources, which included dolomite-rich sediments from Baffin Bay, calcite-rich sediments from Hudson Strait and discrete sources from Cumberland Sound. Results indicated that the bulk of the sediment was derived from Cumberland Sound, but Baffin Bay contributed to sediments coeval with H-0 (Younger Dryas), whereas Hudson Strait was the source during H-events 1–4. Contributions from the Cretaceous outcrops within Cumberland Sound bracket H-events, thus both leading and lagging Hudson Strait-sourced H-events.
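
    The unmixing step can be sketched as a non-negative least-squares problem: apportion a sample's mineral composition among candidate source end-members. The compositions below are invented placeholders, not the study's quantitative X-ray diffraction data.

      import numpy as np
      from scipy.optimize import nnls

      # Columns: candidate sources (Cumberland Sound, Hudson Strait, Baffin Bay);
      # rows: mineral proportions (e.g., quartz, calcite, dolomite, clay) - illustrative.
      endmembers = np.array([
          [0.45, 0.20, 0.25],
          [0.05, 0.55, 0.10],
          [0.05, 0.10, 0.50],
          [0.45, 0.15, 0.15],
      ])
      sample = np.array([0.30, 0.25, 0.15, 0.30])   # measured composition of one core interval

      weights, residual = nnls(endmembers, sample)
      weights /= weights.sum()                      # normalize to fractional contributions
      print("source fractions:", weights.round(2), " residual:", round(float(residual), 3))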

  10. Propagation of second sound in a superfluid Fermi gas in the unitary limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arahata, Emiko; Nikuni, Tetsuro

    2009-10-15

    We study sound propagation in a uniform superfluid gas of Fermi atoms in the unitary limit. The existence of normal and superfluid components leads to the appearance of two sound modes in the collisional regime, referred to as first and second sound. The second sound is of particular interest as it is a clear signal of a superfluid component. Using Landau's two-fluid hydrodynamic theory, we calculate the hydrodynamic sound velocities and their weights in the density response function. The latter is used to calculate the response to a sudden modification of the external potential generating pulse propagation. The amplitude of a pulse, which is proportional to the weight in the response function, is calculated on the basis of the approach of Nozieres and Schmitt-Rink for the BCS-BEC crossover. We show that, in a superfluid Fermi gas at unitarity, the second-sound pulse is excited with an appreciable amplitude by density perturbations.
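
    For orientation only, the textbook Landau two-fluid expressions for the decoupled first- and second-sound speeds are quoted below (with entropy and specific heat taken per unit mass); the unitary-gas calculation in the abstract requires the full coupled two-fluid equations and the gas's thermodynamic functions.

      c_1^2 \simeq \left(\frac{\partial P}{\partial \rho}\right)_{\bar{s}},
      \qquad
      c_2^2 \simeq \frac{\rho_s}{\rho_n}\,\frac{\bar{s}^{\,2}\,T}{\bar{c}_v}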

  11. The RetroX auditory implant for high-frequency hearing loss.

    PubMed

    Garin, P; Genard, F; Galle, C; Jamart, J

    2004-07-01

    The objective of this study was to analyze the subjective satisfaction and measure the hearing gain provided by the RetroX (Auric GmbH, Rheine, Germany), an auditory implant of the external ear. We conducted a retrospective case review. We conducted this study at a tertiary referral center at a university hospital. We studied 10 adults with high-frequency sensorineural hearing loss (ski-slope audiogram). The RetroX consists of an electronic unit sited in the postaural sulcus connected to a titanium tube implanted under the auricle between the sulcus and the entrance of the external auditory canal. Implanting requires only minor surgery under local anesthesia. Main outcome measures were a satisfaction questionnaire, pure-tone audiometry in quiet, speech audiometry in quiet, speech audiometry in noise, and azimuth audiometry (hearing threshold as a function of sound source location within the horizontal plane at ear level). Subjectively, all 10 patients are satisfied or even extremely satisfied with the hearing improvement provided by the RetroX. They wear the implant daily, from morning to evening. We observe a statistically significant improvement of pure-tone thresholds at 1, 2, and 4 kHz. In quiet, the speech reception threshold improves by 9 dB. Speech audiometry in noise shows that intelligibility improves by 26% for a signal-to-noise ratio of -5 dB, by 18% for a signal-to-noise ratio of 0 dB, and by 13% for a signal-to-noise ratio of +5 dB. Localization audiometry indicates that the skull masks sound contralateral to the implanted ear. Of the 10 patients, one had acoustic feedback and one presented with a granulomatous reaction to the foreign body that necessitated removing the implant. The RetroX auditory implant is a semi-implantable hearing aid without occlusion of the external auditory canal. It provides a new therapeutic alternative for managing high-frequency hearing loss.

  12. Peripheral mechanisms for vocal production in birds - differences and similarities to human speech and singing.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-10-01

    Song production in songbirds is a model system for studying learned vocal behavior. As in humans, bird phonation involves three main motor systems (respiration, vocal organ and vocal tract). The avian respiratory mechanism uses pressure regulation in air sacs to ventilate a rigid lung. In songbirds sound is generated with two independently controlled sound sources, which reside in a uniquely avian vocal organ, the syrinx. However, the physical sound generation mechanism in the syrinx shows strong analogies to that in the human larynx, such that both can be characterized as myoelastic-aerodynamic sound sources. Similarities include active adduction and abduction, oscillating tissue masses which modulate flow rate through the organ and a layered structure of the oscillating tissue masses giving rise to complex viscoelastic properties. Differences in the functional morphology of the sound producing system between birds and humans require specific motor control patterns. The songbird vocal apparatus is adapted for high speed, suggesting that temporal patterns and fast modulation of sound features are important in acoustic communication. Rapid respiratory patterns determine the coarse temporal structure of song and maintain gas exchange even during very long songs. The respiratory system also contributes to the fine control of airflow. Muscular control of the vocal organ regulates airflow and acoustic features. The upper vocal tract of birds filters the sounds generated in the syrinx, and filter properties are actively adjusted. Nonlinear source-filter interactions may also play a role. The unique morphology and biomechanical system for sound production in birds presents an interesting model for exploring parallels in control mechanisms that give rise to highly convergent physical patterns of sound generation. More comparative work should provide a rich source for our understanding of the evolution of complex sound producing systems. Copyright © 2009 Elsevier Inc. All rights reserved.

  13. The auditory P50 component to onset and offset of sound

    PubMed Central

    Pratt, Hillel; Starr, Arnold; Michalewski, Henry J.; Bleich, Naomi; Mittelman, Nomi

    2008-01-01

    Objective: The auditory event-related potential (ERP) component P50 to sound onset and offset have been reported to be similar, but their magnetic homologue has been reported absent to sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. Methods: ERPs were recorded in response to onsets and offsets of silent intervals of 0.5 s (gaps) appearing randomly in otherwise continuous white noise and compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates for the clicks and sound onsets and offsets. Results: P50 occurred in response to noise onsets and to clicks, while it was absent to noise offset. Latency of P50 was similar to noise onset (56 msec) and to clicks (53 msec). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher to noise onset than offset in the vicinity of the temporo-parietal junction. Conclusions: P50 to sound offset is absent compared to the distinct P50 to sound onset and to clicks, which arise at different intracranial sources. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene. Sound offset does not involve a new sound and hence the absent P50. Significance: Stimulus onset activates distinct early cortical processes that are absent to offset. PMID:18055255

  14. Blind separation of incoherent and spatially disjoint sound sources

    NASA Astrophysics Data System (ADS)

    Dong, Bin; Antoni, Jérôme; Pereira, Antonio; Kellermann, Walter

    2016-11-01

    Blind separation of sound sources aims at reconstructing the individual sources which contribute to the overall radiation of an acoustical field. The challenge is to reach this goal using distant measurements when all sources are operating concurrently. The working assumption is usually that the sources of interest are incoherent - i.e. statistically orthogonal - so that their separation can be approached by decorrelating a set of simultaneous measurements, which amounts to diagonalizing the cross-spectral matrix. Principal Component Analysis (PCA) is traditionally used to this end. This paper reports two new findings in this context. First, a sufficient condition is established under which "virtual" sources returned by PCA coincide with true sources; it stipulates that the sources of interest should be not only incoherent but also spatially orthogonal. A particular case of this instance is met by spatially disjoint sources - i.e. with non-overlapping support sets. Second, based on this finding, a criterion that enforces both statistical and spatial orthogonality is proposed to blindly separate incoherent sound sources which radiate from disjoint domains. This criterion can be easily incorporated into acoustic imaging algorithms such as beamforming or acoustical holography to identify sound sources of different origins. The proposed methodology is validated on laboratory experiments. In particular, the separation of aeroacoustic sources is demonstrated in a wind tunnel.
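
    A minimal sketch of the PCA step described above, using synthetic data: average a cross-spectral matrix over snapshots and diagonalize it; the dominant eigenpairs are the "virtual" sources, which coincide with the true sources only under the spatial-orthogonality condition established in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n_mics, n_snapshots = 8, 500

      # Two incoherent sources observed through fixed (random) transfer vectors plus noise
      a1 = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)
      a2 = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)
      s1 = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
      s2 = 0.5 * (rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots))
      noise = 0.05 * (rng.standard_normal((n_mics, n_snapshots))
                      + 1j * rng.standard_normal((n_mics, n_snapshots)))
      X = np.outer(a1, s1) + np.outer(a2, s2) + noise

      # Cross-spectral matrix averaged over snapshots, then eigen-decomposition (PCA)
      S = (X @ X.conj().T) / n_snapshots
      eigvals, eigvecs = np.linalg.eigh(S)            # eigenvalues in ascending order
      print("largest eigenvalues:", np.round(eigvals[::-1][:3], 2))
      # The two dominant eigenpairs play the role of the 'virtual sources'.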

  15. A New Mechanism of Sound Generation in Songbirds

    NASA Astrophysics Data System (ADS)

    Goller, Franz; Larsen, Ole N.

    1997-12-01

    Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production always is accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.

  16. On the role of glottis-interior sources in the production of voiced sound.

    PubMed

    Howe, M S; McGowan, R S

    2012-02-01

    The voice source is dominated by aeroacoustic sources downstream of the glottis. In this paper an investigation is made of the contribution to voiced speech of secondary sources within the glottis. The acoustic waveform is ultimately determined by the volume velocity of air at the glottis, which is controlled by vocal fold vibration, pressure forcing from the lungs, and unsteady backreactions from the sound and from the supraglottal air jet. The theory of aerodynamic sound is applied to study the influence on the fine details of the acoustic waveform of "potential flow" added-mass-type glottal sources, glottis friction, and vorticity either in the glottis-wall boundary layer or in the portion of the free jet shear layer within the glottis. These sources govern predominantly the high frequency content of the sound when the glottis is near closure. A detailed analysis performed for a canonical, cylindrical glottis of rectangular cross section indicates that glottis-interior boundary/shear layer vortex sources and the surface frictional source are of comparable importance; the influence of the potential flow source is about an order of magnitude smaller. © 2012 Acoustical Society of America

  17. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model-Report 2

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the capability of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1 and to compare the results with simulated leak sounds. A series of electronically generated structural ultrasonic noise sources were created in the pressure wall to emulate leak signals at different locations of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). The exact sound source profiles created within the pressure wall at the source were unknown, but were estimated from the closest sensor measurement. The E-FEM method represents a reverberant sound field calculation, and of importance to this application is the requirement to correctly handle the direct field effect of the sound generation. It was also important to be able to compute the sound energy fields in the ultrasonic frequency range. This report demonstrates the capability of this technology as applied to this type of application.

  18. [Perception by teenagers and adults of amplitude-varying sound sequences used as models of sound source movement].

    PubMed

    Andreeva, I G; Vartanian, I A

    2012-01-01

    The ability to judge the direction of amplitude changes in sound stimuli was studied in adults and in 11-12- and 15-16-year-old teenagers. Sequences of 1-kHz tone fragments whose amplitude varied over time served as models of approaching and withdrawing sound sources. When judging the direction of amplitude change, the 11-12-year-old teenagers made significantly more errors than the two other groups, including in repeated experiments. The structure of the errors - the ratio of errors made on stimuli of increasing versus decreasing amplitude - also differed between teenagers and adults. The possible effect of nonspecific activation of the cerebral cortex in teenagers on decisions about complex sound stimuli, including the ability to judge approach and withdrawal of a sound source, is discussed.

  19. Interior and exterior sound field control using general two-dimensional first-order sources.

    PubMed

    Poletti, M A; Abhayapala, T D

    2011-01-01

    Reproduction of a given sound field interior to a circular loudspeaker array without producing an undesirable exterior sound field is an unsolved problem over a broadband of frequencies. At low frequencies, by implementing the Kirchhoff-Helmholtz integral using a circular discrete array of line-source loudspeakers, a sound field can be recreated within the array and produce no exterior sound field, provided that the loudspeakers have azimuthal polar responses with variable first-order responses which are a combination of a two-dimensional (2D) monopole and a radially oriented 2D dipole. This paper examines the performance of circular discrete arrays of line-source loudspeakers which also include a tangential dipole, providing general variable-directivity responses in azimuth. It is shown that at low frequencies, the tangential dipoles are not required, but that near and above the Nyquist frequency, the tangential dipoles can both improve the interior accuracy and reduce the exterior sound field. The additional dipoles extend the useful range of the array by around an octave.
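
    The variable first-order azimuthal response described above can be written as D(phi) = alpha + beta*cos(phi) + gamma*sin(phi), i.e., a 2D monopole plus radial and tangential dipole terms. A minimal sketch with illustrative (not optimized) weights:

    ```python
    import numpy as np

    def first_order_response(phi, alpha, beta, gamma):
        """General 2D first-order azimuthal directivity: monopole + radial and tangential dipoles."""
        return alpha + beta * np.cos(phi) + gamma * np.sin(phi)

    phi = np.linspace(0.0, 2.0 * np.pi, 361)
    radial_only = first_order_response(phi, 0.5, 0.5, 0.0)       # monopole + radial dipole (cardioid)
    with_tangential = first_order_response(phi, 0.5, 0.35, 0.35) # tangential term steers the main lobe
    print(np.degrees(phi[np.argmax(np.abs(with_tangential))]))   # lobe now points ~45 degrees off-axis
    ```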

  20. The silent base flow and the sound sources in a laminar jet.

    PubMed

    Sinayoko, Samuel; Agarwal, Anurag

    2012-03-01

    An algorithm to compute the silent base flow sources of sound in a jet is introduced. The algorithm is based on spatiotemporal filtering of the flow field and is applicable to multifrequency sources. It is applied to an axisymmetric laminar jet and the resulting sources are validated successfully. The sources are compared to those obtained from two classical acoustic analogies, based on quiescent and time-averaged base flows. The comparison demonstrates how the silent base flow sources shed light on the sound generation process. It is shown that the dominant source mechanism in the axisymmetric laminar jet is "shear-noise," which is a linear mechanism. The algorithm presented here could be applied to fully turbulent flows to understand the aerodynamic noise-generation mechanism. © 2012 Acoustical Society of America

  1. Modal propagation angles in a cylindrical duct with flow and their relation to sound radiation

    NASA Technical Reports Server (NTRS)

    Rice, E. J.; Heidmann, M. F.; Sofrin, T. G.

    1979-01-01

    The main emphasis is upon the propagation angle with respect to the duct axis and its relation to the far-field acoustic radiation pattern. When the steady flow Mach number is accounted for in the duct, the propagation angle in the duct is shown to be coincident with the angle of the principal lobe of far-field radiation obtained using the Wiener-Hopf technique. Different Mach numbers are allowed within the duct and in the external field. For static tests with a steady flow in an inlet but with no external Mach number the far-field radiation pattern is shifted considerably toward the inlet axis when compared to zero Mach number radiation theory. As the external Mach number is increased the noise radiation pattern is shifted away from the inlet axis. The theory is developed using approximations for sound propagation in circular ducts. An exact analysis using Hankel function solutions for the zero Mach number case is given to provide a check of the simpler approximate theory.

  2. Sound Radiated by a Wave-Like Structure in a Compressible Jet

    NASA Technical Reports Server (NTRS)

    Golubev, V. V.; Prieto, A. F.; Mankbadi, R. R.; Dahl, M. D.; Hixon, R.

    2003-01-01

    This paper extends the analysis of acoustic radiation from the source model representing spatially-growing instability waves in a round jet at high speeds. Compared to previous work, a modified approach to the sound source modeling is examined that employs a set of solutions to linearized Euler equations. The sound radiation is then calculated using an integral surface method.

  3. Photoacoustic Effect Generated from an Expanding Spherical Source

    NASA Astrophysics Data System (ADS)

    Bai, Wenyu; Diebold, Gerald J.

    2018-02-01

    Although the photoacoustic effect is typically generated by amplitude-modulated continuous or pulsed radiation, the form of the wave equation for pressure that governs the generation of sound indicates that optical sources moving in an absorbing fluid can produce sound as well. Here, the characteristics of the acoustic wave produced by a radially symmetric Gaussian source expanding outwardly from the origin are found. The unique feature of the photoacoustic effect from the spherical source is a trailing compressive wave that arises from reflection of an inwardly propagating component of the wave. Similar to the one-dimensional geometry, an unbounded amplification effect is found for the Gaussian source expanding at the sound speed.
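
    The wave equation for pressure referred to in the abstract is, in its commonly cited photoacoustic form (notation assumed here: beta the thermal expansion coefficient, C_p the specific heat at constant pressure, c the sound speed, and H the heating function, i.e., the optical energy deposited per unit volume and time):

    \[
      \left( \nabla^2 - \frac{1}{c^2}\,\frac{\partial^2}{\partial t^2} \right) p(\mathbf{r},t)
      = -\frac{\beta}{C_p}\,\frac{\partial H(\mathbf{r},t)}{\partial t},
    \]

    so that a heating function that moves or expands in space acts as a moving source term for the pressure p.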

  4. Volume I. Percussion Sextet. (original Composition). Volume II. The Simulation of Acoustical Space by Means of Physical Modeling.

    NASA Astrophysics Data System (ADS)

    Manzara, Leonard Charles

    1990-01-01

    The dissertation is in two parts:. 1. Percussion Sextet. The Percussion Sextet is a one movement musical composition with a length of approximately fifteen minutes. It is for six instrumentalists, each on a number of percussion instruments. The overriding formal problem was to construct a coherent and compelling structure which fuses a diversity of musical materials and textures into a dramatic whole. Particularly important is the synthesis of opposing tendencies contained in stochastic and deterministic processes: global textures versus motivic detail, and randomness versus total control. Several compositional techniques are employed in the composition. These methods of composition will be aided, in part, by the use of artificial intelligence techniques programmed on a computer. Finally, the percussion ensemble is the ideal medium to realize the above processes since it encompasses a wide range of both pitched and unpitched timbres, and since a great variety of textures and densities can be created with a certain economy of means. 2. The simulation of acoustical space by means of physical modeling. This is a written report describing the research and development of a computer program which simulates the characteristics of acoustical space in two dimensions. With the computer program the user can simulate most conventional acoustical spaces, as well as those physically impossible to realize in the real world. The program simulates acoustical space by means of geometric modeling. This involves defining wall equations, phantom source points and wall diffusions, and then processing input files containing digital signals through the program, producing output files ready for digital to analog conversion. The user of the program is able to define wall locations and wall reflectivity and roughness characteristics, all of which can be changed over time. Sound source locations are also definable within the acoustical space and these locations can be changed independently at any rate of speed. The sounds themselves are generated from any external sound synthesis program or appropriate sampling system. Finally, listener location and orientation is also user definable and dynamic in nature. A Receive-ReBroadcast (RRB) model is used to play back the sound and is definable from two to eight channels of sound. (Abstract shortened with permission of author.).
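
    A minimal sketch of the phantom-source (image-source) construction underlying this kind of geometric room model: one wall and one first-order reflection, with 1/r spreading and a scalar wall reflectivity assumed for simplicity. Function names and values are illustrative only.

    ```python
    import numpy as np

    C = 343.0  # speed of sound in air, m/s (assumed)

    def image_source(src, wall_point, wall_normal):
        """Mirror a 2D source position across a wall line to obtain its phantom source."""
        src = np.asarray(src, dtype=float)
        n = np.asarray(wall_normal, dtype=float)
        n = n / np.linalg.norm(n)
        d = np.dot(src - np.asarray(wall_point, dtype=float), n)
        return src - 2.0 * d * n

    def reflection_arrival(src, listener, wall_point, wall_normal, reflectivity=0.8):
        """Delay (s) and relative amplitude of the single-bounce reflection via the phantom source."""
        img = image_source(src, wall_point, wall_normal)
        r = np.linalg.norm(np.asarray(listener, dtype=float) - img)
        return r / C, reflectivity / r   # 1/r spreading assumed, scaled by a scalar wall reflectivity

    delay, amp = reflection_arrival(src=[1.0, 2.0], listener=[4.0, 1.0],
                                    wall_point=[0.0, 0.0], wall_normal=[0.0, 1.0])
    print(f"reflection arrives after {delay * 1000:.1f} ms with relative amplitude {amp:.3f}")
    ```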

  5. Measurement and Numerical Calculation of Force on a Particle in a Strong Acoustic Field Required for Levitation

    NASA Astrophysics Data System (ADS)

    Kozuka, Teruyuki; Yasui, Kyuichi; Tuziuti, Toru; Towata, Atsuya; Lee, Judy; Iida, Yasuo

    2009-07-01

    Using a standing-wave field generated between a sound source and a reflector, it is possible to trap small objects at nodes of the sound pressure distribution in air. In this study, a sound field generated under a flat or concave reflector was studied by both experimental measurement and numerical calculation. The calculated result agrees well with the experimental data. The maximum force generated between a sound source of 25.0 mm diameter and a concave reflector is 0.8 mN in the experiment. A steel ball of 2.0 mm in diameter was levitated in the sound field in air.

  6. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    The heart sound is one of the most important physiological signals. However, its acquisition can be disturbed by many external factors: the heart sound is a weak signal, and even low-level external noise can lead to misinterpretation of the pathological and physiological information it carries, and hence to misdiagnosis. Removing the noise mixed with the heart sound is therefore essential. This paper presents a systematic study of heart sound denoising based on MATLAB. The noisy heart sound signals are first transformed into the wavelet domain and decomposed over multiple levels using the wavelet transform. Soft thresholding is then applied to the detail coefficients to suppress noise, which significantly improves the denoised signal. The signal is reconstructed stepwise from the processed detail coefficients. Finally, 50 Hz power-line interference and 35 Hz electromechanical interference are removed with a notch filter.
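
    The pipeline described above can be sketched as follows (in Python rather than MATLAB, with PyWavelets and SciPy as assumed stand-ins; the wavelet choice, decomposition depth, threshold rule, and sampling rate are illustrative, not the paper's exact settings):

    ```python
    import numpy as np
    import pywt
    from scipy.signal import iirnotch, filtfilt

    def denoise_heart_sound(x, fs, wavelet="db6", level=5):
        """Wavelet soft-threshold denoising followed by 50 Hz and 35 Hz notch filtering."""
        coeffs = pywt.wavedec(x, wavelet, level=level)           # [approx, detail_L, ..., detail_1]
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from finest details
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))              # universal threshold (assumed rule)
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        y = pywt.waverec(coeffs, wavelet)[:len(x)]               # stepwise reconstruction
        for f0 in (50.0, 35.0):                                  # power-line and electromechanical tones
            b, a = iirnotch(f0, Q=30.0, fs=fs)
            y = filtfilt(b, a, y)
        return y

    fs = 2000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    noisy = np.sin(2 * np.pi * 30 * t) + 0.3 * np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(len(t))
    denoised = denoise_heart_sound(noisy, fs)
    ```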

  7. Evaluation of hearing protection used by police officers in the shooting range.

    PubMed

    Guida, Heraldo Lorena; Taxini, Carla Linhares; Gonçalves, Claudia Giglio de Oliveira; Valenti, Vitor Engrácia

    2014-01-01

    Impact noise is characterized by acoustic energy peaks that last less than a second, at intervals of more than 1s. To quantify the levels of impact noise to which police officers are exposed during activities at the shooting range and to evaluate the attenuation of the hearing protector. Measurements were performed in the shooting range of a military police department. An SV 102 audiodosimeter (Svantek) was used to measure sound pressure levels. Two microphones were used simultaneously: one external and one insertion type; the firearm used was a 0.40 Taurus® rimless pistol. The values obtained with the external microphone were 146 dBC (peak), and a maximum sound level of 129.4 dBC (fast). The results obtained with the insertion microphone were 138.7 dBC (peak), and a maximum sound level of 121.6 dBC (fast). The findings showed high levels of sound pressure in the shooting range, which exceeded the maximum recommended noise (120 dBC), even when measured through the insertion microphone. Therefore, alternatives to improve the performance of hearing protection should be considered. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  8. Sound field reproduction as an equivalent acoustical scattering problem.

    PubMed

    Fazi, Filippo Maria; Nelson, Philip A

    2013-11-01

    Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
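
    For reference, the single layer potential used above to model reproduction by a continuous secondary source distribution can be written as (free-field Green's function and sign convention assumed here)

    \[
      p(\mathbf{x}) = \int_{\partial\Omega} G(\mathbf{x}\,|\,\mathbf{y})\, \mu(\mathbf{y})\, \mathrm{d}S(\mathbf{y}),
      \qquad
      G(\mathbf{x}\,|\,\mathbf{y}) = \frac{e^{-jk\lVert \mathbf{x}-\mathbf{y} \rVert}}{4\pi \lVert \mathbf{x}-\mathbf{y} \rVert},
    \]

    where mu is the secondary source strength on the boundary of the reproduction region; the paper's point is that determining mu so that p matches the target field inside the boundary is equivalent to solving an acoustic scattering problem on that same boundary.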

  9. Investigation of spherical loudspeaker arrays for local active control of sound.

    PubMed

    Peleg, Tomer; Rafaely, Boaz

    2011-10-01

    Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America

  10. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity function at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to the CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on user's immersion in virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustics effects and spatial audio in the virtual environment.
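
    The second contribution above - representing source directivity as a linear combination of elementary spherical harmonic (SH) sources and summing precomputed per-SH fields at runtime - can be sketched as below. The grid resolution, SH order, example directivity, and the placeholder `sh_fields` table of precomputed fields are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    order = 2
    azim = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)       # azimuth samples
    polar = np.linspace(0.0, np.pi, 37)                             # polar samples
    AZ, PO = np.meshgrid(azim, polar, indexing="ij")

    directivity = 0.5 + 0.5 * np.cos(PO)                            # example axisymmetric (cardioid-like) pattern
    dA = np.sin(PO) * (azim[1] - azim[0]) * (polar[1] - polar[0])   # quadrature weights on the sphere

    coeffs = {}
    for n in range(order + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(m, n, AZ, PO)                              # SciPy convention: sph_harm(m, n, azimuth, polar)
            coeffs[(n, m)] = np.sum(directivity * np.conj(Y) * dA)  # projection onto Y_nm

    # Runtime step: total pressure at the listener = SH-weighted sum of precomputed fields.
    rng = np.random.default_rng(1)
    sh_fields = {key: rng.standard_normal() + 1j * rng.standard_normal() for key in coeffs}  # placeholders
    pressure = sum(coeffs[key] * sh_fields[key] for key in coeffs)
    ```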

  11. Echolocation versus echo suppression in humans

    PubMed Central

    Wallmeier, Ludwig; Geßele, Nikodemus; Wiegrebe, Lutz

    2013-01-01

    Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing to both the direct sound of the vocalization that precedes the echoes and owing to the fact that the subjects actively vocalize in the echolocation task. PMID:23986105

  12. Two dimensional sound field reproduction using higher order sources to exploit room reflections.

    PubMed

    Betlehem, Terence; Poletti, Mark A

    2014-04-01

    In this paper, sound field reproduction is performed in a reverberant room using higher order sources (HOSs) and a calibrating microphone array. Previously a sound field was reproduced with fixed directivity sources and the reverberation compensated for using digital filters. However by virtue of their directive properties, HOSs may be driven to not only avoid the creation of excess reverberation but also to use room reflection to contribute constructively to the desired sound field. The manner by which the loudspeakers steer the sound around the room is determined by measuring the acoustic transfer functions. The requirements on the number and order N of HOSs for accurate reproduction in a reverberant room are derived, showing a 2N + 1-fold decrease in the number of loudspeakers in comparison to using monopole sources. HOSs are shown applicable to rooms with a rich variety of wall reflections while in an anechoic room their advantages may be lost. Performance is investigated in a room using extensions of both the diffuse field model and a more rigorous image-source simulation method, which account for the properties of the HOSs. The robustness of the proposed method is validated by introducing measurement errors.

  13. Car glass microphones using piezoelectric transducers for external alarm detection and localization

    NASA Astrophysics Data System (ADS)

    Bolzmacher, Christian; Le Guelvouit, Valentin

    2015-05-01

    This work describes the potential use of car windows as a long-range acoustic sensing device for external alarm signals. The goal is to detect and localize siren signals (e.g., from ambulances and police cars) and to alert presbycusic drivers to their presence by visual and acoustic feedback, in order to improve individual mobility and increase the sense of security. The glass panes of a Renault Zoé, operating as an acoustic antenna, have been equipped with large piezoceramic rings of 50 mm outer diameter, hidden in the lower part of the door structure and the lower parts of the windshield and rear window. The response of the glass to quasi-static signals and sweep excitation has been recorded. In general, the glass pane acts as a high-pass filter due to its inherent stiffness and provides only little damping; this effect is compensated by a charge amplifier circuit. Detection at ranges up to 120 m is reported, as well as a dynamic test in which the car moves toward the sound source.

  14. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.

  15. Seismic and Biological Sources of Ambient Ocean Sound

    NASA Astrophysics Data System (ADS)

    Freeman, Simon Eric

    Sound is the most efficient radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. A two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using two-dimensional hydrophone array in the third case. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.

  16. Calculating far-field radiated sound pressure levels from NASTRAN output

    NASA Technical Reports Server (NTRS)

    Lipman, R. R.

    1986-01-01

    FAFRAP is a computer program which calculates far field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.
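
    For context on the comparison mentioned above, the explicit free-field result for a monopole of volume-velocity amplitude Q at angular frequency omega, in a fluid of density rho and sound speed c, is the standard expression (not FAFRAP-specific):

    \[
      |p(r)| = \frac{\rho c k Q}{4\pi r}, \qquad k = \frac{\omega}{c}, \qquad
      \mathrm{SPL}(r) = 20\log_{10}\!\left(\frac{|p(r)|/\sqrt{2}}{p_{\mathrm{ref}}}\right),
    \]

    with p_ref the reference pressure (20 µPa in air, 1 µPa in water).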

  17. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
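
    The paper's exact ITD model is not reproduced here, but a common spherical-head (Woodworth-type) approximation illustrates the geometric fact the argument relies on: ITD grows sublinearly with azimuth, so the change in ITD per degree shrinks toward the side, consistent with poorer lateral acuity if just-noticeable differences in ITD are roughly uniform. A minimal sketch, with an assumed head radius:

    ```python
    import numpy as np

    C = 343.0             # speed of sound, m/s
    HEAD_RADIUS = 0.0875  # typical adult head radius, m (assumed)

    def itd_spherical_head(azimuth_rad, a=HEAD_RADIUS, c=C):
        """Interaural time difference for a distant source, Woodworth-type approximation."""
        return (a / c) * (azimuth_rad + np.sin(azimuth_rad))

    az = np.radians(np.arange(0, 91, 15))
    for deg, itd in zip(np.degrees(az), itd_spherical_head(az) * 1e6):
        print(f"azimuth {deg:5.1f} deg -> ITD {itd:6.1f} us")
    # d(ITD)/d(azimuth) = (a/c) * (1 + cos(azimuth)): the ITD change per degree shrinks toward
    # the side, so uniform just-noticeable differences in ITD imply poorer lateral acuity.
    ```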

  18. Graphene-on-paper sound source devices.

    PubMed

    Tian, He; Ren, Tian-Ling; Xie, Dan; Wang, Yu-Feng; Zhou, Chang-Jian; Feng, Ting-Ting; Fu, Di; Yang, Yi; Peng, Ping-Gang; Wang, Li-Gang; Liu, Li-Tian

    2011-06-28

    We demonstrate the interesting phenomenon that graphene can emit sound, which expands the potential applications of graphene into acoustics. Graphene-on-paper sound source devices are made by patterning graphene on paper substrates. Three graphene sheet samples with thicknesses of 100, 60, and 20 nm were fabricated. Sound emission from graphene was measured as a function of power, distance, angle, and frequency in the far field. A theoretical model of the air/graphene/paper/PCB-board multilayer structure is established to analyze the sound directivity, frequency response, and efficiency. The measured sound pressure level (SPL) and efficiency are in good agreement with theoretical results. Graphene is found to have a notably flat frequency response across the wide ultrasound range of 20-50 kHz. In addition, thinner graphene sheets produce higher SPL owing to their lower heat capacity per unit area (HCPUA). Infrared thermal images reveal that a thermoacoustic effect is the working principle, and the sound performance mainly depends on the HCPUA of the conductor and the thermal properties of the substrate. The paper-based graphene sound source devices are highly reliable and flexible, involve no mechanical vibration, and combine a simple structure with high performance. They could find wide application in multimedia, consumer electronics, biological, medical, and many other areas.

  19. Representation of Sound Objects within Early-Stage Auditory Areas: A Repetition Effect Study Using 7T fMRI

    PubMed Central

    Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie

    2015-01-01

    Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i. e. a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430

  20. Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington

    USGS Publications Warehouse

    Uhrich, M.A.; McGrath, T.S.

    1997-01-01

    Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers, from sources other than the Puget Sound Environmental Atlas, include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.

  1. Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.

    PubMed

    Firtha, Gergely; Fiala, Péter

    2017-08-01

    The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions 2.5D Wave Field Synthesis driving functions are derived for arbitrary shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.

  2. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

    Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle. Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29° was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit. There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615

  3. Hardwall acoustical characteristics and measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel

    NASA Technical Reports Server (NTRS)

    Rentz, P. E.

    1976-01-01

    Experimental evaluations of the acoustical characteristics and source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated or hardwall configuration were performed. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of source direct field acoustic radiation were found to be limited by the test section reverberant characteristics to 3.0 feet for omni-directional and highly directional sources. Wind-on noise measurements in the test section, settling chamber and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.

  4. Reduced order modeling of head related transfer functions for virtual acoustic displays

    NASA Astrophysics Data System (ADS)

    Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley

    2003-04-01

    The purpose of this work is to improve the computational efficiency in acoustic virtual applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from -90 deg to +90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds that were emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
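
    A Kung-type (SVD-based, ERA-style) realization of a reduced-order state-space model from a single impulse response can be sketched as follows; the Hankel dimensions, model order, and the synthetic stand-in "HRIR" are illustrative assumptions, not the study's actual data or order selection.

    ```python
    import numpy as np

    def kung_realization(h, order, rows=60):
        """Reduced-order state-space model (A, B, C, D) approximating impulse response h."""
        cols = len(h) - rows - 1
        H = np.array([h[1 + i:1 + i + cols] for i in range(rows)])        # Hankel matrix of h[1:]
        H_shift = np.array([h[2 + i:2 + i + cols] for i in range(rows)])  # one-sample-shifted Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        U, s, Vt = U[:, :order], s[:order], Vt[:order, :]                 # truncate to the model order
        sqrt_s = np.diag(np.sqrt(s))
        obs, ctr = U @ sqrt_s, sqrt_s @ Vt                                # observability / controllability factors
        A = np.linalg.pinv(obs) @ H_shift @ np.linalg.pinv(ctr)
        B, C, D = ctr[:, :1], obs[:1, :], np.array([[h[0]]])
        return A, B, C, D

    def impulse_response(A, B, C, D, n):
        """Unit-impulse response of the discrete-time state-space model."""
        x = np.zeros((A.shape[0], 1))
        y = np.zeros(n)
        for k in range(n):
            u = 1.0 if k == 0 else 0.0
            y[k] = (C @ x).item() + D.item() * u
            x = A @ x + B * u
        return y

    # Stand-in "HRIR": a sum of damped sinusoids (a real HRIR would be loaded from data).
    t = np.arange(128)
    hrir = np.exp(-0.03 * t) * np.cos(0.3 * t) + 0.5 * np.exp(-0.05 * t) * np.sin(0.7 * t)
    A, B, C, D = kung_realization(hrir, order=8)
    err = np.linalg.norm(hrir - impulse_response(A, B, C, D, len(hrir))) / np.linalg.norm(hrir)
    print(f"relative approximation error: {err:.2e}")
    ```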

  5. Estimation of multiple sound sources with data and model uncertainties using the EM and evidential EM algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme

    2016-01-01

    This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm, known as the Evidential EM (E2M) algorithm. Eventually, both simulation and real experiments are shown to illustrate the advantage of using the EM in the case without uncertainty and the E2M in the case of uncertain measurement.
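
    A minimal sketch of the E-step/M-step structure for the certain-data case, estimating source strengths (powers) at fixed candidate locations under a Gaussian latent-signal model; the paper additionally estimates source locations and handles uncertain microphone positions and wavenumber through the evidential (E2M) extension, which is not reproduced here. The propagation matrix `G`, noise variance, and synthetic data are assumptions.

    ```python
    import numpy as np

    def em_source_powers(X, G, sigma2, n_iter=50):
        """EM estimate of source powers; X: (n_mics, n_snapshots) spectra, G: (n_mics, n_sources)."""
        n_mics, _ = X.shape
        gamma = np.ones(G.shape[1])                               # initial source powers
        for _ in range(n_iter):
            # E-step: posterior of the latent source signals given the data and current powers.
            Sigma_x = G @ np.diag(gamma) @ G.conj().T + sigma2 * np.eye(n_mics)
            W = np.diag(gamma) @ G.conj().T @ np.linalg.inv(Sigma_x)
            post_mean = W @ X                                     # posterior means, (n_sources, n_snapshots)
            post_cov = np.diag(gamma) - W @ G @ np.diag(gamma)    # posterior covariance (snapshot-independent)
            # M-step: update each source power from the posterior second moment.
            gamma = np.mean(np.abs(post_mean) ** 2, axis=1) + np.real(np.diag(post_cov))
        return gamma

    # Synthetic check: 5 candidate sources, 2 of them silent, observed on 16 microphones.
    rng = np.random.default_rng(3)
    G = rng.standard_normal((16, 5)) + 1j * rng.standard_normal((16, 5))
    true_gamma = np.array([4.0, 0.0, 1.0, 0.0, 0.25])
    S = np.sqrt(true_gamma / 2)[:, None] * (rng.standard_normal((5, 200)) + 1j * rng.standard_normal((5, 200)))
    X = G @ S + 0.1 * (rng.standard_normal((16, 200)) + 1j * rng.standard_normal((16, 200)))
    print(np.round(em_source_powers(X, G, sigma2=0.02), 2))
    ```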

  6. Auditory Localization: An Annotated Bibliography

    DTIC Science & Technology

    1983-11-01

    transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources... important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical

  7. Detection of Sound Image Movement During Horizontal Head Rotation

    PubMed Central

    Ohba, Kagesho; Iwaya, Yukio; Suzuki, Yôiti

    2016-01-01

    Movement detection for a virtual sound source was measured during the listener's horizontal head rotation. Listeners were instructed to rotate their heads at a given speed. A trial consisted of two intervals. During an interval, a virtual sound source was presented 60° to the right or left of the listener, who was instructed to rotate the head to face the sound image position. In one of the two intervals, the sound position was moved slightly in the middle of the rotation. Listeners were asked to judge in which interval of a trial the sound stimulus moved. Results suggest that detection thresholds are higher when listeners rotate their heads. Moreover, this effect was found to be independent of rotation velocity. PMID:27698993

  8. Rotorcraft Noise Model

    NASA Technical Reports Server (NTRS)

    Lucas, Michael J.; Marcolini, Michael A.

    1997-01-01

    The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program, developed for NASA Langley Research Center, which calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set of sound hemispheres provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format, and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.

  9. The effect of spatial auditory landmarks on ambulation.

    PubMed

    Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E

    2018-02-01

    The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front at 135 centimeters away from the ear at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment was performed where the effect of moving the speaker azimuthal position to 45, 90, 135, and 180° was tested. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135° but all subjects then improved slightly at the 180° compared to the 135° condition. These results suggest that presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The influence of imagery vividness on cognitive and perceptual cues in circular auditorily-induced vection.

    PubMed

    Väljamäe, Aleksander; Sell, Sara

    2014-01-01

    In the absence of other congruent multisensory motion cues, sound contribution to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants' imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivered via a loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD-based acoustic cues were more instrumental than ILD-based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection "rich" cues, i.e., acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with recent fMRI work which suggested that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on a more schematic and conscious framework. Consequently, our results provide additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensory induced vection.

  12. Differential presence of anthropogenic compounds dissolved in the marine waters of Puget Sound, WA and Barkley Sound, BC.

    PubMed

    Keil, Richard; Salemme, Keri; Forrest, Brittany; Neibauer, Jaqui; Logsdon, Miles

    2011-11-01

    Organic compounds were evaluated in March 2010 at 22 stations in Barkley Sound, Vancouver Island, Canada, and at 66 locations in Puget Sound. Of 37 compounds, 15 were xenobiotics, 8 were determined to have an anthropogenic imprint over natural sources, and 13 were presumed to be of natural or mixed origin. The three most frequently detected compounds were salicylic acid, vanillin and thymol. The three most abundant compounds were diethylhexyl phthalate (DEHP), ethyl vanillin and benzaldehyde (∼600 ng L(-1) on average). Concentrations of xenobiotics were 10-100 times higher in Puget Sound relative to Barkley Sound. Three compound couplets are used to illustrate the influence of human activity on marine waters: vanillin and ethyl vanillin, salicylic acid and acetylsalicylic acid, and cinnamaldehyde and cinnamic acid. Ratios indicate that anthropogenic activities are the predominant source of these chemicals in Puget Sound. Published by Elsevier Ltd.

  13. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds they make. Developing a technique to localize sound sources amidst loud noise will therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for imperceptible sound in loud noise environments. Two loudspeakers simultaneously played generator noise and a voice 20 dB (= 1/100 of the power) below the generator noise, in an outdoor space where cicadas were calling. The sound was received by a horizontally mounted linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed, and the voice was extracted and played back as an audible sound by array signal processing.
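
    A minimal sketch of the kind of delay-and-sum processing such an array could use to estimate the direction of a weak source. The array geometry below matches the 15-microphone, 1.05 m aperture described above, but the sample rate, tone frequency, signals, and noise level are synthetic placeholders, not values from the experiment.

      import numpy as np

      # Far-field delay-and-sum beamforming with a uniform linear array.
      fs = 16000                       # sample rate (Hz), illustrative
      c = 343.0                        # speed of sound (m/s)
      n_mics = 15
      mic_x = np.arange(n_mics) * (1.05 / (n_mics - 1))   # 1.05 m aperture

      # Synthetic test data: a noisy tone arriving from 30 degrees off broadside.
      true_angle = np.deg2rad(30.0)
      t = np.arange(0, 0.2, 1 / fs)
      delays = mic_x * np.sin(true_angle) / c
      signals = np.stack([np.sin(2 * np.pi * 500 * (t - d)) for d in delays])
      signals += 0.5 * np.random.randn(*signals.shape)

      def steered_power(signals, angle):
          """Advance each channel by its candidate delay (integer samples) and sum."""
          shifts = np.round(mic_x * np.sin(angle) / c * fs).astype(int)
          beam = np.sum([np.roll(sig, -s) for sig, s in zip(signals, shifts)], axis=0)
          return np.mean(beam ** 2)

      angles = np.deg2rad(np.linspace(-90, 90, 181))
      powers = [steered_power(signals, a) for a in angles]
      print("estimated direction: %.1f deg" % np.rad2deg(angles[int(np.argmax(powers))]))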

  14. Mapping the sound field of an erupting submarine volcano using an acoustic glider.

    PubMed

    Matsumoto, Haru; Haxel, Joseph H; Dziak, Robert P; Bohnenstiehl, Delwayne R; Embley, Robert W

    2011-03-01

    An underwater glider with an acoustic data logger flew toward a recently discovered erupting submarine volcano in the northern Lau basin. With the volcano providing a wide-band sound source, recordings from the two-day survey produced a two-dimensional sound level map spanning 1 km (depth) × 40 km (distance). The observed sound field shows depth- and range-dependence, with the first-order spatial pattern being consistent with the predictions of a range-dependent propagation model. The results allow constraining the acoustic source level of the volcanic activity and suggest that the glider provides an effective platform for monitoring natural and anthropogenic ocean sounds. © 2011 Acoustical Society of America.
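
    A back-of-the-envelope sketch of how a source level can be constrained from a received level at a known range, assuming simple spherical spreading plus a small absorption term; the level, range, and absorption values are placeholders, not results from the survey or its range-dependent model.

      import numpy as np

      # Passive relation: source level = received level + transmission loss,
      # with TL = 20*log10(r) + alpha*r (spherical spreading + absorption).
      # All numbers below are illustrative.
      def transmission_loss(range_m, alpha_db_per_km=0.04):
          return 20 * np.log10(range_m) + alpha_db_per_km * range_m / 1000.0

      received_level_db = 130.0      # dB re 1 uPa at the glider (hypothetical)
      range_m = 20_000.0             # distance to the volcano (hypothetical)
      source_level_db = received_level_db + transmission_loss(range_m)
      print("inferred source level: %.1f dB re 1 uPa @ 1 m" % source_level_db)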

  15. Study of environmental sound source identification based on hidden Markov model for robust speech recognition

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2003-10-01

    Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense its acoustic environment and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three-state HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by the Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
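
    A minimal sketch of this style of HMM-based identification, assuming the third-party hmmlearn package: one three-state Gaussian HMM is trained per sound class and a test sequence is assigned to the class with the highest log-likelihood. The random feature frames below are placeholders standing in for real acoustic features (e.g., MFCCs); the class names are invented.

      import numpy as np
      from hmmlearn import hmm   # third-party package, assumed available

      rng = np.random.default_rng(0)

      def train_class_model(sequences):
          """Fit one 3-state Gaussian HMM to all training sequences of a class."""
          X = np.vstack(sequences)
          lengths = [len(s) for s in sequences]
          model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=25)
          model.fit(X, lengths)
          return model

      # Two toy classes with different feature statistics (placeholder data).
      class_a = [rng.normal(0.0, 1.0, size=(100, 13)) for _ in range(5)]
      class_b = [rng.normal(2.0, 1.0, size=(100, 13)) for _ in range(5)]
      models = {"class_a": train_class_model(class_a),
                "class_b": train_class_model(class_b)}

      test = rng.normal(2.0, 1.0, size=(100, 13))     # should resemble class_b
      scores = {name: m.score(test) for name, m in models.items()}
      print("identified as:", max(scores, key=scores.get))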

  16. Theoretical and Experimental Aspects of Acoustic Modelling of Engine Exhaust Systems with Applications to a Vacuum Pump

    NASA Astrophysics Data System (ADS)

    Sridhara, Basavapatna Sitaramaiah

    In an internal combustion engine, the engine is the noise source and the exhaust pipe is the main transmitter of noise. Mufflers are often used to reduce engine noise level in the exhaust pipe. To optimize a muffler design, a series of experiments could be conducted using various mufflers installed in the exhaust pipe. For each configuration, the radiated sound pressure could be measured. However, this is not a very efficient method. A second approach would be to develop a scheme involving only a few measurements which can predict the radiated sound pressure at a specified distance from the open end of the exhaust pipe. In this work, the engine exhaust system was modelled as a lumped source-muffler-termination system. An expression for the predicted sound pressure level was derived in terms of the source and termination impedances, and the muffler geometry. The pressure source and monopole radiation models were used for the source and the open end of the exhaust pipe. The four pole parameters were used to relate the acoustic properties at two different cross sections of the muffler and the pipe. The developed formulation was verified through a series of experiments. Two loudspeakers and a reciprocating type vacuum pump were used as sound sources during the tests. The source impedance was measured using the direct, two-load and four-load methods. A simple expansion chamber and a side-branch resonator were used as mufflers. Sound pressure level measurements for the prediction scheme were made for several source-muffler and source-straight pipe combinations. The predicted and measured sound pressure levels were compared for all cases considered. In all cases, correlation of the experimental results and those predicted by the developed expressions was good. Predicted and measured values of the insertion loss of the mufflers were compared. The agreement between the two was good. Also, an error analysis of the four-load method was done.
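
    A minimal sketch of the four-pole (transfer-matrix) bookkeeping described above, applied to a single expansion chamber between two equal pipes. The dimensions are illustrative, not those of the vacuum-pump rig; the peaks of the computed curve agree with the textbook closed form TL = 10*log10(1 + 0.25*(m - 1/m)^2 * sin^2(kL)), where m is the area ratio.

      import numpy as np

      rho, c = 1.21, 343.0
      S_pipe = np.pi * 0.025**2          # 50 mm diameter inlet/outlet pipe (illustrative)
      S_chamber = np.pi * 0.075**2       # 150 mm diameter chamber (illustrative)
      L = 0.30                           # chamber length (m)

      def duct_four_pole(k, L, S):
          """Four-pole matrix of a uniform duct of length L and cross-section S."""
          Z = rho * c / S
          return np.array([[np.cos(k * L), 1j * Z * np.sin(k * L)],
                           [1j * np.sin(k * L) / Z, np.cos(k * L)]])

      freqs = np.linspace(20, 2000, 500)
      Zn = rho * c / S_pipe              # characteristic impedance of the end pipes
      tl = []
      for f in freqs:
          k = 2 * np.pi * f / c
          A, B, C, D = duct_four_pole(k, L, S_chamber).ravel()
          tl.append(20 * np.log10(0.5 * abs(A + B / Zn + C * Zn + D)))

      print("max transmission loss over band: %.1f dB" % max(tl))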

  17. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    PubMed Central

    Kamminga, Jacob; Le, Duc; Havinga, Paul

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
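
    A minimal sketch of the time-difference-of-arrival estimate that underlies this kind of collaborative localization, using cross-correlation of two synthetic recordings; real devices would add the uncertain input latencies that the CLASS algorithm is designed to tolerate. Spacing, sample rate, and bearing are illustrative.

      import numpy as np

      fs = 48000
      c = 343.0
      d = 1.0                                  # device spacing (m), illustrative
      true_delay_s = d * np.sin(np.deg2rad(25)) / c

      t = np.arange(0, 0.1, 1 / fs)
      src = np.random.randn(t.size)            # broadband, ambient-like source
      shift = int(round(true_delay_s * fs))
      mic1 = src
      mic2 = np.roll(src, shift)               # delayed copy at the second device

      corr = np.correlate(mic2, mic1, mode="full")
      lag = np.argmax(corr) - (len(mic1) - 1)  # lag of mic2 relative to mic1, in samples
      tdoa = lag / fs
      bearing = np.rad2deg(np.arcsin(np.clip(tdoa * c / d, -1, 1)))
      print("estimated TDOA: %.3f ms, bearing: %.1f deg" % (tdoa * 1e3, bearing))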

  18. Amplitude and Wavelength Measurement of Sound Waves in Free Space using a Sound Wave Phase Meter

    NASA Astrophysics Data System (ADS)

    Ham, Sounggil; Lee, Kiwon

    2018-05-01

    We developed a sound wave phase meter (SWPM) and measured the amplitude and wavelength of sound waves in free space. The SWPM consists of two parallel metal plates, where the front plate operates as a diaphragm. An aluminum perforated plate was additionally installed in front of the diaphragm, and the same signal as that applied to the sound source was applied to the perforated plate. The SWPM measures both the sound wave signal due to the diaphragm vibration and the induction signal due to the electric field of the aluminum perforated plate. The two measured signals therefore interfere with a phase difference that depends on the distance between the sound source and the SWPM, so the amplitude of the resulting composite signal changes periodically with that distance. We obtained the wavelength of the sound wave from this periodic amplitude change measured in free space and compared it with theoretically calculated values.
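
    A minimal sketch of extracting a wavelength from such a periodic amplitude change: in a simple two-component interference model, the composite amplitude repeats once per wavelength as the distance varies, so the spacing of successive maxima gives the wavelength. The amplitude-versus-distance profile below is simulated, not measured.

      import numpy as np
      from scipy.signal import find_peaks

      c, f = 343.0, 2000.0
      true_wavelength = c / f
      d = np.linspace(0.05, 1.0, 2000)                  # distance sweep (m), simulated
      # Constant induction signal plus an acoustic term whose phase advances with distance.
      composite = np.abs(1.0 + 0.8 * np.cos(2 * np.pi * d / true_wavelength))

      peaks, _ = find_peaks(composite)
      est = np.mean(np.diff(d[peaks]))                  # mean spacing of amplitude maxima
      print("estimated wavelength: %.3f m (true %.3f m)" % (est, true_wavelength))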

  19. Ultra-thin smart acoustic metasurface for low-frequency sound insulation

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Xiao, Yong; Wen, Jihong; Yu, Dianlong; Wen, Xisen

    2016-04-01

    Insulating low-frequency sound is a conventional challenge due to the high areal mass required by mass law. In this letter, we propose a smart acoustic metasurface consisting of an ultra-thin aluminum foil bonded with piezoelectric resonators. Numerical and experimental results show that the metasurface can break the conventional mass law of sound insulation by 30 dB in the low frequency regime (<1000 Hz), with an ultra-light areal mass density (<1.6 kg/m2) and an ultra-thin thickness (1000 times smaller than the operating wavelength). The underlying physical mechanism of such extraordinary sound insulation performance is attributed to the infinite effective dynamic mass density produced by the smart resonators. It is also demonstrated that the excellent sound insulation property can be conveniently tuned by simply adjusting the external circuits instead of modifying the structure of the metasurface.
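
    For reference, the normal-incidence mass law that the metasurface is reported to beat can be written as TL = 10*log10(1 + (pi*f*m/(rho*c))^2) for a limp panel of areal mass m; a short numerical sketch (frequencies and masses illustrative) shows why ordinary walls need a high areal mass at low frequencies.

      import numpy as np

      rho_c = 415.0                       # characteristic impedance of air (rayl)

      def mass_law_tl(f_hz, areal_mass_kg_m2):
          x = np.pi * f_hz * areal_mass_kg_m2 / rho_c
          return 10 * np.log10(1 + x**2)

      for m in (1.6, 10.0, 50.0):         # 1.6 kg/m2 matches the reported areal mass
          print("m = %5.1f kg/m2 -> mass-law TL at 500 Hz = %5.1f dB"
                % (m, mass_law_tl(500.0, m)))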

  20. Fast Reverse Propagation of Sound in the Living Cochlea

    PubMed Central

    He, Wenxuan; Fridberger, Anders; Porsov, Edward; Ren, Tianying

    2010-01-01

    Abstract The auditory sensory organ, the cochlea, not only detects but also generates sounds. Such sounds, otoacoustic emissions, are widely used for diagnosis of hearing disorders and to estimate cochlear nonlinearity. However, the fundamental question of how the otoacoustic emission exits the cochlea remains unanswered. In this study, emissions were provoked by two tones with a constant frequency ratio, and measured as vibrations at the basilar membrane and at the stapes, and as sound pressure in the ear canal. The propagation direction and delay of the emission were determined by measuring the phase difference between basilar membrane and stapes vibrations. These measurements show that cochlea-generated sound arrives at the stapes earlier than at the measured basilar membrane location. Data also show that basilar membrane vibration at the emission frequency is similar to that evoked by external tones. These results conflict with the backward-traveling-wave theory and suggest that at low and intermediate sound levels, the emission exits the cochlea predominantly through the cochlear fluids. PMID:20513393

  1. Investigation of the Statistics of Pure Tone Sound Power Injection from Low Frequency, Finite Sized Sources in a Reverberant Room

    NASA Technical Reports Server (NTRS)

    Smith, Wayne Farrior

    1973-01-01

    The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of the sound power output of low-frequency, pure-tone finite sources is always less than that predicted by point-source theory, and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an 8-inch loudspeaker and a 30-inch loudspeaker at eleven source positions. The resulting standard deviation of the sound power output of the smaller speaker is in excellent agreement with both the derived finite-source theory and existing point-source theory, if the theoretical data are adjusted to account for incomplete spatial averaging in the experiment. However, the standard deviation of the sound power output of the larger speaker is measurably lower than point-source theory indicates, but is in good agreement with the finite-source theory.

  2. Localizing nearby sound sources in a classroom: Binaural room impulse responses

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara G.; Kopco, Norbert; Martin, Tara J.

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.

  3. Localizing nearby sound sources in a classroom: binaural room impulse responses.

    PubMed

    Shinn-Cunningham, Barbara G; Kopco, Norbert; Martin, Tara J

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
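
    The abstract notes that the direct-sound interaural time difference can still be recovered from the measured BRIRs. A minimal sketch of one way to do this, windowing the first few milliseconds around the direct sound and cross-correlating the two ears; the BRIRs below are synthetic placeholders (a direct path plus one later reflection per ear), not the measured classroom responses.

      import numpy as np

      fs = 44100
      n = 4096
      brir_left = np.zeros(n)
      brir_right = np.zeros(n)
      brir_left[100] = 1.0                    # direct sound, left ear
      brir_right[100 + 20] = 0.9              # ~0.45 ms later at the right ear
      brir_left[900] = 0.4                    # a later reflection (excluded by the window)
      brir_right[950] = 0.4

      def direct_sound_itd(h_l, h_r, win_ms=2.0):
          """Cross-correlate only the windowed direct-sound portion of the BRIR pair."""
          start = min(np.argmax(np.abs(h_l)), np.argmax(np.abs(h_r)))
          w = int(win_ms * 1e-3 * fs)
          seg_l, seg_r = h_l[start:start + w], h_r[start:start + w]
          corr = np.correlate(seg_r, seg_l, mode="full")
          lag = np.argmax(corr) - (len(seg_l) - 1)
          return lag / fs

      print("direct-sound ITD = %.3f ms" % (direct_sound_itd(brir_left, brir_right) * 1e3))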

  4. Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air.

    PubMed

    Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban

    2018-01-01

    Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being "targeted." They did not respond when hearing another group member's cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals.

  5. Exploring positive hospital ward soundscape interventions.

    PubMed

    Mackrill, J; Jennings, P; Cain, R

    2014-11-01

    Sound is often considered a negative aspect of an environment that needs mitigating, particularly in hospitals. It is worthwhile, however, to consider how subjective responses to hospital sounds can be made more positive. The authors identified natural sound, steady-state sound and written sound source information as having the potential to do this. Listening evaluations were conducted with 24 participants who rated their emotional (Relaxation) and cognitive (Interest and Understanding) response to a variety of hospital ward soundscape clips across these three interventions. A repeated measures ANOVA revealed that the 'Relaxation' response was significantly affected (η² = 0.05, p = 0.001) by the interventions, with natural sound producing a 10.1% more positive response. Most interestingly, written sound source information produced a 4.7% positive change in response. The authors conclude that exploring different ways to improve the sounds of a hospital offers subjective benefits that move beyond sound level reduction. This is an area for future work to focus upon in an effort to achieve more positively experienced hospital soundscapes and environments. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. NPSNET: Aural cues for virtual world immersion

    NASA Astrophysics Data System (ADS)

    Dahl, Leif A.

    1992-09-01

    NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 protocol communication line to a serial-to-Musical Instrument Digital Interface (MIDI) converter. The MIDI converter, in turn, relays the sound byte to a sampler, an electronic recording and playback device. The sampler correlates the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by involving multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues, and the enhancement they provide in the virtual simulation environment of NPSNET.

  7. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  8. Pressure sound level measurements at an educational environment in Goiânia, Goiás, Brazil

    NASA Astrophysics Data System (ADS)

    Costa, J. J. L.; do Nascimento, E. O.; de Oliveira, L. N.; Caldas, L. V. E.

    2018-03-01

    In this work, 25 points located on the ground floor of the Federal Institute of Education, Science and Technology of Goiás (IFG), Campus Goiânia, were analyzed during the morning periods of two Saturdays. Sound pressure levels were measured in internal and external environments during routine activities, as part of environmental monitoring at the institution. The initial hypothesis was that an amusement park (Mutirama Park) was responsible for noise pollution at the institute, but the results showed sound pressure levels within the campus environment in accordance with the municipal legislation of Goiânia at all points.
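
    For context, a minimal sketch of the equivalent-continuous-level (Leq) computation that underlies such sound pressure level surveys. The pressure samples below are synthetic and unweighted, whereas a real survey would use a calibrated, A-weighted sound level meter.

      import numpy as np

      p_ref = 20e-6                       # reference pressure, 20 micropascal
      fs = 48000
      t = np.arange(0, 10, 1 / fs)        # 10 s measurement interval (illustrative)
      p = 0.02 * np.random.randn(t.size)  # broadband pressure signal in pascal (synthetic)

      # Leq: level of the steady sound with the same mean-square pressure.
      leq = 10 * np.log10(np.mean(p**2) / p_ref**2)
      print("Leq over the interval: %.1f dB" % leq)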

  9. Shock waves and the Ffowcs Williams-Hawkings equation

    NASA Technical Reports Server (NTRS)

    Isom, Morris P.; Yu, Yung H.

    1991-01-01

    The expansion of the double divergence of the generalized Lighthill stress tensor, which is the basis of the concept of the role played by shock and contact discontinuities as sources of dipole and monopole sound, is presently applied to the simplest transonic flows: (1) a fixed wing in steady motion, for which there is no sound field, and (2) a hovering helicopter blade that produces a sound field. Attention is given to the contribution of the shock to sound from the viewpoint of energy conservation; the shock emerges as the source of only the quantity of entropy.

  10. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
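
    A minimal sketch of the headphone-based simulation idea mentioned above: a dry (anechoic) signal is convolved with a left/right head-related impulse response pair to place it at a chosen direction. The HRIRs here are crude placeholders (a pure delay plus a level difference), not measured responses.

      import numpy as np

      fs = 44100
      t = np.arange(0, 1.0, 1 / fs)
      dry = np.sin(2 * np.pi * 440 * t) * np.hanning(t.size)   # dry test tone

      itd_samples = 30                        # ~0.68 ms, i.e., source toward the left
      hrir_left = np.zeros(256)
      hrir_right = np.zeros(256)
      hrir_left[0] = 1.0                      # earlier and louder at the left ear
      hrir_right[itd_samples] = 0.6           # later and quieter at the right ear

      binaural = np.stack([np.convolve(dry, hrir_left),
                           np.convolve(dry, hrir_right)], axis=1)
      print("binaural buffer shape (samples x 2 channels):", binaural.shape)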

  11. Active control of sound radiation from a vibrating rectangular panel by sound sources and vibration inputs - An experimental comparison

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.; Hansen, C. H.; Snyder, S. D.

    1991-01-01

    Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force tends not to significantly modify the total panel response amplitude but rather to restructure the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.

  12. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    NASA Astrophysics Data System (ADS)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of incorporated vortex-shedding resolved flow available from Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via the stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment for broadband and tonal acoustic noise sources at the source level, thus, accounting for linear source interference as well as possible non-linear source interaction effects. When sound sources are determined, for the sound propagation, Acoustic Perturbation Equations (APE-4) are solved in the time-domain. Results of the method's application for two aerofoil benchmark cases, with both sharp and blunt trailing edges are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for benchmark test cases using the new technique which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  13. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopole sources are considered first, either monochromatic or with narrow-band or wide-band frequency content. The source position is estimated accurately, with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.

  14. Different categories of living and non-living sound-sources activate distinct cortical networks

    PubMed Central

    Engel, Lauren R.; Frum, Chris; Puce, Aina; Walker, Nathan A.; Lewis, James W.

    2009-01-01

    With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places—categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left-lateralized fronto-parietal regions, bilateral insular cortices, and subcortical regions previously implicated in observation-execution matching, consistent with “embodied” and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception. PMID:19465134

  15. Analysis of sound absorption performance of an electroacoustic absorber using a vented enclosure

    NASA Astrophysics Data System (ADS)

    Cho, Youngeun; Wang, Semyung; Hyun, Jaeyub; Oh, Seungjae; Goo, Seongyeol

    2018-03-01

    The sound absorption performance of an electroacoustic absorber (EA) is primarily influenced by the dynamic characteristics of the loudspeaker that acts as the actuator of the EA system. Therefore, the sound absorption performance of the EA is maximum at the resonance frequency of the loudspeaker and tends to degrade in the frequency bands below and above this resonance frequency. In this study, to adjust the sound absorption performance of the EA system in the low-frequency band of approximately 20-80 Hz, an EA system using a vented enclosure, which has previously been used to enhance the radiated sound pressure of a loudspeaker in the low-frequency band, is proposed. To verify the usefulness of the proposed system, two acoustic environments are considered. In the first acoustic environment, the vent of the vented enclosure is connected to an external sound field that is distinct from the sound field coupled to the EA. In this case, the acoustic effect of the vented enclosure on the performance of the EA is analyzed through an analytical approach using dynamic equations and an impedance-based equivalent circuit, and then verified through numerical and experimental approaches. Next, in the second acoustic environment, the vent is connected to the same external sound field as the EA. In this case, the effect of the vented enclosure on the EA is investigated through an analytical approach and finally verified through a numerical approach. As a result, it is confirmed that the sound absorption performance of the proposed EA system with the vented enclosure differs between the two acoustic environments in the low-frequency band of approximately 20-80 Hz. Furthermore, several case studies on how the performance of the EA with the vented enclosure changes with the critical design factors or the number of vents are also presented. In the future, even if the proposed EA system using a vented enclosure is extended to the large number of array elements required for 3D sound field control, it is expected to be an attractive solution that can contribute to improved low-frequency noise reduction without causing economic or system complexity problems.

  16. Responses of the ear to low frequency sounds, infrasound and wind turbines

    PubMed Central

    Salt, Alec N.; Hullar, Timothy E.

    2010-01-01

    Infrasonic sounds are generated internally in the body (by respiration, heartbeat, coughing, etc) and by external sources, such as air conditioning systems, inside vehicles, some industrial processes and, now becoming increasingly prevalent, wind turbines. It is widely assumed that infrasound presented at an amplitude below what is audible has no influence on the ear. In this review, we consider possible ways that low frequency sounds, at levels that may or may not be heard, could influence the function of the ear. The inner ear has elaborate mechanisms to attenuate low frequency sound components before they are transmitted to the brain. The auditory portion of the ear, the cochlea, has two types of sensory cells, inner hair cells (IHC) and outer hair cells (OHC), of which the IHC are coupled to the afferent fibers that transmit “hearing” to the brain. The sensory stereocilia (“hairs”) on the IHC are “fluid coupled” to mechanical stimuli, so their responses depend on stimulus velocity and their sensitivity decreases as sound frequency is lowered. In contrast, the OHC are directly coupled to mechanical stimuli, so their input remains greater than for IHC at low frequencies. At very low frequencies the OHC are stimulated by sounds at levels below those that are heard. Although the hair cells in other sensory structures such as the saccule may be tuned to infrasonic frequencies, auditory stimulus coupling to these structures is inefficient so that they are unlikely to be influenced by airborne infrasound. Structures that are involved in endolymph volume regulation are also known to be influenced by infrasound, but their sensitivity is also thought to be low. There are, however, abnormal states in which the ear becomes hypersensitive to infrasound. In most cases, the inner ear’s responses to infrasound can be considered normal, but they could be associated with unfamiliar sensations or subtle changes in physiology. This raises the possibility that exposure to the infrasound component of wind turbine noise could influence the physiology of the ear. PMID:20561575

  17. Responses of the ear to low frequency sounds, infrasound and wind turbines.

    PubMed

    Salt, Alec N; Hullar, Timothy E

    2010-09-01

    Infrasonic sounds are generated internally in the body (by respiration, heartbeat, coughing, etc) and by external sources, such as air conditioning systems, inside vehicles, some industrial processes and, now becoming increasingly prevalent, wind turbines. It is widely assumed that infrasound presented at an amplitude below what is audible has no influence on the ear. In this review, we consider possible ways that low frequency sounds, at levels that may or may not be heard, could influence the function of the ear. The inner ear has elaborate mechanisms to attenuate low frequency sound components before they are transmitted to the brain. The auditory portion of the ear, the cochlea, has two types of sensory cells, inner hair cells (IHC) and outer hair cells (OHC), of which the IHC are coupled to the afferent fibers that transmit "hearing" to the brain. The sensory stereocilia ("hairs") on the IHC are "fluid coupled" to mechanical stimuli, so their responses depend on stimulus velocity and their sensitivity decreases as sound frequency is lowered. In contrast, the OHC are directly coupled to mechanical stimuli, so their input remains greater than for IHC at low frequencies. At very low frequencies the OHC are stimulated by sounds at levels below those that are heard. Although the hair cells in other sensory structures such as the saccule may be tuned to infrasonic frequencies, auditory stimulus coupling to these structures is inefficient so that they are unlikely to be influenced by airborne infrasound. Structures that are involved in endolymph volume regulation are also known to be influenced by infrasound, but their sensitivity is also thought to be low. There are, however, abnormal states in which the ear becomes hypersensitive to infrasound. In most cases, the inner ear's responses to infrasound can be considered normal, but they could be associated with unfamiliar sensations or subtle changes in physiology. This raises the possibility that exposure to the infrasound component of wind turbine noise could influence the physiology of the ear. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  18. The directivity of the sound radiation from panels and openings.

    PubMed

    Davy, John L

    2009-06-01

    This paper presents a method for calculating the directivity of the radiation of sound from a panel or opening, whose vibration is forced by the incidence of sound from the other side. The directivity of the radiation depends on the angular distribution of the incident sound energy in the room or duct in whose wall or end the panel or opening occurs. The angular distribution of the incident sound energy is predicted using a model which depends on the sound absorption coefficient of the room or duct surfaces. If the sound source is situated in the room or duct, the sound absorption coefficient model is used in conjunction with a model for the directivity of the sound source. For angles of radiation approaching 90 degrees to the normal to the panel or opening, the effect of the diffraction by the panel or opening, or by the finite baffle in which the panel or opening is mounted, is included. A simple empirical model is developed to predict the diffraction of sound into the shadow zone when the angle of radiation is greater than 90 degrees to the normal to the panel or opening. The method is compared with published experimental results.

  19. Outdoor concert hall sound design: idea and possible solutions

    NASA Astrophysics Data System (ADS)

    Kim, Yang-Hann; Lee, Jung-Min; Kim, Wanjung; Kim, Hwan; Choi, Jung-Woo; Wang, Semyung

    Sound design of outdoor concert halls needs to satisfy two contradictory objectives: good sound reproduction within the hall and minimal external sound radiation. An outdoor concert hall is usually an open space, so good sound for the listeners can be bad sound for the neighborhood. One approach is a virtual sound wall that reflects all sound, creating a relatively quiet zone outside. This is possible if an invisible but very high impedance mismatch can be produced around the hall for a selected frequency band, that is, if an acoustically bright zone can be generated inside and a dark (quiet) zone outside. Earlier work [Choi, J.-W. and Kim, Y.-H. (2002). J. Acoust. Soc. Am. 111, 1695-1700] shows that this is at least possible for a selected region and frequency band. Simulations show that it is possible for a two-dimensional case, and experimental verification has also been attempted. The discrepancies are explained in terms of the number of loudspeakers, their spatial distribution, and their spacing relative to the wavelength. The dependence of the performance on the size of the bright and dark zones, scaled by the wavelength of interest, is also explained.
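
    A minimal sketch in the spirit of the bright/dark-zone idea cited above: choose complex source strengths for a loudspeaker array that maximize the ratio of acoustic energy in a "bright" zone to that in a "dark" zone, which reduces to a generalized eigenvalue problem. The free-field monopole model, array geometry, zone layout, and frequency below are illustrative, not the configuration of any reported experiment.

      import numpy as np
      from scipy.linalg import eigh

      c, f = 343.0, 500.0
      k = 2 * np.pi * f / c
      rng = np.random.default_rng(1)

      speakers = np.column_stack([np.linspace(-2, 2, 16), np.zeros(16)])        # line array
      bright = np.column_stack([rng.uniform(-1, 1, 40), rng.uniform(3, 5, 40)])     # "inside"
      dark = np.column_stack([rng.uniform(-6, 6, 80), rng.uniform(8, 12, 80)])      # "outside"

      def transfer(points, sources):
          """Free-field monopole transfer matrix G, so that p(points) = G @ q."""
          r = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=2)
          return np.exp(-1j * k * r) / (4 * np.pi * r)

      Gb, Gd = transfer(bright, speakers), transfer(dark, speakers)
      A = Gb.conj().T @ Gb                                   # bright-zone energy matrix
      B = Gd.conj().T @ Gd + 1e-6 * np.eye(len(speakers))    # regularized dark-zone energy

      vals, vecs = eigh(A, B)                # generalized Hermitian eigenproblem
      q = vecs[:, -1]                        # source strengths with the largest contrast
      pb, pd = Gb @ q, Gd @ q
      contrast_db = 10 * np.log10(np.mean(np.abs(pb)**2) / np.mean(np.abs(pd)**2))
      print("bright/dark energy contrast: %.1f dB" % contrast_db)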

  20. A Tool for Low Noise Procedures Design and Community Noise Impact Assessment: The Rotorcraft Noise Model (RNM)

    NASA Technical Reports Server (NTRS)

    Conner, David A.; Page, Juliet A.

    2002-01-01

    To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations located on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model and the output results. Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used the RNM as a tool to aid in the development of low noise approach profiles.

  1. The Philosophy, Theoretical Bases, and Implementation of the AHAAH Model for Evaluation of Hazard from Exposure to Intense Sounds

    DTIC Science & Technology

    2018-04-01

    empirical, external energy-damage correlation methods for evaluating hearing damage risk associated with impulsive noise exposure. AHAAH applies the...is validated against the measured results of human exposures to impulsive sounds, and unlike wholly empirical correlation approaches, AHAAH’s...a measured level (LAEQ8 of 85 dB). The approach in MIL-STD-1474E is very different. Previous standards tried to find a correlation between some

  2. Spatial hearing in Cope’s gray treefrog: I. Open and closed loop experiments on sound localization in the presence and absence of noise

    PubMed Central

    Caldwell, Michael S.; Bee, Mark A.

    2014-01-01

    The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182

  3. Sex differences present in auditory looming perception, absent in auditory recession

    NASA Astrophysics Data System (ADS)

    Neuhoff, John G.; Seifritz, Erich

    2005-04-01

    When predicting the arrival time of an approaching sound source, listeners typically exhibit an anticipatory bias that affords a margin of safety in dealing with looming objects. The looming bias has been demonstrated behaviorally in the laboratory and in the field (Neuhoff 1998, 2001), neurally in fMRI studies (Seifritz et al., 2002), and comparatively in non-human primates (Ghazanfar, Neuhoff, and Logothetis, 2002). In the current work, male and female listeners were presented with three-dimensional looming sound sources and asked to press a button when the source was at the point of closest approach. Females exhibited a significantly greater anticipatory bias than males. Next, listeners were presented with sounds that either approached or receded and then stopped at three different terminal distances. Consistent with the time-to-arrival judgments, female terminal distance judgments for looming sources were significantly closer than male judgments. However, there was no difference between male and female terminal distance judgments for receding sounds. Taken together with the converging behavioral, neural, and comparative evidence, the current results illustrate the environmental salience of looming sounds and suggest that the anticipatory bias for auditory looming may have been shaped by evolution to provide a selective advantage in dealing with looming objects.

  4. Understanding auditory distance estimation by humpback whales: a computational approach.

    PubMed

    Mercado, E; Green, S R; Schneider, J N

    2008-02-01

    Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
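
    A minimal stand-in for the classification idea described above: a small multi-layer perceptron is trained to map frequency content to propagation distance. The paper used recorded sounds and both multi-layer and single-layer perceptrons; here the band-energy features are synthetic, with high bands attenuating faster with range, and the distance classes are invented.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      distances = np.array([1, 2, 4, 8])            # distance classes in km (illustrative)

      def fake_spectrum(d_km, n_bands=16):
          """Synthetic band energies: higher bands attenuate faster with range."""
          base = np.linspace(1.0, 0.2, n_bands)
          atten = np.exp(-np.linspace(0.02, 0.4, n_bands) * d_km)
          return base * atten + 0.02 * rng.random(n_bands)

      X = np.array([fake_spectrum(d) for d in np.repeat(distances, 200)])
      y = np.repeat(distances, 200)

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X, y)
      print("training accuracy: %.2f" % clf.score(X, y))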

  5. The stuff that dreams aren't made of: why wake-state and dream-state sensory experiences differ.

    PubMed

    Symons, D

    1993-06-01

    It is adaptive for individuals to be continuously alert and responsive to external stimuli (such as the sound and odor of an approaching predator or the cry of an infant), even during sleep. Natural selection thus has disfavored the occurrence during sleep of hallucinations that compromise external vigilance. In the great majority of mammalian species, including Homo sapiens, closed eyes and immobility are basic aspects of sleep. Therefore, (a) visual and movement sensory modalities (except kinesthesis) do not provide the sleeper with accurate information about the external environment or the sleeper's relationship to that environment; (b) the sleeper's forebrain "vigilance mechanism" does not monitor these modalities; hence (c) visual and movement hallucinations--similar or identical to percepts--can occur during sleep without compromising vigilance. In contrast, the other sensory modalities do provide the sleeper with a continuous flow of information about the external environment or the sleeper's relationship to that environment, and these modalities are monitored by the vigilance mechanism. Hallucinations of kinesthesis, pain, touch, warmth, cold, odor, and sound thus would compromise vigilance, and their occurrence during sleep has been disfavored by natural selection. This vigilance hypothesis generates novel predictions about dream phenomenology and REM-state neurophysiology and has implications for the general study of imagery.

  6. Why Do People Like Loud Sound? A Qualitative Study.

    PubMed

    Welch, David; Fremaux, Guy

    2017-08-11

    Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address.

  7. Community Response to Multiple Sound Sources: Integrating Acoustic and Contextual Approaches in the Analysis

    PubMed Central

    Lercher, Peter; De Coensel, Bert; Dekonink, Luc; Botteldooren, Dick

    2017-01-01

    Sufficient data document the prevalence of sound exposure from mixed traffic sources in many nations. Furthermore, consideration of the potential effects of combined sound exposure is required in legal procedures such as environmental health impact assessments. Nevertheless, current practice still uses single exposure-response functions. It is silently assumed that those standard exposure-response curves also accommodate mixed exposures, although some evidence from experimental and field studies casts doubt on this practice. The ALPNAP study population (N = 1641) includes sufficient subgroups with combinations of rail-highway, highway-main road and rail-highway-main road sound exposure. In this paper we apply several approaches suggested in the literature to investigate exposure-response curves and their major determinants in the case of exposure to multiple traffic sources. Highly annoyed/moderately annoyed and full-scale mean annoyance served as outcomes. The results show several limitations of the current approaches. Even facing the inherent methodological limitations (energy-equivalent summation of sound, rating of overall annoyance), the consideration of main contextual factors jointly occurring with the sources (such as vibration, air pollution) or coping activities and judgments of the wider area soundscape increases the variance explained from up to 8% (bivariate) to up to 15% (with base adjustments) and up to 55% (with the full contextual model). The added predictors vary significantly depending on the source combination (e.g., significant vibration effects with main road/railway, but not with highway). Although no significant interactions were found, the observed additive effects are of public health importance. Especially in the case of a three-source exposure situation, the overall annoyance is already high at lower levels and the contribution of the acoustic indicators is small compared with the non-acoustic and contextual predictors. Noise mapping needs to go down to levels of 40 dBA Lden to ensure the protection of quiet areas and to prevent the silent "filling up" of these areas with new sound sources. Eventually, to better predict annoyance in the exposure range between 40 and 60 dBA and to support the protection of quiet areas in urban and rural planning, sound indicators need to be oriented toward the noticeability of sound, and future studies and environmental impact assessments need to consider other traffic-related by-products (air quality, vibration, coping strain). PMID:28632198
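
    For reference, the energy-equivalent summation criticized above combines the partial levels of the individual sources as follows; the Lden values are illustrative, not data from the ALPNAP study.

      import numpy as np

      def combine_levels(levels_db):
          """Energy-equivalent sum: L_total = 10*log10(sum(10^(Li/10)))."""
          return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10)))

      rail, highway, main_road = 52.0, 55.0, 49.0     # illustrative Lden values (dBA)
      print("combined Lden: %.1f dBA" % combine_levels([rail, highway, main_road]))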

  8. A possible approach to optimization of parameters of sound-absorbing structures for multimode waveguides

    NASA Astrophysics Data System (ADS)

    Mironov, M. A.

    2011-11-01

    A method of allowing for the spatial sound field structure in designing the sound-absorbing structures for turbojet aircraft engine ducts is proposed. The acoustic impedance of a duct should be chosen so as to prevent the reflection of the primary sound field, which is generated by the sound source in the absence of the duct, from the duct walls.

  9. Quantifying the influence of flow asymmetries on glottal sound sources in speech

    NASA Astrophysics Data System (ADS)

    Erath, Byron; Plesniak, Michael

    2008-11-01

    Human speech is made possible by the air flow interaction with the vocal folds. During phonation, asymmetries in the glottal flow field may arise from flow phenomena (e.g. the Coanda effect) as well as from pathological vocal fold motion (e.g. unilateral paralysis). In this study, the effects of flow asymmetries on glottal sound sources were investigated. Dynamically-programmable 7.5 times life-size vocal fold models with 2 degrees-of-freedom (linear and rotational) were constructed to provide a first-order approximation of vocal fold motion. Important parameters (Reynolds, Strouhal, and Euler numbers) were scaled to physiological values. Normal and abnormal vocal fold motions were synthesized, and the velocity field and instantaneous transglottal pressure drop were measured. Variability in the glottal jet trajectory necessitated sorting of the data according to the resulting flow configuration. The dipole sound source is related to the transglottal pressure drop via acoustic analogies. Variations in the transglottal pressure drop (and subsequently the dipole sound source) arising from flow asymmetries are discussed.
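
    A minimal sketch of the kind of similarity scaling mentioned above: keeping the Reynolds and Strouhal numbers of a 7.5x scaled-up model equal to life-size values determines the model jet speed and driving frequency. The life-size glottal width, jet speed, and frequency below are nominal illustrative values, not parameters reported for the facility.

      # Dimensionless matching for a 7.5x scaled vocal fold model (air properties nominal).
      nu = 1.5e-5                                      # kinematic viscosity of air (m^2/s)
      D_life, U_life, f_life = 0.003, 30.0, 120.0      # glottal width (m), jet speed (m/s), f0 (Hz)
      scale = 7.5
      D_model = scale * D_life

      Re_life = U_life * D_life / nu                   # Reynolds number to preserve
      St_life = f_life * D_life / U_life               # Strouhal number to preserve

      U_model = Re_life * nu / D_model                 # same Re -> slower jet in the big model
      f_model = St_life * U_model / D_model            # same St -> much lower oscillation rate
      print("model: U = %.2f m/s, f = %.2f Hz (Re = %.0f, St = %.4f)"
            % (U_model, f_model, Re_life, St_life))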

  10. Psychophysical evidence for auditory motion parallax.

    PubMed

    Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz

    2018-04-17

    Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.

  11. Auditory event perception: the source-perception loop for posture in human gait.

    PubMed

    Pastore, Richard E; Flint, Jesse D; Gaston, Jeremy R; Solomon, Matthew J

    2008-01-01

    There is a small but growing literature on the perception of natural acoustic events, but few attempts have been made to investigate complex sounds not systematically controlled within a laboratory setting. The present study investigates listeners' ability to make judgments about the posture (upright-stooped) of the walker who generated acoustic stimuli contrasted on each trial. We use a comprehensive three-stage approach to event perception, in which we develop a solid understanding of the source event and its sound properties, as well as the relationships between these two event stages. Developing this understanding helps both to identify the limitations of common statistical procedures and to develop effective new procedures for investigating not only the two information stages above, but also the decision strategies employed by listeners in making source judgments from sound. The result is a comprehensive, ultimately logical, but not necessarily expected picture of both the source-sound-perception loop and the utility of alternative research tools.

  12. Does External Knowledge Sourcing Enhance Market Performance? Evidence from the Korean Manufacturing Industry

    PubMed Central

    Lee, Kibaek; Yoo, Jaeheung; Choi, Munkee; Zo, Hangjung; Ciganek, Andrew P.

    2016-01-01

    Firms continuously search for external knowledge that can contribute to product innovation, which may ultimately increase market performance. The relationship between external knowledge sourcing and market performance is not well-documented. The extant literature primarily examines the causal relationship between external knowledge sources and product innovation performance or identifies factors that moderate the relationship between external knowledge sourcing and product innovation. Non-technological innovations, such as organization and marketing innovations, intervene in the process from external knowledge sourcing to product innovation to market performance but have not been extensively examined. This study addresses two research questions: does external knowledge sourcing lead to market performance, and how does external knowledge sourcing interact with a firm’s different innovation activities to enhance market performance? This study proposes a comprehensive model to capture the causal mechanism from external knowledge sourcing to market performance. The research model was tested using survey data from manufacturing firms in South Korea and the results demonstrate a strong statistical relationship in the path of external knowledge sourcing (EKS) to product innovation performance (PIP) to market performance (MP). Organizational innovation is an antecedent to EKS while marketing innovation is a consequence of EKS, which significantly influences PIP and MP. The results imply that any potential EKS effort should also consider organizational innovations which may ultimately enhance market performance. Theoretical and practical implications are discussed as well as concluding remarks. PMID:28006022

  13. Does External Knowledge Sourcing Enhance Market Performance? Evidence from the Korean Manufacturing Industry.

    PubMed

    Lee, Kibaek; Yoo, Jaeheung; Choi, Munkee; Zo, Hangjung; Ciganek, Andrew P

    2016-01-01

    Firms continuously search for external knowledge that can contribute to product innovation, which may ultimately increase market performance. The relationship between external knowledge sourcing and market performance is not well-documented. The extant literature primarily examines the causal relationship between external knowledge sources and product innovation performance or identifies factors that moderate the relationship between external knowledge sourcing and product innovation. Non-technological innovations, such as organization and marketing innovations, intervene in the process from external knowledge sourcing to product innovation to market performance but have not been extensively examined. This study addresses two research questions: does external knowledge sourcing lead to market performance, and how does external knowledge sourcing interact with a firm's different innovation activities to enhance market performance? This study proposes a comprehensive model to capture the causal mechanism from external knowledge sourcing to market performance. The research model was tested using survey data from manufacturing firms in South Korea and the results demonstrate a strong statistical relationship in the path of external knowledge sourcing (EKS) to product innovation performance (PIP) to market performance (MP). Organizational innovation is an antecedent to EKS while marketing innovation is a consequence of EKS, which significantly influences PIP and MP. The results imply that any potential EKS effort should also consider organizational innovations which may ultimately enhance market performance. Theoretical and practical implications are discussed as well as concluding remarks.

  14. Nonlinear theory of shocked sound propagation in a nearly choked duct flow

    NASA Technical Reports Server (NTRS)

    Myers, M. K.; Callegari, A. J.

    1982-01-01

    The development of shocks in the sound field propagating through a nearly choked duct flow is analyzed by extending a quasi-one dimensional theory. The theory is applied to the case in which sound is introduced into the flow by an acoustic source located in the vicinity of a near-sonic throat. Analytical solutions for the field are obtained which illustrate the essential features of the nonlinear interaction between sound and flow. Numerical results are presented covering ranges of variation of source strength, throat Mach number, and frequency. It is found that the development of shocks leads to appreciable attenuation of acoustic power transmitted upstream through the near-sonic flow. It is possible, for example, that the power loss in the fundamental harmonic can be as much as 90% of that introduced at the source.

  15. Noise abatement in a pine plantation

    Treesearch

    R. E. Leonard; L. P. Herrington

    1971-01-01

    Observations on sound propagation were made in two red pine plantations. Measurements were taken of the attenuation of prerecorded frequencies at various distances from the sound source. Sound absorption was strongly dependent on frequency, with peak absorption at 500 Hz.

  16. Hearing in three dimensions: Sound localization

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1990-01-01

    The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of audio. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and the direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
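
    The interaural time-difference cue discussed above is often approximated, to first order, with a rigid spherical-head (Woodworth) model. A sketch under that assumption, with nominal head radius and sound speed (both values are illustrative):

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (s) for a rigid spherical head
    (Woodworth model); head radius and sound speed are nominal values."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

print(round(itd_woodworth(90.0) * 1e6))  # ~656 microseconds for a source at the side
```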

  17. Using sounds for making decisions: greater tube-nosed bats prefer antagonistic calls over non-communicative sounds when feeding

    PubMed Central

    Jiang, Tinglei; Long, Zhenyu; Ran, Xin; Zhao, Xue; Xu, Fei; Qiu, Fuyuan; Kanwal, Jagmeet S.

    2016-01-01

    ABSTRACT Bats vocalize extensively within different social contexts. The type and extent of information conveyed via their vocalizations and their perceptual significance, however, remains controversial and difficult to assess. Greater tube-nosed bats, Murina leucogaster, emit calls consisting of long rectangular broadband noise burst (rBNBl) syllables during aggression between males. To experimentally test the behavioral impact of these sounds for feeding, we deployed an approach and place-preference paradigm. Two food trays were placed on opposite sides and within different acoustic microenvironments, created by sound playback, within a specially constructed tent. Specifically, we tested whether the presence of rBNBl sounds at a food source effectively deters the approach of male bats in comparison to echolocation sounds and white noise. In each case, contrary to our expectation, males preferred to feed at a location where rBNBl sounds were present. We propose that the species-specific rBNBl provides contextual information, not present within non-communicative sounds, to facilitate approach towards a food source. PMID:27815241

  18. What the Toadfish Ear Tells the Toadfish Brain About Sound.

    PubMed

    Edds-Walton, Peggy L

    2016-01-01

    Of the three paired otolithic endorgans in the ear of teleost fishes, the saccule is the one most often demonstrated to have a major role in encoding frequencies of biologically relevant sounds. The toadfish saccule also encodes sound level and sound source direction in the phase-locked activity conveyed via auditory afferents to nuclei of the ipsilateral octaval column in the medulla. Although paired auditory receptors are present in teleost fishes, binaural processes were believed to be unimportant due to the speed of sound in water and the acoustic transparency of the tissues in water. In contrast, there are behavioral and anatomical data that support binaural processing in fishes. Studies in the toadfish combined anatomical tract-tracing and physiological recordings from identified sites along the ascending auditory pathway to document response characteristics at each level. Binaural computations in the medulla and midbrain sharpen the directional information provided by the saccule. Furthermore, physiological studies in the central nervous system indicated that encoding frequency, sound level, temporal pattern, and sound source direction are important components of what the toadfish ear tells the toadfish brain about sound.

  19. Replacing the Orchestra? – The Discernibility of Sample Library and Live Orchestra Sounds

    PubMed Central

    Wolf, Anna; Platz, Friedrich; Mons, Jan

    2016-01-01

    Recently, musical sounds from pre-recorded orchestra sample libraries (OSL) have become indispensable in music production for the stage or popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or OSLs. Thus, an internet-based experiment was conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. It could be shown that the entire sample of listeners (N = 602) on average identified the correct sound source at 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing, simulated performance. However, while sound experts tended to correctly identify the sound source, participants with lower listening expertise, who resembled the majority of music consumers, only achieved 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons. PMID:27382932

  20. The Coast Artillery Journal. Volume 65, Number 4, October 1926

    DTIC Science & Technology

    1926-10-01

    sound. a. Sound location of airplanes by binaural observation in all antiaircraft regiments. b. Sound ranging on report of enemy guns, together with... Direction finding by binaural observation. [Subparagraphs 30 a and 30 c (1).] This applies to continuous sounds such as propeller noises. b. Point... impacts. 32. The so-called binaural sense is our means of sensing the direction of a sound source. When we hear a sound we judge the approximate

  1. Object localization using a biosonar beam: how opening your mouth improves localization.

    PubMed

    Arditi, G; Weiss, A J; Yovel, Y

    2015-08-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.

  2. Object localization using a biosonar beam: how opening your mouth improves localization

    PubMed Central

    Arditi, G.; Weiss, A. J.; Yovel, Y.

    2015-01-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions. PMID:26361552

  3. Hemispherical breathing mode speaker using a dielectric elastomer actuator.

    PubMed

    Hosoya, Naoki; Baba, Shun; Maeda, Shingo

    2015-10-01

    Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple loudspeakers on a chassis, is typically used. However, such a configuration is not a point sound source if the size of the loudspeaker is large relative to the target sound field. This study investigates a small lightweight loudspeaker using a dielectric elastomer actuator vibrating in the breathing mode (the pulsating mode such as the expansion and contraction of a balloon). Acoustic testing with regard to repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicates that dielectric elastomer loudspeakers may be feasible.

  4. A stepped-plate bi-frequency source for generating a difference frequency sound with a parametric array.

    PubMed

    Je, Yub; Lee, Haksue; Park, Jongkyu; Moon, Wonkyu

    2010-06-01

    An ultrasonic radiator is developed to generate a difference frequency sound from two frequencies of ultrasound in air with a parametric array. A design method is proposed for an ultrasonic radiator capable of generating highly directive, high-amplitude ultrasonic sound beams at two different frequencies in air based on a modification of the stepped-plate ultrasonic radiator. The stepped-plate ultrasonic radiator was introduced by Gallego-Juarez et al. [Ultrasonics 16, 267-271 (1978)] in their previous study and can effectively generate highly directive, large-amplitude ultrasonic sounds in air, but only at a single frequency. Because parametric array sources must be able to generate sounds at more than one frequency, a design modification is crucial to the application of a stepped-plate ultrasonic radiator as a parametric array source in air. The aforementioned method was employed to design a parametric radiator for use in air. A prototype of this design was constructed and tested to determine whether it could successfully generate a difference frequency sound with a parametric array. The results confirmed that the proposed single small-area transducer was suitable as a parametric radiator in air.
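
    The difference-frequency generation that a parametric array relies on can be illustrated numerically: squaring the sum of two primary ultrasonic tones (a crude stand-in for the quadratic nonlinearity of air, not the paper's transducer model) produces an audible component at the difference frequency. The sampling rate and tone frequencies below are arbitrary choices:

```python
import numpy as np

fs = 192000
t = np.arange(0, 0.05, 1 / fs)
f1, f2 = 40000.0, 42000.0                      # primary ultrasonic tones
primary = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Quadratic nonlinearity creates sum, difference and harmonic components;
# only the 2 kHz difference frequency falls in the audible band.
spectrum = np.abs(np.fft.rfft(primary ** 2))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = (freqs > 100) & (freqs < 20000)
print(freqs[band][np.argmax(spectrum[band])])  # ~2000 Hz difference frequency
```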

  5. Technical basis for external dosimetry at the Waste Isolation Pilot Plant (WIPP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, E.W.; Wu, C.F.; Goff, T.E.

    1993-12-31

    The WIPP External Dosimetry Program, administered by Westinghouse Electric Corporation, Waste Isolation Division, for the US Department of Energy (DOE), provides external dosimetry support services for operations at the Waste Isolation Pilot Plant (WIPP) Site. These operations include the receipt, experimentation with, storage, and disposal of transuranic (TRU) wastes. This document describes the technical basis for the WIPP External Radiation Dosimetry Program. The purposes of this document are to: (1) provide assurance that the WIPP External Radiation Dosimetry Program is in compliance with all regulatory requirements, (2) provide assurance that the WIPP External Radiation Dosimetry Program is derived from a sound technical base, (3) serve as a technical reference for radiation protection personnel, and (4) aid in identifying and planning for future needs. The external radiation exposure fields are those that are documented in the WIPP Final Safety Analysis Report.

  6. An open access database for the evaluation of heart sound algorithms.

    PubMed

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have potential value for detecting pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand-corrected annotations for different heart sound states, the scoring mechanism, and associated open source code is provided. In addition, several potential benefits from the public heart sound database are discussed.
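
    As a rough illustration of the kind of pre-processing commonly applied to such phonocardiogram recordings before segmentation (this is not the Challenge reference code, and the file name is a placeholder):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, hilbert

# Illustrative pre-processing of one PCG recording; file name is hypothetical.
fs, pcg = wavfile.read("a0001.wav")
pcg = pcg.astype(float)
pcg /= np.max(np.abs(pcg))

# Band-pass 25-400 Hz, roughly where S1/S2 heart sound energy lies
b, a = butter(4, [25 / (fs / 2), 400 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, pcg)

# Hilbert envelope, a common input feature for heart sound segmentation
envelope = np.abs(hilbert(filtered))
```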

  7. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex.

    PubMed

    Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C

    2006-03-20

    In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.

  8. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    PubMed

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect displacement amplitudes on the order of the size of an atom and to locate acoustic stimuli to within about 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization error than that observed in nature.
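
    For comparison, a conventional (non-spiking) baseline for the same task estimates pairwise time differences of arrival by cross-correlating microphone channels; the sketch below assumes two equally long, synchronously sampled channels:

```python
import numpy as np

def tdoa_seconds(sig_a, sig_b, fs):
    """Time difference of arrival between two microphone channels estimated by
    cross-correlation (a conventional baseline, not the paper's spiking model)."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    return lag_samples / fs

# Pairwise delays across the eight channels can then be fed to a geometric solver.
```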

  9. Captive Bottlenose Dolphins Do Discriminate Human-Made Sounds Both Underwater and in the Air

    PubMed Central

    Lima, Alice; Sébilleau, Mélissa; Boye, Martin; Durand, Candice; Hausberger, Martine; Lemasson, Alban

    2018-01-01

    Bottlenose dolphins (Tursiops truncatus) spontaneously emit individual acoustic signals that identify them to group members. We tested whether these cetaceans could learn artificial individual sound cues played underwater and whether they would generalize this learning to airborne sounds. Dolphins are thought to perceive only underwater sounds and their training depends largely on visual signals. We investigated the behavioral responses of seven dolphins in a group to learned human-made individual sound cues, played underwater and in the air. Dolphins recognized their own sound cue after hearing it underwater as they immediately moved toward the source, whereas when it was airborne they gazed more at the source of their own sound cue but did not approach it. We hypothesize that they perhaps detected modifications of the sound induced by air or were confused by the novelty of the situation, but nevertheless recognized they were being “targeted.” They did not respond when hearing another group member’s cue in either situation. This study provides further evidence that dolphins respond to individual-specific sounds and that these marine mammals possess some capacity for processing airborne acoustic signals. PMID:29445350

  10. The influence of crowd density on the sound environment of commercial pedestrian streets.

    PubMed

    Meng, Qi; Kang, Jian

    2015-04-01

    Commercial pedestrian streets are very common in China and Europe, with many situated in historic or cultural centres. The environments of these streets are important, including their sound environments. The objective of this study is to explore the relationships between the crowd density and the sound environments of commercial pedestrian streets. On-site measurements were performed at the case study site in Harbin, China, and a questionnaire was administered. The sound pressure measurements showed that the crowd density has an insignificant effect on sound pressure below 0.05 persons/m2, whereas when the crowd density is greater than 0.05 persons/m2, the sound pressure increases with crowd density. The sound sources were analysed, showing that several typical sound sources, such as traffic noise, can be masked by the sounds resulting from dense crowds. The acoustic analysis showed that crowd densities outside the range of 0.10 to 0.25 persons/m2 exhibited lower acoustic comfort evaluation scores. In terms of audiovisual characteristics, the subjective loudness increases with greater crowd density, while the acoustic comfort decreases. The results for an indoor underground shopping street are also presented for comparison. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Soundscapes and the sense of hearing of fishes.

    PubMed

    Fay, Richard

    2009-03-01

    Underwater soundscapes have probably played an important role in the adaptation of ears and auditory systems of fishes throughout evolutionary time, and for all species. These sounds probably contain important information about the environment and about most objects and events that confront the receiving fish so that appropriate behavior is possible. For example, the sounds from reefs appear to be used by at least some fishes for their orientation and migration. These sorts of environmental sounds should be considered much like "acoustic daylight" that continuously bathes all environments and contains information that all organisms can potentially use to form a sort of image of the environment. At present, however, we are generally ignorant of the nature of the ambient sound fields impinging on fishes, and of the adaptive value of processing these fields to resolve the multiple sources of sound. Our field has focused almost exclusively on the adaptive value of processing species-specific communication sounds, and has not considered the informational value of ambient "noise." Since all fishes can detect and process acoustic particle motion, including the directional characteristics of this motion, underwater sound fields are potentially more complex and information-rich than terrestrial acoustic environments. The capacities of one fish species (goldfish) to receive and make use of such sound source information have been demonstrated (sound source segregation and auditory scene analysis), and it is suggested that all vertebrate species have this capacity. A call is made to better understand underwater soundscapes, and the associated behaviors they determine in fishes. © 2009 ISZS, Blackwell Publishing and IOZ/CAS.

  12. Possibilities of psychoacoustics to determine sound quality

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

    For some years, acoustic engineers have become increasingly aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with regard to designing sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation of subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratories by means of listening tests in the free field with a single sound source and simple signals. Therefore, the results achieved cannot easily be transferred to complex sound situations with several spatially distributed sound sources. Due to the directionality and selectivity of human hearing, individual sound events can be selected from among many. Already in the late seventies a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System, which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals using both physical and psychoacoustic procedures. Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domains so that the signal components responsible for noise annoyance can be identified. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine the sound quality, acoustic comfort, or annoyance of sound events.
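
    The A-weighted level that the abstract contrasts with psychoacoustic metrics is obtained with the standard IEC 61672 weighting curve; a minimal sketch of that weighting evaluated at a single frequency:

```python
import math

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB at frequency f (Hz)."""
    ra = (12194.0 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000.0), 2))  # ~0.0 dB at 1 kHz by definition
```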

  13. Hydrodynamic ion sound instability in systems of a finite length

    NASA Astrophysics Data System (ADS)

    Koshkarov, O.; Chapurin, O.; Smolyakov, A.; Kaganovich, I.; Ilgisonis, V.

    2016-09-01

    A plasma permeated by an energetic ion beam is prone to the kinetic ion-sound instability that occurs as a result of the inverse Landau damping for the ion velocity. It is shown here that in a finite-length system there exists another type of ion-sound instability which occurs for v02

  14. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants.

    PubMed

    Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H

    2016-08-01

    To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low frequency, acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing, localization, and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.

  15. Spherical harmonic analysis of the sound radiation from omnidirectional loudspeaker arrays

    NASA Astrophysics Data System (ADS)

    Pasqual, A. M.

    2014-09-01

    Omnidirectional sound sources are widely used in room acoustics. These devices are made up of loudspeakers mounted on a spherical or polyhedral cabinet, where the dodecahedral shape prevails. Although such electroacoustic sources have been made readily available to acousticians by many manufacturers, an in-depth investigation of their vibroacoustic behavior has not yet been provided. To fill this gap, this paper presents a theoretical study of the sound radiation from omnidirectional loudspeaker arrays, which is carried out by using a mathematical model based on spherical harmonic analysis. Eight different loudspeaker arrangements on the sphere are considered: the well-known five Platonic solid layouts and three extremal system layouts. The latter possess useful properties for spherical loudspeaker arrays used as directivity-controlled sound sources, so these layouts are included here in order to investigate whether or not they could be of interest as omnidirectional sources as well. It is shown through a comparative analysis that the dodecahedral array leads to the lowest error in producing an omnidirectional sound field and to the highest acoustic power, which corroborates the prevalence of such a layout. In addition, if a source with fewer than 12 loudspeakers is required, it is shown that tetrahedra or hexahedra can be used alternatively, whereas the extremal system layouts are not interesting choices for omnidirectional loudspeaker arrays.
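
    A sketch of the kind of spherical-harmonic evaluation such an analysis rests on, using placeholder expansion coefficients rather than the paper's data (note scipy's argument convention: azimuth first, then colatitude):

```python
import numpy as np
from scipy.special import sph_harm

# Evaluate a low-order spherical-harmonic expansion of the radiated pressure
# on a ring of directions; the coefficients are placeholders.
order = 2
azimuth = np.linspace(0.0, 2.0 * np.pi, 36)       # scipy's "theta"
colatitude = np.full_like(azimuth, np.pi / 2.0)   # scipy's "phi" (equator)

coeffs = {(n, m): (1.0 if (n, m) == (0, 0) else 0.1)
          for n in range(order + 1) for m in range(-n, n + 1)}

pressure = sum(c * sph_harm(m, n, azimuth, colatitude)
               for (n, m), c in coeffs.items())
# A purely omnidirectional source keeps only the (0, 0) monopole term.
```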

  16. The use of an active controlled enclosure to attenuate sound radiation from a heavy radiator

    NASA Astrophysics Data System (ADS)

    Sun, Yao; Yang, Tiejun; Zhu, Minggang; Pan, Jie

    2017-03-01

    Active structural acoustic control usually experiences difficulty in the control of heavy sources or sources to which direct application of control forces is not practical. To overcome this difficulty, an actively controlled enclosure, which forms a cavity with both flexible and open boundaries, is employed. This configuration permits indirect implementation of active control in which the control inputs can be applied to subsidiary structures other than the sources. To determine the control effectiveness of the configuration, the vibro-acoustic behavior of the system, which consists of a top plate with an opening, a sound cavity and a source panel, is investigated in this paper. A complete mathematical model of the system is formulated using modified Fourier series formulations, and the governing equations are solved using the Rayleigh-Ritz method. The coupling mechanisms of a partly opened cavity and a plate are analysed in terms of modal responses and directivity patterns. Furthermore, to attenuate the sound power radiated from both the top panel and the opening, two strategies are studied: minimizing the total radiated power and cancelling the volume velocity. Moreover, three control configurations are compared: using a point force on the control panel (structural control), using a sound source in the cavity (acoustical control), and applying hybrid structural-acoustical control. In addition, the effects of the boundary conditions of the control panel on the sound radiation and control performance are discussed.

  17. Material sound source localization through headphones

    NASA Astrophysics Data System (ADS)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds produced by striking objects made of different materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. The Delta sound (click), on the other hand, is generated using the Adobe Audition software at a sampling rate of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
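
    The binaural rendering step described above, convolving a mono recording with a left/right head-related impulse response pair, can be sketched as follows (the HRIRs are assumed to be equally long arrays loaded elsewhere):

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono, hrir_left, hrir_right):
    """Convolve a mono sound with a measured left/right head-related impulse
    response pair to produce a two-channel headphone (binaural) signal."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=-1)
```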

  18. Aeroacoustic analysis of the human phonation process based on a hybrid acoustic PIV approach

    NASA Astrophysics Data System (ADS)

    Lodermeyer, Alexander; Tautz, Matthias; Becker, Stefan; Döllinger, Michael; Birk, Veronika; Kniesburges, Stefan

    2018-01-01

    The detailed analysis of sound generation in human phonation is severely limited as the accessibility to the laryngeal flow region is highly restricted. Consequently, the physical basis of the underlying fluid-structure-acoustic interaction that describes the primary mechanism of sound production is not yet fully understood. Therefore, we propose the implementation of a hybrid acoustic PIV procedure to evaluate aeroacoustic sound generation during voice production within a synthetic larynx model. Focusing on the flow field downstream of synthetic, aerodynamically driven vocal folds, we calculated acoustic source terms based on the velocity fields obtained by time-resolved high-speed PIV applied to the mid-coronal plane. The radiation of these sources into the acoustic far field was numerically simulated and the resulting acoustic pressure was finally compared with experimental microphone measurements. We identified the tonal sound to be generated downstream in a small region close to the vocal folds. The simulation of the sound propagation underestimated the tonal components, whereas the broadband sound was well reproduced. Our results demonstrate the feasibility to locate aeroacoustic sound sources inside a synthetic larynx using a hybrid acoustic PIV approach. Although the technique employs a 2D-limited flow field, it accurately reproduces the basic characteristics of the aeroacoustic field in our larynx model. In future studies, not only the aeroacoustic mechanisms of normal phonation will be assessable, but also the sound generation of voice disorders can be investigated more profoundly.

  19. Selective attention to sound location or pitch studied with event-related brain potentials and magnetic fields.

    PubMed

    Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo

    2008-06-01

    Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.

  20. Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts

    ERIC Educational Resources Information Center

    Delalande, Francois; Cornara, Silvia

    2010-01-01

    One of the first forms of musical conduct is the exploration of sound sources. When young children produce sounds with any object, these sounds may surprise them, and so they make the sounds again--not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…

  1. Monitoring the Ocean Using High Frequency Ambient Sound

    DTIC Science & Technology

    2008-10-01

    even identify specific groups within the resident killer whale type (Puget Sound Southern Resident pods J, K and L) because these groups have... particular, the different populations of killer whales in the NE Pacific Ocean. This has been accomplished by detecting transient sounds with short... high sea state (the sound of spray), general shipping - close and distant, clanking and whale calls and clicking. These sound sources form the basis

  2. Modeling sound transmission through the pulmonary system and chest with application to diagnosis of a collapsed lung

    NASA Astrophysics Data System (ADS)

    Royston, T. J.; Zhang, X.; Mansy, H. A.; Sandler, R. H.

    2002-04-01

    A theoretical and experimental study was undertaken to examine the feasibility of using audible-frequency vibro-acoustic waves for diagnosis of pneumothorax, a collapsed lung. The hypothesis was that the acoustic response of the chest to external excitation would change with this condition. In experimental canine studies, external acoustic energy was introduced into the trachea via an endotracheal tube. For the control (nonpneumothorax) state, it is hypothesized that sound waves primarily travel through the airways, couple to the lung parenchyma, and then are transmitted directly to the chest wall. In contradistinction, when a pneumothorax is present the intervening air presents an added barrier to efficient acoustic energy transfer. Theoretical models of sound transmission through the pulmonary system and chest region to the chest wall surface are developed to more clearly understand the mechanisms of intensity loss when a pneumothorax is present, relative to a baseline case. These models predict significant decreases in acoustic transmission strength when a pneumothorax is present, in qualitative agreement with experimental measurements. Development of the models, their extension via finite element analysis, and comparisons with experimental canine studies are reviewed.

  3. Meteorological effects on long-range outdoor sound propagation

    NASA Technical Reports Server (NTRS)

    Klug, Helmut

    1990-01-01

    Measurements of sound propagation over distances up to 1000 m were carried out with an impulse sound source offering reproducible, short time signals. Temperature and wind speed at several heights were monitored simultaneously; the meteorological data are used to determine the sound speed gradients according to the Monin-Obukhov similarity theory. The sound speed profile is compared to a corresponding prediction, gained through the measured travel time difference between direct and ground reflected pulse (which depends on the sound speed gradient). Positive sound speed gradients cause bending of the sound rays towards the ground yielding enhanced sound pressure levels. The measured meteorological effects on sound propagation are discussed and illustrated by ray tracing methods.
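
    The refraction effect described above is commonly summarised through an effective sound speed that adds the along-path wind component to the temperature-dependent adiabatic sound speed; a minimal sketch (the Monin-Obukhov profile fitting used in the study is not reproduced here):

```python
import math

def effective_sound_speed(temperature_c, wind_speed, wind_dir_deg, prop_dir_deg):
    """Effective sound speed (m/s): adiabatic speed from air temperature plus
    the wind component along the propagation direction."""
    c = 331.3 * math.sqrt(1.0 + temperature_c / 273.15)
    along_wind = wind_speed * math.cos(math.radians(wind_dir_deg - prop_dir_deg))
    return c + along_wind

# A positive vertical gradient of this quantity bends rays toward the ground.
print(round(effective_sound_speed(10.0, 5.0, 0.0, 0.0), 1))  # ~342.5 m/s
```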

  4. Sound transmission in porcine thorax through airway insonification.

    PubMed

    Peng, Ying; Dai, Zoujun; Mansy, Hansen A; Henry, Brian M; Sandler, Richard H; Balk, Robert A; Royston, Thomas J

    2016-04-01

    Many pulmonary injuries and pathologies may lead to structural and functional changes in the lungs resulting in measurable sound transmission changes on the chest surface. Additionally, noninvasive imaging of externally driven mechanical wave motion in the chest (e.g., using magnetic resonance elastography) can provide information about lung structural property changes and, hence, may be of diagnostic value. In the present study, a comprehensive computational simulation (in silico) model was developed to simulate sound wave propagation in the airways, lung, and chest wall under normal and pneumothorax conditions. Experiments were carried out to validate the model. Here, sound waves with frequency content from 50 to 700 Hz were introduced into airways of five porcine subjects via an endotracheal tube, and transmitted waves were measured by scanning laser Doppler vibrometry at the chest wall surface. The computational model predictions of decreased sound transmission with pneumothorax were consistent with experimental measurements. The in silico model can also be used to visualize wave propagation inside and on the chest wall surface for other pulmonary pathologies, which may help in developing and interpreting diagnostic procedures that utilize sound and vibration.

  5. Sound transmission in porcine thorax through airway insonification

    PubMed Central

    Dai, Zoujun; Mansy, Hansen A.; Henry, Brian M.; Sandler, Richard H.; Balk, Robert A.; Royston, Thomas J.

    2015-01-01

    Many pulmonary injuries and pathologies may lead to structural and functional changes in the lungs resulting in measurable sound transmission changes on the chest surface. Additionally, noninvasive imaging of externally driven mechanical wave motion in the chest (e.g., using magnetic resonance elastography) can provide information about lung structural property changes and, hence, may be of diagnostic value. In the present study, a comprehensive computational simulation (in silico) model was developed to simulate sound wave propagation in the airways, lung, and chest wall under normal and pneumothorax conditions. Experiments were carried out to validate the model. Here, sound waves with frequency content from 50 to 700 Hz were introduced into airways of five porcine subjects via an endotracheal tube, and transmitted waves were measured by scanning laser Doppler vibrometry at the chest wall surface. The computational model predictions of decreased sound transmission with pneumothorax were consistent with experimental measurements. The in silico model can also be used to visualize wave propagation inside and on the chest wall surface for other pulmonary pathologies, which may help in developing and interpreting diagnostic procedures that utilize sound and vibration. PMID:26280512

  6. Temporal Organization of Sound Information in Auditory Memory.

    PubMed

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transfer study, combining a previously developed unsupervised white noise memory paradigm with a reversed-sound manipulation method. Specifically, in seven experiments we systematically measured the memory transfer from a random white noise sound to its locally temporally reversed versions on various temporal scales. We demonstrate a U-shaped memory-transfer pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulation temporal scale can account for the memory-transfer results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured in discrete temporal chunks in long-term auditory memory representation.
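
    A sketch of the local temporal reversal manipulation described above, applied to white noise at a chosen chunk length (the sampling rate and chunk duration are illustrative):

```python
import numpy as np

def locally_reverse(signal, fs, chunk_ms):
    """Reverse a signal within consecutive chunks of length chunk_ms,
    mimicking the local temporal reversal manipulation described above."""
    n = int(round(fs * chunk_ms / 1000.0))
    chunks = [signal[i:i + n][::-1] for i in range(0, len(signal), n)]
    return np.concatenate(chunks)

noise = np.random.randn(44100)                 # 1 s of white noise at 44.1 kHz
reversed_200ms = locally_reverse(noise, 44100, 200)
```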

  7. The Problems with "Noise Numbers" for Wind Farm Noise Assessment

    ERIC Educational Resources Information Center

    Thorne, Bob

    2011-01-01

    Human perception responds primarily to sound character rather than sound level. Wind farms are unique sound sources and exhibit special audible and inaudible characteristics that can be described as modulating sound or as a tonal complex. Wind farm compliance measures based on a specified noise number alone will fail to address problems with noise…

  8. Ion source with external RF antenna

    DOEpatents

    Leung, Ka-Ngo; Ji, Qing; Wilde, Stephen

    2005-12-13

    A radio frequency (RF) driven plasma ion source has an external RF antenna, i.e. the RF antenna is positioned outside the plasma generating chamber rather than inside. The RF antenna is typically formed of a small diameter metal tube coated with an insulator. An external RF antenna assembly is used to mount the external RF antenna to the ion source. The RF antenna tubing is wound around the external RF antenna assembly to form a coil. The external RF antenna assembly is formed of a material, e.g. quartz, which is essentially transparent to the RF waves. The external RF antenna assembly is attached to and forms a part of the plasma source chamber so that the RF waves emitted by the RF antenna enter into the inside of the plasma chamber and ionize a gas contained therein. The plasma ion source is typically a multi-cusp ion source.

  9. Sound at the zoo: Using animal monitoring, sound measurement, and noise reduction in zoo animal management.

    PubMed

    Orban, David A; Soltis, Joseph; Perkins, Lori; Mellen, Jill D

    2017-05-01

    A clear need for evidence-based animal management in zoos and aquariums has been expressed by industry leaders. Here, we show how individual animal welfare monitoring can be combined with measurement of environmental conditions to inform science-based animal management decisions. Over the last several years, Disney's Animal Kingdom® has been undergoing significant construction and exhibit renovation, warranting institution-wide animal welfare monitoring. Animal care and science staff developed a model that tracked animal keepers' daily assessments of an animal's physical health, behavior, and responses to husbandry activity; these data were matched to different external stimuli and environmental conditions, including sound levels. A case study of a female giant anteater and her environment is presented to illustrate how this process worked. Associated with this case, several sound-reducing barriers were tested for efficacy in mitigating sound. Integrating daily animal welfare assessment with environmental monitoring can lead to a better understanding of animals and their sensory environment and positively impact animal welfare. © 2017 Wiley Periodicals, Inc.

  10. Nonlinear Bubble Interactions in Acoustic Pressure Fields

    NASA Technical Reports Server (NTRS)

    Barbat, Tiberiu; Ashgriz, Nasser; Liu, Ching-Shi

    1996-01-01

    Systems consisting of a two-phase mixture, such as clouds of bubbles or drops, show many common features in their responses to different external force fields. One of particular interest is the effect of an unsteady pressure field applied to these systems, a case in which the coupling of the vibrations induced in two neighboring components (two drops or two bubbles) may result in an interaction force between them. This behavior was explained by Bjerknes, who postulated that every body moving in an accelerating fluid is subjected to a 'kinetic buoyancy' equal to the product of the acceleration of the fluid and the mass of the fluid displaced by the body. The external sound wave applied to a system of drops/bubbles triggers secondary sound waves from each component of the system. These secondary pressure fields, integrated over the surface of the neighboring drop/bubble, may result in a force additional to the effect of the primary sound wave on each component of the system. Under certain conditions, the magnitude of these secondary forces may result in significant changes in the dynamics of each component, and thus in the behavior of the entire system. In a system containing bubbles, the sound wave radiated by one bubble at the location of a neighboring one is dominated by the volume oscillation mode, and its effects can be important over a large range of frequencies. The interaction forces in a system consisting of drops are much smaller than those in a system consisting of bubbles. Therefore, as a first step towards understanding drop-drop interaction subject to external pressure fluctuations, it is more convenient to study bubble interactions. This paper presents experimental results and theoretical predictions concerning the interaction and the motion of two levitated air bubbles in water in the presence of an acoustic field at high frequencies (22-23 kHz).
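
    In symbols, the 'kinetic buoyancy' attributed to Bjerknes above reads (with \rho_f the fluid density, V the volume displaced by the body, and \mathbf{a} the local fluid acceleration):

```latex
\mathbf{F}_{\mathrm{Bjerknes}} = m_{\mathrm{displaced}}\,\mathbf{a} = \rho_f V\,\mathbf{a}
```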

  11. Negative ion source with external RF antenna

    DOEpatents

    Leung, Ka-Ngo; Hahto, Sami K.; Hahto, Sari T.

    2007-02-13

    A radio frequency (RF) driven plasma ion source has an external RF antenna, i.e. the RF antenna is positioned outside the plasma generating chamber rather than inside. The RF antenna is typically formed of a small diameter metal tube coated with an insulator. An external RF antenna assembly is used to mount the external RF antenna to the ion source. The RF antenna tubing is wound around the external RF antenna assembly to form a coil. The external RF antenna assembly is formed of a material, e.g. quartz, which is essentially transparent to the RF waves. The external RF antenna assembly is attached to and forms a part of the plasma source chamber so that the RF waves emitted by the RF antenna enter into the inside of the plasma chamber and ionize a gas contained therein. The plasma ion source is typically a multi-cusp ion source. A converter can be included in the ion source to produce negative ions.

  12. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multichannel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments were performed. The results demonstrate that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
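
    A minimal sketch of the Tikhonov-regularized inverse step that a multichannel inverse-filtering stage of this kind typically performs at each frequency; the matrix names, sizes, and regularization parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def regularized_inverse_filter(G, p, beta=1e-2):
    """Tikhonov-regularized least squares: q = (G^H G + beta*I)^-1 G^H p.

    G : (n_mics, n_sources) complex transfer matrix at one frequency (assumed known)
    p : (n_mics,) complex microphone pressures at that frequency
    beta : regularization parameter controlling the ill-posedness trade-off
    Returns the equivalent-source strengths q (n_sources,).
    """
    GhG = G.conj().T @ G
    reg = beta * np.eye(G.shape[1])
    return np.linalg.solve(GhG + reg, G.conj().T @ p)

# Toy usage: 8 microphones, 16 equivalent sources, one frequency bin.
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
p = rng.standard_normal(8) + 1j * rng.standard_normal(8)
q = regularized_inverse_filter(G, p)
print(q.shape)  # (16,)
```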

  13. 100 Day Kit for Newly Diagnosed Families of School Age Children

    MedlinePlus

    ... sense of smell may be highly sensitive. The fish at the meat counter isn’t quite fresh, ... action or condition, internal (e.g., heart rate, temperature) or external (e.g., sights, sounds, tastes, smells, ...

  14. A new approach to the effect of sound on vortex dynamics

    NASA Technical Reports Server (NTRS)

    Lund, Fernando; Zabusky, Norman J.

    1987-01-01

    Analytical results are presented on the effect of acoustic radiation on three-dimensional vortex motions in a homogeneous, slightly compressible, inviscid fluid. The flow is considered as linear and irrotational everywhere except inside a very thin cylindrical core region around the vortex filament. In the outside region, a velocity potential is introduced that must be multivalued, and it is shown how to compute this scalar potential if the motion of the vortex filament is prescribed. To find the motion of this singularity in an external potential flow, a variational principle involving a volume integral that must exclude the singular region is considered. A functional of the external potential and vortex filament position is obtained whose extrema give equations to determine the sought-after evolution. Thus, a generalization of the Biot-Savart law to flows with constant sound speed at low Mach number is obtained.
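
    For reference, the classical incompressible Biot-Savart law that the paper generalizes to low-Mach-number compressible flow can be written as follows (standard form, not quoted from the paper).

```latex
% Classical incompressible Biot-Savart law for a closed vortex filament C
% of circulation \Gamma (baseline only; the paper adds compressibility
% corrections at constant sound speed and low Mach number):
\begin{equation}
  \mathbf{u}(\mathbf{x}) = \frac{\Gamma}{4\pi}\oint_{\mathcal{C}}
  \frac{d\boldsymbol{\ell}' \times (\mathbf{x}-\mathbf{x}')}{\lvert \mathbf{x}-\mathbf{x}' \rvert^{3}}
\end{equation}
```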

  15. Development and Testing of a High Level Axial Array Duct Sound Source for the NASA Flow Impedance Test Facility

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)

    2000-01-01

    In this report both a frequency domain method for creating high level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high level sound an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre cut-on loading effect).
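
    A minimal sketch of a regularized frequency-domain pre-compensation of the kind described, in which the drive spectrum is shaped by the inverse of a measured source/duct response; the response model, frequency band, and regularization constant here are illustrative assumptions, not the report's implementation.

```python
import numpy as np

def compensating_drive_spectrum(target_P, H, beta=1e-3):
    """Regularized frequency-domain inverse: V(f) = conj(H) P / (|H|^2 + beta).

    target_P : desired downstream pressure spectrum (complex, per frequency bin)
    H        : measured frequency response from drive voltage to duct pressure
    beta     : regularization that limits the drive voltage near response nulls,
               i.e. keeps the drivers within their power constraints
    """
    return np.conj(H) * target_P / (np.abs(H) ** 2 + beta)

# Toy usage with a synthetic response and a flat target spectrum over 1000-4000 Hz.
f = np.linspace(1000.0, 4000.0, 256)          # Hz, band used in the report
H = 1.0 / (1.0 + 1j * (f - 2500.0) / 400.0)   # hypothetical duct/driver response
V = compensating_drive_spectrum(np.ones_like(f, dtype=complex), H)
```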

  16. Perceptual assessment of quality of urban soundscapes with combined noise sources and water sounds.

    PubMed

    Jeon, Jin Yong; Lee, Pyoung Jik; You, Jin; Kang, Jian

    2010-03-01

    In this study, urban soundscapes containing combined noise sources were evaluated through field surveys and laboratory experiments. The effect of water sounds on masking urban noises was then examined in order to enhance the soundscape perception. Field surveys in 16 urban spaces were conducted through soundwalking to evaluate the annoyance of combined noise sources. Synthesis curves were derived for the relationships between noise levels and the percentage of highly annoyed (%HA) and the percentage of annoyed (%A) for the combined noise sources. Qualitative analysis was also carried out using semantic scales for evaluating the quality of the soundscape, and it was shown that the perception of acoustic comfort and loudness was strongly related to the annoyance. A laboratory auditory experiment was then conducted in order to quantify the total annoyance caused by road traffic noise and four types of construction noise. It was shown that the annoyance ratings were related to the types of construction noise in combination with road traffic noise and to the level of the road traffic noise. Finally, water sounds were determined to be the best sounds to use for enhancing the urban soundscape. The level of the water sounds should be similar to, or no more than 3 dB below, the level of the urban noise.

  17. Development of a Finite-Difference Time Domain (FDTD) Model for Propagation of Transient Sounds in Very Shallow Water.

    PubMed

    Sprague, Mark W; Luczkovich, Joseph J

    2016-01-01

    This finite-difference time domain (FDTD) model for sound propagation in very shallow water uses pressure and velocity grids with both 3-dimensional Cartesian and 2-dimensional cylindrical implementations. Parameters, including water and sediment properties, can vary in each dimension. Steady-state and transient signals from discrete and distributed sources, such as the surface of a vibrating pile, can be used. The cylindrical implementation uses less computation but requires axial symmetry. The Cartesian implementation allows asymmetry. FDTD calculations compare well with those of a split-step parabolic equation. Applications include modeling the propagation of individual fish sounds, fish aggregation sounds, and distributed sources.
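
    A minimal one-dimensional pressure-velocity FDTD sketch of the staggered-grid update scheme such a model is built on; the constants, grid sizes, and source term are illustrative assumptions, and the published model is 2-D cylindrical / 3-D Cartesian with spatially varying water and sediment properties.

```python
import numpy as np

# Minimal 1-D acoustic FDTD on a staggered pressure/velocity grid (illustrative only).
c, rho = 1500.0, 1000.0          # sound speed (m/s) and density (kg/m^3) of water
dx = 0.1                         # grid spacing (m)
dt = 0.5 * dx / c                # time step satisfying the CFL condition
nx, nt = 400, 1200               # grid points, time steps

p = np.zeros(nx)                 # pressure at integer grid points
u = np.zeros(nx + 1)             # particle velocity at half-integer points

for n in range(nt):
    # velocity update from the pressure gradient
    u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
    # pressure update from the velocity divergence
    p -= dt * rho * c**2 / dx * (u[1:] - u[:-1])
    # transient source: Gaussian pulse injected near the left boundary
    p[20] += np.exp(-((n * dt - 4e-3) / 1e-3) ** 2)
```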

  18. Acoustically excited heated jets. 2: In search of a better understanding

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; Ahuja, K. K.; Brown, W. H.; Salikuddin, M.; Morris, P. J.

    1988-01-01

    The second part of a three-part report on the effects of acoustic excitation on jet mixing includes the results of an experimental investigation directed at resolving the question of poor excitability of some of the heated jets. The theoretical predictions discussed in Part 1 are examined to find explanations for the observed discrepancies between the measured and the predicted results. Additional testing was performed by studying the self-excitation of the shock-containing hot jets and also by exciting the jet with sound radiated through source tubes located externally around the periphery of the jet. The effects of nozzle-exit boundary layer conditions on jet excitability were also investigated. It is concluded that high-speed, heated jet mixing rates, and consequently also the jet excitability, depend strongly on nozzle-exit boundary layer conditions.

  19. Robotic vision. [process control applications

    NASA Technical Reports Server (NTRS)

    Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.

    1979-01-01

    Robotic vision, involving the use of a vision system to control a process, is discussed. Design and selection of active sensors, which employ radiation of radio waves, sound waves, or laser light to light up unobservable features in the scene, are considered, as are design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into different collections of contiguous picture elements having such common characteristics as color, brightness, or texture, is examined, with emphasis on the edge detection technique. The IMFEX (image feature extractor) system, which performs edge detection and thresholding at 30 frames/sec television frame rates, is described. Template matching and discrimination approaches to object recognition are noted. Applications of robotic vision in industry for tasks too monotonous or too dangerous for workers are mentioned.

  20. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction and minimizing the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response (MVDR) and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique; all techniques use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows. Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
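
    A minimal sketch of the MVDR weighting referred to above, assuming a known cross-spectral matrix R and a near-field steering vector a; the function name and diagonal-loading parameter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mvdr_weights(R, a, diag_load=1e-3):
    """Minimum variance distortionless response weights w = R^-1 a / (a^H R^-1 a).

    R : (M, M) cross-spectral (covariance) matrix of the M microphone signals
    a : (M,) array manifold (steering) vector focused at a candidate source point;
        for near-field imaging this would be a spherical-wave manifold vector
    diag_load : diagonal loading for robustness against ill-conditioning
    """
    M = R.shape[0]
    Rl = R + diag_load * np.trace(R).real / M * np.eye(M)
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# The beamformer output power at the focus point is then w^H R w.
```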

  1. Aircraft laser sensing of sound velocity in water - Brillouin scattering

    NASA Technical Reports Server (NTRS)

    Hickman, G. D.; Harding, John M.; Carnes, Michael; Pressman, AL; Kattawar, George W.; Fry, Edward S.

    1991-01-01

    A real-time data source for sound speed in the upper 100 m has been proposed for exploratory development. This data source is planned to be generated via a ship- or aircraft-mounted pulsed optical laser using the spontaneous Brillouin scattering technique. The system should be capable (from a single 10 ns, 500 mJ pulse) of yielding range-resolved sound speed profiles in water to depths of 75-100 m to an accuracy of 1 m/s. The 100 m profiles will provide the capability of rapidly monitoring the upper-ocean vertical structure. They will also provide an extensive, subsurface data source for existing real-time, operational ocean nowcast/forecast systems.
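
    For reference, the standard spontaneous Brillouin-scattering relation linking the measured frequency shift to the sound speed can be written as follows (textbook form, not quoted from the article).

```latex
% Spontaneous Brillouin shift and its inversion for sound speed
% (standard relation; symbols defined below):
\begin{equation}
  \nu_B = \frac{2 n v_s}{\lambda}\sin\frac{\theta}{2},
  \qquad
  v_s\big|_{\theta = 180^{\circ}} = \frac{\lambda\,\nu_B}{2n}
\end{equation}
% n: refractive index of water, v_s: sound speed, \lambda: laser wavelength,
% \theta: scattering angle (backscatter geometry for a lidar system).
```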

  2. Short-Latency, Goal-Directed Movements of the Pinnae to Sounds That Produce Auditory Spatial Illusions

    PubMed Central

    McClaine, Elizabeth M.; Yin, Tom C. T.

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848

  3. Short-latency, goal-directed movements of the pinnae to sounds that produce auditory spatial illusions.

    PubMed

    Tollin, Daniel J; McClaine, Elizabeth M; Yin, Tom C T

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a "phantom" sound located between the sources. Consistent with localization dominance, for delays from 400 μs to approximately 10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion was similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (approximately 30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved.

  4. Investigation of hydraulic transmission noise sources

    NASA Astrophysics Data System (ADS)

    Klop, Richard J.

    Advanced hydrostatic transmissions and hydraulic hybrids show potential in new market segments such as commercial vehicles and passenger cars. Such new applications regard low noise generation as a high priority, thus demanding new, quiet hydrostatic transmission designs. In this thesis, the aim is to investigate noise sources of hydrostatic transmissions to discover strategies for designing compact and quiet solutions. A model has been developed to capture the interaction of a pump and motor working in a hydrostatic transmission and to predict overall noise sources. This model allows a designer to compare noise sources for various configurations and to design compact and inherently quiet solutions. The model describes the dynamics of the system by coupling lumped-parameter pump and motor models with a one-dimensional unsteady compressible transmission line model. The model has been verified with dynamic pressure measurements in the line over a wide operating range for several system structures. Simulation studies were performed illustrating the sensitivities of several design variables and the potential of the model to design transmissions with minimal noise sources. A semi-anechoic chamber suitable for sound intensity measurements, from which sound power can be derived, has been designed and constructed. Measurements proved the potential to reduce audible noise by predicting and reducing both noise sources. Sound power measurements were conducted on a series hybrid transmission test bench to validate the model and compare predicted noise sources with sound power.

  5. Why Do People Like Loud Sound? A Qualitative Study

    PubMed Central

    Welch, David; Fremaux, Guy

    2017-01-01

    Many people choose to expose themselves to potentially dangerous sounds such as loud music, either via speakers, personal audio systems, or at clubs. The Conditioning, Adaptation and Acculturation to Loud Music (CAALM) Model has proposed a theoretical basis for this behaviour. To compare the model to data, we interviewed a group of people who were either regular nightclub-goers or who controlled the sound levels in nightclubs (bar managers, musicians, DJs, and sound engineers) about loud sound. Results showed four main themes relating to the enjoyment of loud sound: arousal/excitement, facilitation of socialisation, masking of both external sound and unwanted thoughts, and an emphasis and enhancement of personal identity. Furthermore, an interesting incidental finding was that sound levels appeared to increase gradually over the course of the evening until they plateaued at approximately 97 dBA Leq around midnight. Consideration of the data generated by the analysis revealed a complex of influential factors that support people in wanting exposure to loud sound. Findings were considered in terms of the CAALM Model and could be explained in terms of its principles. From a health promotion perspective, the Social Ecological Model was applied to consider how the themes identified might influence behaviour. They were shown to influence people on multiple levels, providing a powerful system which health promotion approaches struggle to address. PMID:28800097

  6. On the Possible Detection of Lightning Storms by Elephants

    PubMed Central

    Kelley, Michael C.; Garstang, Michael

    2013-01-01

    Simple Summary: We use data similar to that taken by the International Monitoring System for the detection of nuclear explosions, to determine whether elephants might be capable of detecting and locating the source of sounds generated by thunderstorms. Knowledge that elephants might be capable of responding to such storms, particularly at the end of the dry season when migrations are initiated, is of considerable interest to management and conservation. Abstract: Theoretical calculations suggest that sounds produced by thunderstorms and detected by a system similar to the International Monitoring System (IMS) for the detection of nuclear explosions at distances ≥100 km, are at sound pressure levels equal to or greater than 6 × 10−3 Pa. Such sound pressure levels are well within the range of elephant hearing. Frequencies carrying these sounds might allow for interaural time delays such that adult elephants could not only hear but could also locate the source of these sounds. Determining whether it is possible for elephants to hear and locate thunderstorms contributes to the question of whether elephant movements are triggered or influenced by these abiotic sounds. PMID:26487406

  7. International Space Station External Contamination Status

    NASA Technical Reports Server (NTRS)

    Mikatarian, Ron; Soares, Carlos

    2000-01-01

    Presentation slides examine external contamination requirements; International Space Station (ISS) external contamination sources; ISS external contamination sensitive surfaces; external contamination control; external contamination control for pre-launch verification; flight experiments and observations; the Space Shuttle Orbiter waste water dump, materials outgassing, and active vacuum vents; an example of a molecular column density profile; modeling and analysis tools; sources of outgassing-induced contamination analyzed to date; quiescent sources; observations on optical degradation due to induced external contamination in LEO; examples of typical contaminant and depth profiles; and the status of the ISS system, material outgassing, thruster plumes, and optical degradation.

  8. Minke whale song, spacing, and acoustic communication on the Great Barrier Reef, Australia

    NASA Astrophysics Data System (ADS)

    Gedamke, Jason

    An inquisitive population of minke whales (Balaenoptera acutorostrata) that concentrates on the Great Barrier Reef during its suspected breeding season offered a unique opportunity to conduct a multi-faceted study of a little-known Balaenopteran species' acoustic behavior. Chapter one investigates whether the minke whale is the source of an unusual, complex, and stereotyped recorded sound, the "star-wars" vocalization. A hydrophone array was towed from a vessel to record sounds from circling whales for subsequent localization of sound sources. These acoustic locations were matched with shipboard and in-water observations of the minke whale, demonstrating that the minke whale was the source of this unusual sound. Spectral and temporal features of this sound and the source levels at which it is produced are described. The repetitive "star-wars" vocalization appears similar to the songs of other whale species and has characteristics consistent with reproductive advertisement displays. Chapter two investigates whether song (i.e., the "star-wars" vocalization) has a spacing function, through passive monitoring of singer spatial patterns with a moored five-sonobuoy array. Active song playback experiments to singers were also conducted to further test song function. A nearest-neighbor analysis and animated tracks of singer movements demonstrated that singers naturally maintain spatial separation between one another. In response to active song playbacks, singers generally moved away and repeated song more quickly, suggesting that song repetition interval may help regulate spatial interaction and singer separation. These results further indicate that the Great Barrier Reef may be an important reproductive habitat for this species. Chapter three investigates whether song is part of a potentially graded repertoire of acoustic signals. Utilizing both vessel-based recordings and remote recordings from the sonobuoy array, temporal and spectral features, source levels, and associated contextual data of recorded sounds were analyzed. Two categories of sound are described: (1) patterned song, which was regularly repeated in one of three patterns (slow, fast, and rapid-clustered repetition), and (2) non-patterned "social" sounds recorded from gregarious assemblages of whales. These discrete acoustic signals may comprise a graded system of communication (slow/fast song → rapid-clustered song → social sounds) that is related to the spacing between whales.

  9. Investigation of the sound generation mechanisms for in-duct orifice plates.

    PubMed

    Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning

    2017-08-01

    Sound generation due to an orifice plate in a hard-walled flow duct, a configuration commonly used in air distribution systems (ADS) and flow meters, is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate from which the surface pressure distribution and correlation length are deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plates are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.

  10. Effect of eye position on saccades and neuronal responses to acoustic stimuli in the superior colliculus of the behaving cat.

    PubMed

    Populin, Luis C; Tollin, Daniel J; Yin, Tom C T

    2004-10-01

    We examined the motor error hypothesis of visual and auditory interaction in the superior colliculus (SC), first tested by Jay and Sparks in the monkey. We trained cats to direct their eyes to the location of acoustic sources and studied the effects of eye position on both the ability of cats to localize sounds and the auditory responses of SC neurons with the head restrained. Sound localization accuracy was generally not affected by initial eye position, i.e., accuracy was not proportionally affected by the deviation of the eyes from the primary position at the time of stimulus presentation, showing that eye position is taken into account when orienting to acoustic targets. The responses of most single SC neurons to acoustic stimuli in the intact cat were modulated by eye position in the direction consistent with the predictions of the "motor error" hypothesis, but the shift accounted for only two-thirds of the initial deviation of the eyes. However, when the average horizontal sound localization error, which was approximately 35% of the target amplitude, was taken into account, the magnitude of the horizontal shifts in the SC auditory receptive fields matched the observed behavior. The modulation by eye position was not due to concomitant movements of the external ears, as confirmed by recordings carried out after immobilizing the pinnae of one cat. However, the pattern of modulation after pinnae immobilization was inconsistent with the observations in the intact cat, suggesting that, in the intact animal, information about the position of the pinnae may be taken into account.

  11. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

    Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in the perception of the external world and that common underlying perceptual and neural mechanisms may exist for spatiotemporal processing. PMID:26733827

  12. The Confirmation of the Inverse Square Law Using Diffraction Gratings

    ERIC Educational Resources Information Center

    Papacosta, Pangratios; Linscheid, Nathan

    2014-01-01

    Understanding the inverse square law, how for example the intensity of light or sound varies with distance, presents conceptual and mathematical challenges. Students know intuitively that intensity decreases with distance. A light source appears dimmer and sound gets fainter as the distance from the source increases. The difficulty is in…
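
    For reference, the inverse square law discussed in the article can be written as follows (standard form, not quoted from the article).

```latex
% Inverse square law for a point source of total radiated power P:
\begin{equation}
  I(r) = \frac{P}{4\pi r^{2}},
  \qquad
  \frac{I(r_2)}{I(r_1)} = \left(\frac{r_1}{r_2}\right)^{2}
\end{equation}
```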

  13. The sound strength parameter G and its importance in evaluating and planning the acoustics of halls for music.

    PubMed

    Beranek, Leo

    2011-05-01

    The parameter, "Strength of Sound G" is closely related to loudness. Its magnitude is dependent, inversely, on the total sound absorption in a room. By comparison, the reverberation time (RT) is both inversely related to the total sound absorption in a hall and directly related to its cubic volume. Hence, G and RT in combination are vital in planning the acoustics of a concert hall. A newly proposed "Bass Index" is related to the loudness of the bass sound and equals the value of G at 125 Hz in decibels minus its value at mid-frequencies. Listener envelopment (LEV) is shown for most halls to be directly related to the mid-frequency value of G. The broadening of sound, i.e., apparent source width (ASW) is given by degree of source broadening (DSB) which is determined from the combined effect of early lateral reflections as measured by binaural quality index (BQI) and strength G. The optimum values and limits of these parameters are discussed.

  14. The role of long-term familiarity and attentional maintenance in short-term memory for timbre.

    PubMed

    Siedenburg, Kai; McAdams, Stephen

    2017-04-01

    We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched one of three previously presented sounds (item recognition). In Exp. 1, musicians recognised familiar acoustic sounds better than unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance with concurrent articulatory suppression, with visual interference, and with a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of the musical training of participants or the type of sounds. Our results suggest that familiarity with sound-source categories and attention play important roles in short-term memory for timbre, which rules out accounts based solely on sensory persistence.

  15. Octopus Cells in the Posteroventral Cochlear Nucleus Provide the Main Excitatory Input to the Superior Paraolivary Nucleus

    PubMed Central

    Felix II, Richard A.; Gourévitch, Boris; Gómez-Álvarez, Marcelo; Leijon, Sara C. M.; Saldaña, Enrique; Magnusson, Anna K.

    2017-01-01

    Auditory streaming enables perception and interpretation of complex acoustic environments that contain competing sound sources. At early stages of central processing, sounds are segregated into separate streams representing attributes that later merge into acoustic objects. Streaming of temporal cues is critical for perceiving vocal communication, such as human speech, but our understanding of circuits that underlie this process is lacking, particularly at subcortical levels. The superior paraolivary nucleus (SPON), a prominent group of inhibitory neurons in the mammalian brainstem, has been implicated in processing temporal information needed for the segmentation of ongoing complex sounds into discrete events. The SPON requires temporally precise and robust excitatory input(s) to convey information about the steep rise in sound amplitude that marks the onset of voiced sound elements. Unfortunately, the sources of excitation to the SPON and the impact of these inputs on the behavior of SPON neurons have yet to be resolved. Using anatomical tract tracing and immunohistochemistry, we identified octopus cells in the contralateral cochlear nucleus (CN) as the primary source of excitatory input to the SPON. Cluster analysis of miniature excitatory events also indicated that the majority of SPON neurons receive one type of excitatory input. Precise octopus cell-driven onset spiking coupled with transient offset spiking make SPON responses well-suited to signal transitions in sound energy contained in vocalizations. Targets of octopus cell projections, including the SPON, are strongly implicated in the processing of temporal sound features, which suggests a common pathway that conveys information critical for perception of complex natural sounds. PMID:28620283

  16. Harmonic Hopping, and Both Punctuated and Gradual Evolution of Acoustic Characters in Selasphorus Hummingbird Tail-Feathers

    PubMed Central

    Clark, Christopher James

    2014-01-01

    Models of character evolution often assume a single mode of evolutionary change, such as continuous, or discrete. Here I provide an example in which a character exhibits both types of change. Hummingbirds in the genus Selasphorus produce sound with fluttering tail-feathers during courtship. The ancestral character state within Selasphorus is production of sound with an inner tail-feather, R2, in which the sound usually evolves gradually. Calliope and Allen's Hummingbirds have evolved autapomorphic acoustic mechanisms that involve feather-feather interactions. I develop a source-filter model of these interactions. The ‘source’ comprises feather(s) that are both necessary and sufficient for sound production, and are aerodynamically coupled to neighboring feathers, which act as filters. Filters are unnecessary or insufficient for sound production, but may evolve to become sources. Allen's Hummingbird has evolved to produce sound with two sources, one with feather R3, another frequency-modulated sound with R4, and their interaction frequencies. Allen's R2 retains the ancestral character state, a ∼1 kHz “ghost” fundamental frequency masked by R3, which is revealed when R3 is experimentally removed. In the ancestor to Allen's Hummingbird, the dominant frequency has ‘hopped’ to the second harmonic without passing through intermediate frequencies. This demonstrates that although the fundamental frequency of a communication sound may usually evolve gradually, occasional jumps from one character state to another can occur in a discrete fashion. Accordingly, mapping acoustic characters on a phylogeny may produce misleading results if the physical mechanism of production is not known. PMID:24722049

  17. A Green Soundscape Index (GSI): The potential of assessing the perceived balance between natural sound and traffic noise.

    PubMed

    Kogan, Pablo; Arenas, Jorge P; Bermejo, Fernando; Hinalaf, María; Turra, Bruno

    2018-06-13

    Urban soundscapes are dynamic and complex multivariable environmental systems. Soundscapes can be organized into three main entities containing the multiple variables: Experienced Environment (EE), Acoustic Environment (AE), and Extra-Acoustic Environment (XE). This work applies a multidimensional and synchronic data-collecting methodology at eight urban environments in the city of Córdoba, Argentina. The EE was assessed by means of surveys, the AE by acoustic measurements and audio recordings, and the XE by photos, video, and complementary sources. In total, 39 measurement locations were considered, where data corresponding to 61 AE and 203 EE were collected. Multivariate analysis and GIS techniques were used for data processing. The types of sound sources perceived (traffic, people, natural sounds, and others) and their perceived extents make up part of the collected variables belonging to the EE. The sources explaining most of the variance were traffic noise and natural sounds. Thus, a Green Soundscape Index (GSI) is defined here as the ratio of the perceived extent of natural sounds to that of traffic noise. Collected data were divided into three ranges according to GSI value: 1) perceptual predominance of traffic noise, 2) balanced perception, and 3) perceptual predominance of natural sounds. For each group, three additional variables from the EE and three from the AE were examined; these showed significant differences, especially between ranges 1 and 2 versus range 3. These results confirm the key role of perceiving natural sounds in a town environment and also support the proposal of the GSI as a valuable indicator for classifying urban soundscapes. In addition, the collected GSI-related data significantly help to assess the overall soundscape. It is noted that this proposed simple perceptual index not only allows one to assess and classify urban soundscapes but also contributes greatly toward a technique for separating environmental sound sources. Copyright © 2018 Elsevier B.V. All rights reserved.
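
    A minimal sketch of how the GSI and its three perceptual ranges could be computed from survey responses; the function names and numeric cut-offs are illustrative assumptions and are not taken from the paper.

```python
def green_soundscape_index(natural_extent, traffic_extent):
    """GSI = perceived extent of natural sounds / perceived extent of traffic noise."""
    return natural_extent / traffic_extent

def classify_gsi(gsi, lo=0.8, hi=1.2):
    """Map a GSI value to the three perceptual ranges described in the abstract.

    The cut-offs lo/hi are illustrative assumptions, not values from the paper.
    """
    if gsi < lo:
        return "1: perceptual predominance of traffic noise"
    if gsi > hi:
        return "3: perceptual predominance of natural sounds"
    return "2: balanced perception"

# Toy usage with hypothetical survey ratings of perceived extent.
print(classify_gsi(green_soundscape_index(3.2, 1.8)))
```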

  18. Trip Report - June 1989 Swallow Float Deployment with RUM

    DTIC Science & Technology

    1990-12-01

    Float 1, with its external geophone package resting on the sediment, and float 3, equipped with an infrasonic hydrophone and tethered to the bottom... Data from two floats, one with an external, triaxial geophone package resting on the ocean bottom and the other equipped with an infrasonic hydrophone and bottom-tethered by a 0.5-meter line, are presented in this report. Introduction: An experiment designed to compare the ambient sound

  19. Focusing and directional beaming effects of airborne sound through a planar lens with zigzag slits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kun; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang

    2015-01-14

    Based on the Huygens-Fresnel principle, we design a planar lens to efficiently realize the interconversion between a point-like sound source and a Gaussian beam in ambient air. The lens is constructed from a planar plate perforated elaborately with a nonuniform array of zigzag slits, where the slit exits act as subwavelength-sized secondary sources carrying the desired sound responses. Experiments performed in the audible regime agree well with the theoretical predictions. This compact device could be useful in daily-life applications, such as for medical and detection purposes.

  20. Acoustics Research of Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Gao, Ximing; Houston, Janice

    2014-01-01

    The liftoff phase induces high acoustic loading over a broad frequency range for a launch vehicle. These external acoustic environments are used in the prediction of the internal vibration responses of the vehicle and components. Present liftoff vehicle acoustic environment prediction methods utilize stationary data from previously conducted hold-down tests to generate 1/3 octave band Sound Pressure Level (SPL) spectra. In an effort to update the accuracy and quality of liftoff acoustic loading predictions, non-stationary flight data from the Ares I-X were processed in PC-Signal in two flight phases: simulated hold-down and liftoff. In conjunction, the Prediction of Acoustic Vehicle Environments (PAVE) program was developed in MATLAB to allow for efficient predictions of SPLs as a function of station number along the vehicle using semi-empirical methods. This consisted of generating the Dimensionless Spectrum Function (DSF) and Dimensionless Source Location (DSL) curves from the Ares I-X flight data, which are then used in the MATLAB program to generate the 1/3 octave band SPL spectra. Concluding results show major differences in SPLs between the hold-down test data and the processed Ares I-X flight data, making the Ares I-X flight data more practical for future vehicle acoustic environment predictions.

  1. Flow Patterns in the Jugular Veins of Pulsatile Tinnitus Patients

    PubMed Central

    Kao, Evan; Kefayati, Sarah; Amans, Matthew R.; Faraji, Farshid; Ballweber, Megan; Halbach, Van; Saloner, David

    2017-01-01

    Pulsatile Tinnitus (PT) is a pulse-synchronous sound heard in the absence of an external source. PT is often related to abnormal flow in vascular structures near the cochlea. One vascular territory implicated in PT is the internal jugular vein (IJV). Using computational fluid dynamics (CFD) based on patient-specific Magnetic Resonance Imaging (MRI), we investigated the flow within the IJV of seven subjects, four symptomatic and three asymptomatic of PT. We found that there were two extreme anatomic types classified by the shape and position of the jugular bulbs: elevated and rounded. PT patients had elevated jugular bulbs that led to a distinctive helical flow pattern within the proximal internal jugular vein. Asymptomatic subjects generally had rounded jugular bulbs that neatly redirected flow from the sigmoid sinus directly into the jugular vein. These two flow patterns were quantified by calculating the length-averaged streamline curvature of the flow within the proximal jugular vein: 130.3 ± 8.1 m⁻¹ for geometries with rounded bulbs, 260.7 ± 29.4 m⁻¹ for those with elevated bulbs (P < 0.005). Our results suggest that variations in the jugular bulb geometry lead to distinct flow patterns that are linked to PT, but further investigation is needed to determine if the vortex pattern is causal to sound generation. PMID:28057349

  2. Electrical Stimulation of the Ear, Head, Cranial Nerve, or Cortex for the Treatment of Tinnitus: A Scoping Review

    PubMed Central

    Adjamian, Peyman

    2016-01-01

    Tinnitus is defined as the perception of sound in the absence of an external source. It is often associated with hearing loss and is thought to result from abnormal neural activity at some point or points in the auditory pathway, which is incorrectly interpreted by the brain as an actual sound. Neurostimulation therapies therefore, which interfere on some level with that abnormal activity, are a logical approach to treatment. For tinnitus, where the pathological neuronal activity might be associated with auditory and other areas of the brain, interventions using electromagnetic, electrical, or acoustic stimuli separately, or paired electrical and acoustic stimuli, have been proposed as treatments. Neurostimulation therapies should modulate neural activity to deliver a permanent reduction in tinnitus percept by driving the neuroplastic changes necessary to interrupt abnormal levels of oscillatory cortical activity and restore typical levels of activity. This change in activity should alter or interrupt the tinnitus percept (reduction or extinction) making it less bothersome. Here we review developments in therapies involving electrical stimulation of the ear, head, cranial nerve, or cortex in the treatment of tinnitus which demonstrably, or are hypothesised to, interrupt pathological neuronal activity in the cortex associated with tinnitus. PMID:27403346

  3. High-frequency monopole sound source for anechoic chamber qualification

    NASA Astrophysics Data System (ADS)

    Saussus, Patrick; Cunefare, Kenneth A.

    2003-04-01

    Anechoic chamber qualification procedures require the use of an omnidirectional monopole sound source. Required characteristics for these monopole sources are explicitly listed in ISO 3745. Building a high-frequency monopole source that meets these characteristics has proved difficult due to the size limitations imposed by small wavelengths at high frequency. A prototype design developed for use in hemianechoic chambers employs telescoping tubes, which act as an inverse horn. This same design can be used in anechoic chambers, with minor adaptations. A series of gradually decreasing brass telescoping tubes is attached to the throat of a well-insulated high-frequency compression driver. Therefore, all of the sound emitted from the driver travels through the horn and exits through an opening of approximately 2.5 mm. Directivity test data show that this design meets all of the requirements set forth by ISO 3745.

  4. Assessment of sound levels in a neonatal intensive care unit in tabriz, iran.

    PubMed

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-03-01

    High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.

  5. Assessment of Sound Levels in a Neonatal Intensive Care Unit in Tabriz, Iran

    PubMed Central

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-01-01

    Introduction: High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). Methods: In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Results: Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Conclusion: Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound. PMID:25276706

  6. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and spatial configuration of the environment. The distribution of naturally encountered cues and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary across frequency channels much less, and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
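
    A minimal sketch, loosely in the spirit of the ICA analysis described, that learns basis functions from short binaural frames using scikit-learn's FastICA; the frame length, component count, and framing scheme are illustrative assumptions and not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumed dependency, not from the paper

def learn_binaural_basis(left, right, frame_len=256, n_components=32):
    """Learn ICA basis functions from stacked left/right signal frames."""
    n_frames = min(len(left), len(right)) // frame_len
    # each observation is a joint left+right frame of length 2*frame_len
    frames = np.stack([
        np.concatenate([left[i * frame_len:(i + 1) * frame_len],
                        right[i * frame_len:(i + 1) * frame_len]])
        for i in range(n_frames)
    ])
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(frames)
    return ica.mixing_  # columns are the learned binaural basis functions

# Toy usage with white-noise "recordings" standing in for real binaural audio.
rng = np.random.default_rng(1)
basis = learn_binaural_basis(rng.standard_normal(48000), rng.standard_normal(48000))
print(basis.shape)  # (2 * frame_len, n_components)
```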

  7. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and spatial configuration of the environment. The distribution of naturally encountered cues and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary across frequency channels much less, and IPDs often attain much higher values, than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  8. Electrophysiological correlates of cocktail-party listening.

    PubMed

    Lewald, Jörg; Getzmann, Stephan

    2015-10-01

    Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Determination of equivalent sound speed profiles for ray tracing in near-ground sound propagation.

    PubMed

    Prospathopoulos, John M; Voutsinas, Spyros G

    2007-09-01

    The determination of appropriate sound speed profiles for modeling near-ground propagation with ray tracing is investigated using a ray tracing model capable of performing axisymmetric calculations of the sound field around an isolated source. Eigenrays are traced using an iterative procedure that integrates the trajectory equations for each ray launched from the source in a specific direction. Sound energy losses are calculated by introducing into the equations appropriate coefficients representing the effects of ground and atmospheric absorption and the interaction with atmospheric turbulence. The model is validated against analytical and numerical predictions of other methodologies for simple cases, as well as against measurements for nonrefractive atmospheric environments. A systematic investigation of near-ground propagation in downward and upward refracting atmospheres is made using experimental data. Guidelines for suitable simulation of the wind velocity profile are derived by correlating predictions with measurements.
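
    A minimal sketch of stepping a single ray through a stratified sound speed profile c(z), the kind of trajectory integration such a ray tracing model performs; the ray equations are the standard ones for a horizontally stratified medium, and the profile, launch angle, and step size are illustrative assumptions.

```python
import numpy as np

def trace_ray(c_of_z, dc_dz, z0=1.5, theta0_deg=5.0, ds=0.5, max_range=500.0):
    """Step a single ray through a stratified medium with sound speed c(z).

    Ray equations (theta measured from the horizontal):
      dx/ds = cos(theta), dz/ds = sin(theta),
      dtheta/ds = -(cos(theta)/c) * dc/dz.
    c_of_z, dc_dz : callables giving the sound speed and its vertical gradient.
    Returns the (x, z) points along the ray until max_range or the ground.
    """
    x, z, theta = 0.0, z0, np.radians(theta0_deg)
    path = [(x, z)]
    while x < max_range and z >= 0.0:
        c = c_of_z(z)
        x += ds * np.cos(theta)
        z += ds * np.sin(theta)
        theta -= ds * (np.cos(theta) / c) * dc_dz(z)
        path.append((x, z))
    return np.array(path)

# Example: linear downward-refracting profile c(z) = 340 + 0.1*z (m/s).
ray = trace_ray(lambda z: 340.0 + 0.1 * z, lambda z: 0.1)
```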

  10. Acoustic centering of sources measured by surrounding spherical microphone arrays.

    PubMed

    Hagai, Ilan Ben; Pollow, Martin; Vorländer, Michael; Rafaely, Boaz

    2011-10-01

    The radiation patterns of acoustic sources have great significance in a wide range of applications, such as measuring the directivity of loudspeakers and investigating the radiation of musical instruments for auralization. Recently, surrounding spherical microphone arrays have been studied for sound field analysis, facilitating measurement of the pressure around a sphere and the computation of the spherical harmonics spectrum of the sound source. However, the sound radiation pattern may be affected by the location of the source inside the microphone array, which is an undesirable property when aiming to characterize source radiation in a unique manner. This paper presents a theoretical analysis of the spherical harmonics spectrum of spatially translated sources and defines four measures for the misalignment of the acoustic center of a radiating source. Optimization is used to promote optimal alignment based on the proposed measures and the errors caused by numerical and array-order limitations are investigated. This methodology is examined using both simulated and experimental data in order to investigate the performance and limitations of the different alignment methods. © 2011 Acoustical Society of America

  11. Propagation characteristics of audible noise generated by single corona source under positive DC voltage

    NASA Astrophysics Data System (ADS)

    Li, Xuebao; Cui, Xiang; Lu, Tiebing; Wang, Donglai

    2017-10-01

    The directivity and lateral profile of corona-generated audible noise (AN) from a single corona source are measured in experiments carried out in a semi-anechoic laboratory. The experimental results show that the waveform of corona-generated AN consists of a series of random sound pressure pulses whose amplitudes decrease with increasing measurement distance. A single corona source can be regarded as a non-directional AN source, and the A-weighted sound pressure level (SPL) decreases by 6 dB(A) for each doubling of the measurement distance. Qualitative explanations for treating the single corona source as a point source are then given on the basis of Ingard's theory of sound generation in corona discharge. Furthermore, ground reflection and air attenuation are taken into account to reconstruct the propagation characteristics of AN from the single corona source. The calculated results agree well with the measurements, which validates the propagation model. Finally, the influence of the ground reflection on the SPL is presented.
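    A minimal sketch of how such a propagation model can be assembled, assuming a monopole point source, an image source for the ground reflection with a constant reflection coefficient, and a user-supplied air-absorption coefficient; all numerical values are illustrative rather than the paper's measured parameters, and A-weighting is not applied.

```python
import numpy as np

def spl_at_receiver(Lw, src_h, rcv_h, dist, freq,
                    refl_coeff=0.95, alpha_db_per_m=0.005):
    """Point-source propagation with ground reflection (image source) and
    air attenuation. Lw is the source sound power level in dB re 1 pW;
    refl_coeff and alpha_db_per_m are illustrative placeholder values."""
    r_direct = np.hypot(dist, rcv_h - src_h)
    r_image = np.hypot(dist, rcv_h + src_h)
    k = 2 * np.pi * freq / 343.0
    # complex pressure amplitudes with spherical spreading (1/r)
    p_dir = np.exp(1j * k * r_direct) / r_direct
    p_img = refl_coeff * np.exp(1j * k * r_image) / r_image
    p2 = abs(p_dir + p_img) ** 2              # coherent summation
    spl = Lw - 11.0 + 10 * np.log10(p2)       # free-field monopole: Lw - 20log10(r) - 11
    spl -= alpha_db_per_m * r_direct          # air absorption
    return spl

# Free-field check (reflection switched off): ~6 dB decrease per doubling
# of distance, plus a small extra loss from air absorption.
print(spl_at_receiver(90, 1.0, 1.5, 20, 1000, refl_coeff=0.0)
      - spl_at_receiver(90, 1.0, 1.5, 40, 1000, refl_coeff=0.0))
```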

  12. Cognitive and Linguistic Sources of Variance in 2-Year-Olds' Speech-Sound Discrimination: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Lalonde, Kaylah; Holt, Rachael Frush

    2014-01-01

    Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2- year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically…

  13. The use of an intraoral electrolarynx for an edentulous patient: a clinical report.

    PubMed

    Wee, Alvin G; Wee, Lisa A; Cheng, Ansgar C; Cwynar, Roger B

    2004-06-01

    This clinical report describes the clinical requirements, treatment sequence, and use of a relatively new intraoral electrolarynx for a completely edentulous patient. This device consists of a sound source attached to the maxilla and a hand-held controller unit that controls the pitch and volume of the intraoral sound source via transmitted radio waves.

  14. Spherical beamforming for spherical array with impedance surface

    NASA Astrophysics Data System (ADS)

    Tontiwattanakul, Khemapat

    2018-01-01

    Spherical microphone array beamforming has been a popular research topic in recent years. Because such arrays can steer an essentially isotropic beam in three-dimensional space over a usable frequency range, they are widely used in applications such as sound field recording, acoustic beamforming, and noise source localisation. The body of a spherical array is usually considered perfectly rigid. The sound field captured by the sensors on a spherical array can be decomposed into a series of spherical harmonics. In noise source localisation, the amplitude density of sound sources is estimated and illustrated by means of colour maps. In this work, a rigid spherical array covered with fibrous material is studied via numerical simulation and the performance of the spherical beamforming is discussed.
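    A bare-bones sketch of the processing chain referred to above: the microphone pressures are decomposed into spherical-harmonic coefficients by least squares, and a plane-wave-decomposition map is obtained by dividing each coefficient by a radial term b_n before steering. The open-sphere b_n = 4π·iⁿ·jₙ(kr) below is used purely for illustration; a rigid or fibrous-covered sphere replaces it with its own radial function, which is exactly what the surface treatment modifies.

```python
import numpy as np
from scipy.special import sph_harm, spherical_jn

def sh_coefficients(p, colat, azim, order):
    """Least-squares spherical-harmonic decomposition of pressures p measured
    at microphone directions (colat = colatitude, azim = azimuth, radians).
    Requires (order + 1)**2 <= number of microphones."""
    Y = np.column_stack([sph_harm(m, n, azim, colat)
                         for n in range(order + 1) for m in range(-n, n + 1)])
    pnm, *_ = np.linalg.lstsq(Y, p, rcond=None)
    return pnm

def pwd_map(pnm, kr, look_colat, look_azim, order):
    """Plane-wave-decomposition beamformer: divide each coefficient by a
    radial term b_n and steer over the look directions. Here b_n is the
    open-sphere term 4*pi*(1j**n)*j_n(kr), so kr must avoid zeros of j_n;
    a rigid or impedance-covered sphere has a different b_n, and the phase
    convention of b_n decides whether the map peaks at the arrival or the
    propagation direction."""
    out = np.zeros(np.shape(look_colat), dtype=complex)
    i = 0
    for n in range(order + 1):
        bn = 4.0 * np.pi * (1j ** n) * spherical_jn(n, kr)
        for m in range(-n, n + 1):
            out += pnm[i] / bn * sph_harm(m, n, look_azim, look_colat)
            i += 1
    return np.abs(out)
```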

  15. Study on the Non-contact Acoustic Inspection Method for Concrete Structures by using Strong Ultrasonic Sound source

    NASA Astrophysics Data System (ADS)

    Sugimoto, Tsuneyoshi; Uechi, Itsuki; Sugimoto, Kazuko; Utagawa, Noriyuki; Katakura, Kageyoshi

    The hammering test is widely used to inspect for defects in concrete structures. However, this method is difficult to apply at high places, such as tunnel ceilings or bridge girders, and its detection accuracy depends on the tester's experience. We therefore study a non-contact acoustic inspection method for concrete structures that uses airborne sound waves and a laser Doppler vibrometer. In this method, the concrete surface is excited by an airborne sound wave emitted from a long range acoustic device (LRAD), and the vibration velocity of the concrete surface is measured with a laser Doppler vibrometer. A defective part is detected through the same flexural resonance exploited by the hammering method. It has already been shown, using a concrete test specimen, that defects can be detected from a distance of 5 m or more, and that the method can be applied to real concrete structures. However, when a conventional LRAD was used as the sound source, there were problems such as restrictions on the measurement angle and sensitivity to surrounding noise. To address these problems, a basic examination using a strong ultrasonic sound source was carried out. In the experiment, a concrete test specimen containing an imitation defect was measured from a distance of 5 m. The experimental results show that, with the ultrasonic sound source, the restrictions on the measurement angle become less severe and the ambient noise is also reduced dramatically.

  16. Double external jugular vein and other rare venous variations of the head and neck.

    PubMed

    Shenoy, Varsha; Saraswathi, Perumal; Raghunath, Gunapriya; Karthik, Jayakumar Sai

    2012-12-01

    Superficial veins of the head and neck are utilised for central venous cannulation, oral reconstruction and parenteral nutrition in debilitated patients. Clinical and sonological examinations of these veins may provide clues toward underlying cardiac pathology. Hence, although variations in these vessels are common, a sound knowledge of such variations becomes clinically important to surgeons, radiologists and interventional anaesthetists. We report a rare case of a left-sided double external jugular vein where the common facial vein continued as the second external jugular vein, and where there was a communicating channel between the internal jugular vein on the same side and the anterior jugular vein.

  17. Measurements of farfield sound generation from a flow-excited cavity

    NASA Technical Reports Server (NTRS)

    Block, P. J. W.; Heller, H.

    1975-01-01

    Results of 1/3-octave-band spectral measurements of internal pressures and the external acoustic field of a tangentially blown rectangular cavity are compared. Proposed mechanisms for sound generation are reviewed, and spectra and directivity plots of cavity noise are presented. Directivity plots show a slightly modified monopole pattern. Frequencies of cavity response are calculated using existing predictions and are compared with those obtained experimentally. The effect of modifying the upstream boundary layer on the noise was investigated, and its effectiveness was found to be a function of cavity geometry and flow velocity.

  18. Ultrasonic waves in classical gases

    NASA Astrophysics Data System (ADS)

    Magner, A. G.; Gorenstein, M. I.; Grygoriev, U. V.

    2017-12-01

    The velocity and absorption coefficient for the plane sound waves in a classical gas are obtained by solving the Boltzmann kinetic equation, which describes the reaction of the single-particle distribution function to a periodic external field. Within the linear response theory, the nonperturbative dispersion equation valid for all sound frequencies is derived and solved numerically. The results are in agreement with the approximate analytical solutions found for both the frequent- and rare-collision regimes. These results are also in qualitative agreement with the experimental data for ultrasonic waves in dilute gases.

  19. Time-of-Flight Measurement of Sound Speed in Air

    ERIC Educational Resources Information Center

    Ganci, Salvatore

    2011-01-01

    This paper describes a set of simple experiments with a very low cost using a notebook as a measuring instrument without external hardware. The major purpose is to provide demonstration experiments for schools with very low budgets. (Contains 6 figures.)
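    The article's apparatus is not detailed here, but a typical notebook-only variant records a sharp clap on the laptop's two microphone channels and estimates the arrival-time difference by cross-correlation; with a known microphone spacing this yields the sound speed. A minimal sketch under those assumptions:

```python
import numpy as np

def sound_speed_from_stereo(x_left, x_right, fs, mic_spacing_m):
    """Estimate the speed of sound from the arrival-time difference of a
    sharp transient (e.g., a clap made in line with the two microphones).
    x_left, x_right: 1-D sample arrays; fs: sample rate in Hz;
    mic_spacing_m: distance between the microphones (assumed known)."""
    x_left = x_left - np.mean(x_left)
    x_right = x_right - np.mean(x_right)
    corr = np.correlate(x_left, x_right, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(x_right) - 1)   # lag in samples
    delay = abs(lag) / fs                                # seconds
    return mic_spacing_m / delay if delay > 0 else float("nan")

# Synthetic check: a click delayed by 25 samples at 48 kHz over 0.178 m
fs, d = 48000, 0.178
click = np.zeros(4096); click[1000] = 1.0
delayed = np.zeros(4096); delayed[1025] = 1.0
print(sound_speed_from_stereo(click, delayed, fs, d))   # ~342 m/s
```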

  20. 75 FR 39915 - Marine Mammals; File No. 15483

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... whales adjust their bearing to avoid received sound pressure levels greater than 120 dB, which would... marine mammals may be taken by Level B harassment as researchers attempt to provoke an avoidance response through sound transmission into their environment. The sound source consists of a transmitter and...

  1. Characterisation of structure-borne sound source using reception plate method.

    PubMed

    Putra, A; Saari, N F; Bakri, H; Ramlan, R; Dan, R M

    2013-01-01

    A laboratory-based experimental procedure of the reception plate method for structure-borne sound source characterisation is reported in this paper. The method assumes that the input power from the source installed on the plate is equal to the power dissipated by the plate. In this experiment, rectangular plates having high and low mobility relative to that of the source were used as the reception plates, and a small electric fan motor acted as the structure-borne source. The data representing the source characteristics, namely the free velocity and the source mobility, were obtained and compared with those from direct measurement. The assumptions and constraints involved in employing this method are discussed.
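    The power balance underlying the method can be written directly: the structure-borne power injected by the source is taken equal to the power dissipated by the reception plate, P = ω·η·M·⟨v²⟩, evaluated from the plate's loss factor, mass, and spatially averaged mean-square velocity. A minimal sketch with illustrative numbers (not the authors' measurement chain):

```python
import numpy as np

def dissipated_power(freq_hz, loss_factor, plate_mass_kg, v_rms_points):
    """Reception-plate power balance: the structure-borne power injected by
    the source is taken equal to the power dissipated by the plate,
        P = omega * eta * M * <v^2>,
    with <v^2> the spatial average of the mean-square velocity measured at
    several points on the plate. All input values are illustrative."""
    omega = 2 * np.pi * freq_hz
    v2_mean = np.mean(np.asarray(v_rms_points) ** 2)   # spatial average
    return omega * loss_factor * plate_mass_kg * v2_mean

# Example: 1 kHz band, loss factor 0.05 (e.g., from decay measurements),
# 12 kg plate, RMS velocities at four accelerometer positions in m/s.
print(dissipated_power(1000.0, 0.05, 12.0, [1.2e-3, 0.8e-3, 1.0e-3, 0.9e-3]))
```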

  2. Complete data listings for CSEM soundings on Kilauea Volcano, Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kauahikaua, J.; Jackson, D.B.; Zablocki, C.J.

    1983-01-01

    This document contains complete data from a controlled-source electromagnetic (CSEM) sounding/mapping project at Kilauea volcano, Hawaii. The data were obtained at 46 locations about a fixed-location, horizontal, polygonal loop source in the summit area of the volcano. The data consist of magnetic field amplitudes and phases at excitation frequencies between 0.04 and 8 Hz. The vector components were measured in a cylindrical coordinate system centered on the loop source. 5 references.

  3. The Physiological Basis of Chinese Höömii Generation.

    PubMed

    Li, Gelin; Hou, Qian

    2017-01-01

    The study aimed to investigate the physiological basis of vibration mode of sound source of a variety of Mongolian höömii forms of singing in China. The participant is a Mongolian höömii performing artist who was recommended by the Chinese Medical Association of Art. He used three types of höömii, namely vibration höömii, whistle höömii, and overtone höömii, which were compared with general comfortable pronunciation of /i:/ as control. Phonation was observed during /i:/. A laryngostroboscope (Storz) was used to determine vibration source-mucosal wave in the throat. For vibration höömii, bilateral ventricular folds approximated to the midline and made contact at the midline during pronunciation. Ventricular and vocal folds oscillated together as a single unit to form a composite vibration (double oscillator) sound source. For whistle höömii, ventricular folds approximated to the midline to cover part of vocal folds, but did not contact each other. It did not produce mucosal wave. The vocal folds produced mucosal wave to form a single vibration sound source. For overtone höömii, the anterior two-thirds of ventricular folds touched each other during pronunciation. The last one-third produced the mucosal wave. The vocal folds produced mucosal wave at the same time, which was a composite vibration (double oscillator) sound source mode. The Höömii form of singing, including mixed voices and multivoice, was related to the presence of dual vibration sound sources. Its high overtone form of singing (whistle höömii) was related to stenosis at the resonance chambers' initiation site (ventricular folds level). Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  4. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2010-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment was developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons with the model showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand; this is in contrast to earlier studies in which Reference Sound Sources (RSS) with known sound power levels were used. Comparisons of the modeling results with measurements in the mockup again showed excellent agreement. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between the ECLSS wall and the mockup wall. The effects of sealing the gap and adding sound-absorptive treatment to the ECLSS wall were also modeled and validated.
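    As a back-of-the-envelope companion to the reverberation-environment modeling described above (not the SEA model itself), the classical diffuse-field relation links a source's sound power level to the reverberant sound pressure level through the room constant, Lp = Lw + 10·log10(4/R) with R = S·ᾱ/(1−ᾱ); the surface area and absorption values below are illustrative.

```python
import numpy as np

def diffuse_field_spl(Lw_dB, surface_area_m2, mean_absorption):
    """Estimate the reverberant-field SPL in an enclosure from a source's
    sound power level using Lp = Lw + 10*log10(4/R), R = S*a/(1-a).
    Values are illustrative; a full SEA model tracks energy per subsystem."""
    R = surface_area_m2 * mean_absorption / (1.0 - mean_absorption)
    return Lw_dB + 10 * np.log10(4.0 / R)

# Adding absorptive wall treatment (raising mean absorption 0.05 -> 0.30)
bare = diffuse_field_spl(85.0, 40.0, 0.05)
treated = diffuse_field_spl(85.0, 40.0, 0.30)
print(bare - treated)   # ~9 dB reduction of the reverberant field
```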

  5. Modeling the utility of binaural cues for underwater sound localization.

    PubMed

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
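    One simple way to picture the timing-cue analysis is a cross-correlation estimate of the inter-hydrophone time difference followed by conversion to azimuth with a two-receiver far-field model, θ = arcsin(c·τ/d). This is a generic sketch under assumed geometry and sound speed, not the neural-network analysis used in the paper.

```python
import numpy as np

def azimuth_from_itd(left, right, fs, spacing_m, c=1500.0):
    """Estimate source azimuth from a hydrophone pair: cross-correlate the
    two channels, convert the lag to a time difference tau, then use
    theta = arcsin(c * tau / d). c = 1500 m/s is a nominal underwater sound
    speed; spacing_m is the receiver separation (assumed known)."""
    corr = np.correlate(left - left.mean(), right - right.mean(), "full")
    lag = np.argmax(corr) - (len(right) - 1)
    tau = lag / fs
    arg = np.clip(c * tau / spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(arg))

# Synthetic check: a 1 kHz tone burst arriving ~0.104 ms earlier on the left
fs = 96000
t = np.arange(0, 0.02, 1 / fs)
sig = np.sin(2 * np.pi * 1000 * t) * np.hanning(t.size)
left = np.concatenate([sig, np.zeros(20)])
right = np.concatenate([np.zeros(10), sig, np.zeros(10)])
print(azimuth_from_itd(left, right, fs, spacing_m=0.5))
# prints ~ -18 deg: the source lies toward the left receiver under this sign convention
```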

  6. The meaning of city noises: Investigating sound quality in Paris (France)

    NASA Astrophysics Data System (ADS)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie

    2004-05-01

    The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and using recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of the physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This points to the need to consider both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.

  7. Sound Levels and Risk Perceptions of Music Students During Classes.

    PubMed

    Rodrigues, Matilde A; Amorim, Marta; Silva, Manuela V; Neves, Paula; Sousa, Aida; Inácio, Octávio

    2015-01-01

    It is well recognized that professional musicians are at risk of hearing damage due to the exposure to high sound pressure levels during music playing. However, it is important to recognize that the musicians' exposure may start early in the course of their training as students in the classroom and at home. Studies regarding sound exposure of music students and their hearing disorders are scarce and do not take into account important influencing variables. Therefore, this study aimed to describe sound level exposures of music students at different music styles, classes, and according to the instrument played. Further, this investigation attempted to analyze the perceptions of students in relation to exposure to loud music and consequent health risks, as well as to characterize preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is potentiated by practice outside the school and other external activities. Differences were found between music style, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks related to exposure to high sound pressure levels. These findings reflect the importance of starting intervention in relation to noise risk reduction at an early stage, when musicians are commencing their activity as students.

  8. Acoustic positioning for space processing experiments

    NASA Technical Reports Server (NTRS)

    Whymark, R. R.

    1974-01-01

    An acoustic positioning system is described that is adaptable to a range of processing chambers and furnace systems. Operation at temperatures exceeding 1000 C is demonstrated in experiments involving the levitation of liquid and solid glass materials up to several ounces in weight. The system consists of a single source of sound that is beamed at a reflecting surface placed a distance away. Stable levitation is achieved at a succession of discrete energy minima contained throughout the volume between the reflector and the sound source. Several specimens can be handled at one time. Metal discs up to 3 inches in diameter can be levitated, as can solid spheres of dense material up to 0.75 inch in diameter; liquids can be freely suspended in 1 g in the form of near-spherical droplets up to 0.25 inch in diameter or flattened liquid discs up to 0.6 inch in diameter. Larger specimens may be handled by increasing the size of the sound source or by reducing the sound frequency.
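    As a rough guide to where such energy minima lie, an idealized plane standing wave between the source and a rigid reflector has pressure nodes spaced half a wavelength apart, the first about a quarter wavelength from the reflector. The sketch below lists those positions for a given frequency and gap; the real field of a finite source and reflector will deviate from this idealization.

```python
def levitation_node_positions(freq_hz, gap_m, c=343.0):
    """Approximate pressure-node positions (candidate levitation sites) for a
    plane standing wave between a sound source and a rigid reflector a
    distance gap_m away. Positions are measured from the reflector, which
    carries a pressure antinode; nodes repeat every half wavelength."""
    lam = c / freq_hz
    positions = []
    x = lam / 4.0
    while x < gap_m:
        positions.append(x)
        x += lam / 2.0
    return positions

# A 20 kHz source facing a reflector 10 cm away gives nodes every ~8.6 mm.
print(levitation_node_positions(20e3, 0.10))
```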

  9. Sound source localization on an axial fan at different operating points

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes

    2016-08-01

    A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
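    The motion compensation can be pictured as a per-sample angular interpolation of the stationary ring's signals onto virtual microphones that co-rotate with the fan. The linear interpolation below is a simplified stand-in for the interpolation actually used, and the array layout and fan speed are illustrative.

```python
import numpy as np

def virtual_rotating_array(signals, rpm, fs):
    """Map signals from a stationary ring of microphones onto virtual
    microphones co-rotating with the fan, using per-sample linear angular
    interpolation. The physical mics are assumed equally spaced at angles
    2*pi*i/n_mics; 'signals' has shape (n_mics, n_samples)."""
    n_mics, n_samples = signals.shape
    omega = 2.0 * np.pi * rpm / 60.0              # fan angular speed (rad/s)
    t = np.arange(n_samples) / fs
    dphi = 2.0 * np.pi / n_mics
    cols = np.arange(n_samples)
    out = np.empty_like(signals, dtype=float)
    for m in range(n_mics):
        ang = np.mod(m * dphi + omega * t, 2.0 * np.pi)   # virtual mic angle
        idx = ang / dphi
        i0 = np.floor(idx).astype(int) % n_mics           # neighbouring mics
        i1 = (i0 + 1) % n_mics
        w = idx - np.floor(idx)                           # interpolation weight
        out[m] = (1.0 - w) * signals[i0, cols] + w * signals[i1, cols]
    return out

# Example: 64 mics, 3 s at 48 kHz, fan at 1500 rpm (synthetic noise here).
sig = np.random.default_rng(1).standard_normal((64, 3 * 48000))
virt = virtual_rotating_array(sig, rpm=1500, fs=48000)
```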

  10. Righting elicited by novel or familiar auditory or vestibular stimulation in the haloperidol-treated rat: rat posturography as a model to study anticipatory motor control.

    PubMed

    Clark, Callie A M; Sacrey, Lori-Ann R; Whishaw, Ian Q

    2009-09-15

    External cues, including familiar music, can release Parkinson's disease patients from catalepsy, but the neural basis of the effect is not well understood. In the present study, posturography, the study of posture and its allied reflexes, was used to develop an animal model that could be used to investigate the underlying neural mechanisms of this sound-induced behavioral activation. In the rat, akinetic catalepsy induced by a dopamine D2 receptor antagonist (haloperidol, 5 mg/kg) can model human catalepsy. Using this model, two experiments examined whether novel versus familiar sound stimuli could interrupt haloperidol-induced catalepsy in the rat. Rats were placed on a variably inclined grid and novel or familiar auditory cues (single key jingle or multiple key jingles) were presented. The dependent variable was movement by the rats to regain equilibrium, as assessed with a movement notation score. The sound cues enhanced movements used to regain postural stability, and familiar sound stimuli were more effective than unfamiliar sound stimuli. The results are discussed in relation to the idea that nonlemniscal and lemniscal auditory pathways differentially contribute to behavioral activation versus tonotopic processing of sound.

  11. Study of the Acoustic Effects of Hydrokinetic Tidal Turbines in Admiralty Inlet, Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brian Polagye; Jim Thomson; Chris Bassett

    2012-03-30

    Hydrokinetic turbines will be a source of noise in the marine environment - both during operation and during installation/removal. High intensity sound can cause injury or behavioral changes in marine mammals and may also affect fish and invertebrates. These noise effects are, however, highly dependent on the individual marine animals; the intensity, frequency, and duration of the sound; and context in which the sound is received. In other words, production of sound is a necessary, but not sufficient, condition for an environmental impact. At a workshop on the environmental effects of tidal energy development, experts identified sound produced by turbines as an area of potentially significant impact, but also high uncertainty. The overall objectives of this project are to improve our understanding of the potential acoustic effects of tidal turbines by: (1) Characterizing sources of existing underwater noise; (2) Assessing the effectiveness of monitoring technologies to characterize underwater noise and marine mammal responsiveness to noise; (3) Evaluating the sound profile of an operating tidal turbine; and (4) Studying the effect of turbine sound on surrogate species in a laboratory environment. This study focuses on a specific case study for tidal energy development in Admiralty Inlet, Puget Sound, Washington (USA), but the methodologies and results are applicable to other turbine technologies and geographic locations. The project succeeded in achieving the above objectives and, in doing so, substantially contributed to the body of knowledge around the acoustic effects of tidal energy development in several ways: (1) Through collection of data from Admiralty Inlet, established the sources of sound generated by strong currents (mobilizations of sediment and gravel) and determined that low-frequency sound recorded during periods of strong currents is non-propagating pseudo-sound. This helped to advance the debate within the marine and hydrokinetics acoustic community as to whether strong currents produce propagating sound. (2) Analyzed data collected from a tidal turbine operating at the European Marine Energy Center to develop a profile of turbine sound and developed a framework to evaluate the acoustic effects of deploying similar devices in other locations. This framework has been applied to Public Utility District No. 1 of Snohomish County's demonstration project in Admiralty Inlet to inform post-installation acoustic and marine mammal monitoring plans. (3) Demonstrated passive acoustic techniques to characterize the ambient noise environment at tidal energy sites (fixed, long-term observations recommended) and characterize the sound from anthropogenic sources (drifting, short-term observations recommended). (4) Demonstrated the utility and limitations of instrumentation, including bottom mounted instrumentation packages, infrared cameras, and vessel monitoring systems. In doing so, also demonstrated how this type of comprehensive information is needed to interpret observations from each instrument (e.g., hydrophone data can be combined with vessel tracking data to evaluate the contribution of vessel sound to ambient noise). (5) Conducted a study that suggests harbor porpoise in Admiralty Inlet may be habituated to high levels of ambient noise due to omnipresent vessel traffic. The inability to detect behavioral changes associated with a high intensity source of opportunity (passenger ferry) has informed the approach for post-installation marine mammal monitoring. (6) Conducted laboratory exposure experiments of juvenile Chinook salmon and showed that exposure to a worse than worst case acoustic dose of turbine sound does not result in changes to hearing thresholds or biologically significant tissue damage. Collectively, this means that Chinook salmon may be at a relatively low risk of injury from sound produced by tidal turbines located in or near their migration path. In achieving these accomplishments, the project has significantly advanced the District's goals of developing a demonstration-scale tidal energy project in Admiralty Inlet. Pilot demonstrations of this type are an essential step in the development of commercial-scale tidal energy in the United States. This is a renewable resource capable of producing electricity in a highly predictable manner.

  12. Concerns of the Institute of Transport Study and Research for reducing the sound level inside completely repaired buses. [noise and vibration control

    NASA Technical Reports Server (NTRS)

    Groza, A.; Calciu, J.; Nicola, I.; Ionasek, A.

    1974-01-01

    Sound level measurements on noise sources on buses are used to observe the effects of attenuating acoustic pressure levels inside the bus by sound-proofing during complete repair. A spectral analysis of the sound level as a function of motor speed, bus speed along the road, and the category of the road is reported.

  13. A sound budget for the southeastern Bering Sea: measuring wind, rainfall, shipping, and other sources of underwater sound.

    PubMed

    Nystuen, Jeffrey A; Moore, Sue E; Stabeno, Phyllis J

    2010-07-01

    Ambient sound in the ocean contains quantifiable information about the marine environment. A passive aquatic listener (PAL) was deployed at a long-term mooring site in the southeastern Bering Sea from 27 April through 28 September 2004. This was a chain mooring with lots of clanking. However, the sampling strategy of the PAL filtered through this noise and allowed the background sound field to be quantified for natural signals. Distinctive signals include the sound from wind, drizzle and rain. These sources dominate the sound budget and their intensity can be used to quantify wind speed and rainfall rate. The wind speed measurement has an accuracy of ±0.4 m s⁻¹ when compared to a buoy-mounted anemometer. The rainfall rate measurement is consistent with a land-based measurement in the Aleutian chain at Cold Bay, AK (170 km south of the mooring location). Other identifiable sounds include ships and short transient tones. The PAL was designed to reject transients in the range important for quantification of wind speed and rainfall, but serendipitously recorded peaks in the sound spectrum between 200 Hz and 3 kHz. Some of these tones are consistent with whale calls, but most are apparently associated with mooring self-noise.

  14. Apparatus for measuring surface movement of an object that is subjected to external vibrations

    DOEpatents

    Kotidis, Petros A.; Woodroffe, Jaime A.; Rostler, Peter S.

    1997-01-01

    A system for non-destructively measuring an object and controlling industrial processes in response to the measurement is disclosed in which an impulse laser generates a plurality of sound waves over timed increments in an object. A polarizing interferometer is used to measure surface movement of the object caused by the sound waves and sensed by phase shifts in the signal beam. A photon multiplier senses the phase shift and develops an electrical signal. A signal conditioning arrangement modifies the electrical signals to generate an average signal correlated to the sound waves which in turn is correlated to a physical or metallurgical property of the object, such as temperature, which property may then be used to control the process. External, random vibrations of the workpiece are utilized to develop discernible signals which can be sensed in the interferometer by only one photon multiplier. In addition the interferometer includes an arrangement for optimizing its sensitivity so that movement attributed to various waves can be detected in opaque objects. The interferometer also includes a mechanism for sensing objects with rough surfaces which produce speckle light patterns. Finally the interferometer per se, with the addition of a second photon multiplier is capable of accurately recording beam length distance differences with only one reading.

  15. Method and apparatus for measuring surface movement of a solid object that is subjected to external vibrations

    DOEpatents

    Schultz, Thomas J.; Kotidis, Petros A.; Woodroffe, Jaime A.; Rostler, Peter S.

    1995-01-01

    A system for non-destructively measuring an object and controlling industrial processes in response to the measurement is disclosed in which an impulse laser generates a plurality of sound waves over timed increments in an object. A polarizing interferometer is used to measure surface movement of the object caused by the sound waves and sensed by phase shifts in the signal beam. A photon multiplier senses the phase shift and develops an electrical signal. A signal conditioning arrangement modifies the electrical signals to generate an average signal correlated to the sound waves which in turn is correlated to a physical or metallurgical property of the object, such as temperature, which property may then be used to control the process. External, random vibrations of the workpiece are utilized to develop discernible signals which can be sensed in the interferometer by only one photon multiplier. In addition the interferometer includes an arrangement for optimizing its sensitivity so that movement attributed to various waves can be detected in opaque objects. The interferometer also includes a mechanism for sensing objects with rough surfaces which produce speckle light patterns. Finally the interferometer per se, with the addition of a second photon multiplier is capable of accurately recording beam length distance differences with only one reading.

  16. Method and apparatus for measuring surface movement of a solid object that is subjected to external vibrations

    DOEpatents

    Schultz, T.J.; Kotidis, P.A.; Woodroffe, J.A.; Rostler, P.S.

    1995-04-25

    A system for non-destructively measuring an object and controlling industrial processes in response to the measurement is disclosed in which an impulse laser generates a plurality of sound waves over timed increments in an object. A polarizing interferometer is used to measure surface movement of the object caused by the sound waves and sensed by phase shifts in the signal beam. A photon multiplier senses the phase shift and develops an electrical signal. A signal conditioning arrangement modifies the electrical signals to generate an average signal correlated to the sound waves which in turn is correlated to a physical or metallurgical property of the object, such as temperature, which property may then be used to control the process. External, random vibrations of the workpiece are utilized to develop discernible signals which can be sensed in the interferometer by only one photon multiplier. In addition the interferometer includes an arrangement for optimizing its sensitivity so that movement attributed to various waves can be detected in opaque objects. The interferometer also includes a mechanism for sensing objects with rough surfaces which produce speckle light patterns. Finally the interferometer per se, with the addition of a second photon multiplier is capable of accurately recording beam length distance differences with only one reading. 38 figs.

  17. Apparatus for measuring surface movement of an object that is subjected to external vibrations

    DOEpatents

    Kotidis, P.A.; Woodroffe, J.A.; Rostler, P.S.

    1997-04-22

    A system for non-destructively measuring an object and controlling industrial processes in response to the measurement is disclosed in which an impulse laser generates a plurality of sound waves over timed increments in an object. A polarizing interferometer is used to measure surface movement of the object caused by the sound waves and sensed by phase shifts in the signal beam. A photon multiplier senses the phase shift and develops an electrical signal. A signal conditioning arrangement modifies the electrical signals to generate an average signal correlated to the sound waves which in turn is correlated to a physical or metallurgical property of the object, such as temperature, which property may then be used to control the process. External, random vibrations of the workpiece are utilized to develop discernible signals which can be sensed in the interferometer by only one photon multiplier. In addition the interferometer includes an arrangement for optimizing its sensitivity so that movement attributed to various waves can be detected in opaque objects. The interferometer also includes a mechanism for sensing objects with rough surfaces which produce speckle light patterns. Finally the interferometer per se, with the addition of a second photon multiplier is capable of accurately recording beam length distance differences with only one reading. 38 figs.

  18. Functional morphology of the sound-generating labia in the syrinx of two songbird species.

    PubMed

    Riede, Tobias; Goller, Franz

    2010-01-01

    In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function.

  19. Functional morphology of the sound-generating labia in the syrinx of two songbird species

    PubMed Central

    Riede, Tobias; Goller, Franz

    2010-01-01

    In songbirds, two sound sources inside the syrinx are used to produce the primary sound. Laterally positioned labia are passively set into vibration, thus interrupting a passing air stream. Together with subsyringeal pressure, the size and tension of the labia determine the spectral characteristics of the primary sound. Very little is known about how the histological composition and morphology of the labia affect their function as sound generators. Here we related the size and microstructure of the labia to their acoustic function in two songbird species with different acoustic characteristics, the white-crowned sparrow and zebra finch. Histological serial sections of the syrinx and different staining techniques were used to identify collagen, elastin and hyaluronan as extracellular matrix components. The distribution and orientation of elastic fibers indicated that the labia in white-crowned sparrows are multi-layered structures, whereas they are more uniformly structured in the zebra finch. Collagen and hyaluronan were evenly distributed in both species. A multi-layered composition could give rise to complex viscoelastic properties of each sound source. We also measured labia size. Variability was found along the dorso-ventral axis in both species. Lateral asymmetry was identified in some individuals but not consistently at the species level. Different size between the left and right sound sources could provide a morphological basis for the acoustic specialization of each sound generator, but only in some individuals. The inconsistency of its presence requires the investigation of alternative explanations, e.g. differences in viscoelastic properties of the labia of the left and right syrinx. Furthermore, we identified attachments of syringeal muscles to the labia as well as to bronchial half rings and suggest a mechanism for their biomechanical function. PMID:19900184

  20. Theory of acoustic design of opera house and a design proposal

    NASA Astrophysics Data System (ADS)

    Ando, Yoichi

    2004-05-01

    First, the theory of subjective preference for sound fields based on a model of the auditory-brain system is briefly described. It consists of temporal factors and spatial factors associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub). Their preferred values are related to the minimum value of the effective duration of the running autocorrelation function of the source signal, (τe)min. The spatial criteria are the binaural listening level (LL) and the IACC, which may be extracted from the interaural crosscorrelation function. In an opera house there are two different kinds of sound sources: the vocal source on the stage, with relatively short values of (τe)min, and the orchestral music in the pit, with long values of (τe)min. For these sources, a design proposal is made here.

  1. Diversity in sound pressure levels and estimated active space of resident killer whale vocalizations.

    PubMed

    Miller, Patrick J O

    2006-05-01

    Signal source intensity and detection range, which integrates source intensity with propagation loss, background noise and receiver hearing abilities, are important characteristics of communication signals. Apparent source levels were calculated for 819 pulsed calls and 24 whistles produced by free-ranging resident killer whales by triangulating the angles-of-arrival of sounds on two beamforming arrays towed in series. Levels in the 1-20 kHz band ranged from 131 to 168 dB re 1 microPa at 1 m, with differences in the means of different sound classes (whistles: 140.2+/-4.1 dB; variable calls: 146.6+/-6.6 dB; stereotyped calls: 152.6+/-5.9 dB), and among stereotyped call types. Repertoire diversity carried through to estimates of active space, with "long-range" stereotyped calls all containing overlapping, independently-modulated high-frequency components (mean estimated active space of 10-16 km in sea state zero) and "short-range" sounds (5-9 km) included all stereotyped calls without a high-frequency component, whistles, and variable calls. Short-range sounds are reported to be more common during social and resting behaviors, while long-range stereotyped calls predominate in dispersed travel and foraging behaviors. These results suggest that variability in sound pressure levels may reflect diverse social and ecological functions of the acoustic repertoire of killer whales.

  2. Sounds and source levels from bowhead whales off Pt. Barrow, Alaska.

    PubMed

    Cummings, W C; Holliday, D V

    1987-09-01

    Sounds were recorded from bowhead whales migrating past Pt. Barrow, AK, to the Canadian Beaufort Sea. They mainly consisted of various low-frequency (25- to 900-Hz) moans and well-defined sound sequences organized into "song" (20-5000 Hz) recorded with our 2.46-km hydrophone array suspended from the ice. Songs were composed of up to 20 repeated phrases (mean, 10) which lasted up to 146 s (mean, 66.3). Several bowhead whales often were within acoustic range of the array at once, but usually only one sang at a time. Vocalizations exhibited diurnal peaks of occurrence (0600-0800, 1600-1800 h). Sounds which were located in the horizontal plane had peak source spectrum levels as follows--44 moans: 129-178 dB re: 1 microPa, 1 m (median, 159); 3 garglelike utterances: 152, 155, and 169 dB; 33 songs: 158-189 dB (median, 177), all presumably from different whales. Based on ambient noise levels, measured total propagation loss, and whale sound source levels, our detection of whale sounds was theoretically noise-limited beyond 2.5 km (moans) and beyond 10.7 km (songs), a model supported by actual localizations. This study showed that over much of the shallow Arctic and sub-Arctic waters, underwater communications of the bowhead whale would be limited to much shorter ranges than for other large whales in lower latitude, deep-water regions.
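    The noise-limited detection ranges mentioned above come from balancing source level against transmission loss and ambient noise. A minimal sketch of that balance, assuming a simple n·log10(r) spreading law plus linear absorption; the spreading exponent, absorption coefficient, and noise level below are illustrative placeholders, not the study's measured values.

```python
import numpy as np

def detection_range(source_level_db, noise_level_db, spreading_exp=15.0,
                    absorption_db_per_km=0.05, r_max_km=100.0):
    """Find the range at which the received level drops to the ambient noise
    level, i.e. SL - TL(r) = NL with TL = n*log10(r_m) + a*r_km.
    Spreading exponent and absorption are illustrative assumptions."""
    r_km = np.linspace(0.01, r_max_km, 100000)
    tl = spreading_exp * np.log10(r_km * 1000.0) + absorption_db_per_km * r_km
    received = source_level_db - tl
    below = np.nonzero(received <= noise_level_db)[0]
    return r_km[below[0]] if below.size else np.inf

# Illustrative only: actual ranges depend on the measured propagation loss
# and band-limited noise levels used in the study.
print(detection_range(159.0, 105.0))   # moan-like source level
print(detection_range(177.0, 105.0))   # song-like source level
```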

  3. Sound field separation with sound pressure and particle velocity measurements.

    PubMed

    Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-12-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
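    A toy numerical illustration of the equivalent-source idea (not the paper's exact formulation or weighting): outgoing and incoming fields are modeled by two layers of free-field monopoles on opposite sides of a single measurement layer, pressure and scaled particle-velocity rows are stacked into one system, and a truncated-SVD least-squares solve separates the two contributions. The geometry, frequency, and the rho*c velocity scaling are assumptions made for the sketch.

```python
import numpy as np

rho, c, f = 1.21, 343.0, 500.0
k = 2 * np.pi * f / c

def greens_p(src, mic):
    """Free-field monopole pressure (unit strength, e^{+j omega t} convention)."""
    r = np.linalg.norm(mic - src)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def greens_uz(src, mic):
    """z-component of the particle velocity of the same monopole."""
    d = mic - src
    r = np.linalg.norm(d)
    p = np.exp(-1j * k * r) / (4 * np.pi * r)
    return p / (rho * c) * (1 + 1 / (1j * k * r)) * (d[2] / r)

# Single measurement layer at z = 0 holding pressure-velocity probes (8 x 8)
xs = np.linspace(-0.35, 0.35, 8)
mics = np.array([[x, y, 0.0] for x in xs for y in xs])

# Equivalent-source layers: one behind the array for the outgoing field,
# one in front of it for the incoming (disturbing) field
eq_out = np.array([[x, y, -0.2] for x in xs for y in xs])
eq_in = np.array([[x, y, +0.2] for x in xs for y in xs])

def transfer(eq, mics, fn):
    return np.array([[fn(s, m) for s in eq] for m in mics])

Hp = np.hstack([transfer(eq_out, mics, greens_p), transfer(eq_in, mics, greens_p)])
Hu = np.hstack([transfer(eq_out, mics, greens_uz), transfer(eq_in, mics, greens_uz)])

# Simulated measurement: a true source behind the array, a disturbance in front
true_src, dist_src = np.array([0.05, 0.0, -0.3]), np.array([0.0, 0.1, 0.4])
p_meas = np.array([greens_p(true_src, m) + greens_p(dist_src, m) for m in mics])
u_meas = np.array([greens_uz(true_src, m) + greens_uz(dist_src, m) for m in mics])

# Stack pressure and velocity rows; scaling velocity by rho*c puts both
# quantities on a comparable footing (a crude stand-in for the paper's weighting)
A = np.vstack([Hp, rho * c * Hu])
b = np.concatenate([p_meas, rho * c * u_meas])
q, *_ = np.linalg.lstsq(A, b, rcond=1e-3)        # rcond acts as a regularizer

# Outgoing (source-side) pressure alone, reconstructed on the measurement layer
p_outgoing = transfer(eq_out, mics, greens_p) @ q[:len(eq_out)]
```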

  4. Neural population encoding and decoding of sound source location across sound level in the rabbit inferior colliculus

    PubMed Central

    Delgutte, Bertrand

    2015-01-01

    At lower levels of sensory processing, the representation of a stimulus feature in the response of a neural population can vary in complex ways across different stimulus intensities, potentially changing the amount of feature-relevant information in the response. How higher-level neural circuits could implement feature decoding computations that compensate for these intensity-dependent variations remains unclear. Here we focused on neurons in the inferior colliculus (IC) of unanesthetized rabbits, whose firing rates are sensitive to both the azimuthal position of a sound source and its sound level. We found that the azimuth tuning curves of an IC neuron at different sound levels tend to be linear transformations of each other. These transformations could either increase or decrease the mutual information between source azimuth and spike count with increasing level for individual neurons, yet population azimuthal information remained constant across the absolute sound levels tested (35, 50, and 65 dB SPL), as inferred from the performance of a maximum-likelihood neural population decoder. We harnessed evidence of level-dependent linear transformations to reduce the number of free parameters in the creation of an accurate cross-level population decoder of azimuth. Interestingly, this decoder predicts monotonic azimuth tuning curves, broadly sensitive to contralateral azimuths, in neurons at higher levels in the auditory pathway. PMID:26490292
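    A maximum-likelihood population decoder of the general kind described can be sketched in a few lines: assume each neuron's expected spike count follows a (hypothetical) sigmoidal azimuth tuning curve, assume independent Poisson spiking, and pick the candidate azimuth that maximizes the summed log-likelihood of the observed counts. All tuning-curve parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
azimuths = np.linspace(-90, 90, 37)                 # candidate azimuths (deg)

# Hypothetical sigmoidal rate-vs-azimuth tuning curves for 50 "IC" neurons,
# each with its own slope, midpoint, and dynamic range (spikes per trial).
n_neurons = 50
midpoints = rng.uniform(-60, 60, n_neurons)
slopes = rng.uniform(0.03, 0.12, n_neurons) * rng.choice([-1, 1], n_neurons)
peak = rng.uniform(5, 30, n_neurons)

def expected_counts(az):
    """Expected spike counts of all neurons for a source at azimuth az."""
    return 0.5 + peak / (1.0 + np.exp(-slopes * (az - midpoints)))

def ml_decode(counts):
    """Maximum-likelihood azimuth under independent Poisson spiking:
    argmax_az sum_i [ n_i * log(lambda_i(az)) - lambda_i(az) ]."""
    loglik = [np.sum(counts * np.log(expected_counts(az)) - expected_counts(az))
              for az in azimuths]
    return azimuths[int(np.argmax(loglik))]

# Simulate one trial at +30 degrees and decode it
true_az = 30.0
counts = rng.poisson(expected_counts(true_az))
print(ml_decode(counts))
```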

  5. Active noise control using a steerable parametric array loudspeaker.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2010-06-01

    Active noise control can suppress the sound at designated control points, while the sound pressure away from those locations is likely to increase. The reason is clear: a control source normally radiates sound omnidirectionally. To cope with this problem, this paper introduces a parametric array loudspeaker (PAL), which produces a spatially focused sound beam owing to the ultrasound used as the carrier wave, thereby allowing one to suppress the sound pressure at a designated point without causing spillover over the whole sound field. First, the fundamental characteristics of the PAL are reviewed. The scattered pressure in the near field contributed by the source strength of the PAL is then described, which is needed for the design of an active noise control system. Furthermore, the optimal control law for minimizing the sound pressure at the control points is derived, and the control effect is investigated analytically and experimentally. With a view to tracking a moving target point, a steerable PAL based upon a phased array scheme is presented, with the result that a moving zone of quiet can be generated without mechanically rotating the PAL. An experiment is finally conducted, demonstrating the validity of the proposed method.
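    In the usual multichannel frequency-domain formulation, an optimal control law for minimizing the sound pressure at control points is a least-squares solution: with primary pressures p at the control points and a secondary-path matrix G mapping control source strengths q to those points, q_opt = -(G^H G)^{-1} G^H p, often with a small regularization term. The sketch below uses made-up transfer functions, not the PAL model of the paper.

```python
import numpy as np

def optimal_control_sources(G, p, reg=1e-6):
    """Least-squares optimal secondary source strengths minimizing the sum of
    |p + G q|^2 over the control points:
        q_opt = -(G^H G + reg*I)^{-1} G^H p
    where reg is a small Tikhonov term limiting control effort."""
    GhG = G.conj().T @ G
    return -np.linalg.solve(GhG + reg * np.eye(GhG.shape[0]), G.conj().T @ p)

# Toy example: 3 control points, 2 control sources, complex pressures at
# one frequency (values are arbitrary, for illustration only).
G = np.array([[0.8 + 0.1j, 0.2 - 0.3j],
              [0.5 - 0.2j, 0.6 + 0.4j],
              [0.1 + 0.3j, 0.9 - 0.1j]])
p = np.array([1.0 + 0.0j, 0.7 - 0.2j, 0.4 + 0.5j])

q = optimal_control_sources(G, p)
residual = p + G @ q
print(np.linalg.norm(residual) / np.linalg.norm(p))   # residual pressure ratio
```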

  6. A mechanism study of sound wave-trapping barriers.

    PubMed

    Yang, Cheng; Pan, Jie; Cheng, Li

    2013-09-01

    The performance of a sound barrier is usually degraded if a large reflecting surface is placed on the source side. A wave-trapping barrier (WTB), with its inner surface covered by wedge-shaped structures, has been proposed to confine waves within the area between the barrier and the reflecting surface, and thus improve the performance. In this paper, the deterioration in performance of a conventional sound barrier due to the reflecting surface is first explained in terms of the resonance effect of the trapped modes. At each resonance frequency, a strong and mode-controlled sound field is generated by the noise source both within and in the vicinity outside the region bounded by the sound barrier and the reflecting surface. It is found that the peak sound pressures in the barrier's shadow zone, which correspond to the minimum values of the barrier's insertion loss, are largely determined by the resonance frequencies and by the shapes and losses of the trapped modes. These peak pressures are usually accompanied by a strong sound intensity component impinging normally on the barrier surface near the top. The WTB can alter the sound wave diffraction at the top of the barrier if the wavelengths of the sound wave are comparable to or smaller than the dimensions of the wedge. In this case, the modified barrier profile is capable of re-organizing the pressure distribution within the bounded domain and altering the acoustic properties near the top of the sound barrier.

  7. Improvements to the internal and external antenna H(-) ion sources at the Spallation Neutron Source.

    PubMed

    Welton, R F; Dudnikov, V G; Han, B X; Murray, S N; Pennisi, T R; Pillar, C; Santana, M; Stockli, M P; Turvey, M W

    2014-02-01

    The Spallation Neutron Source (SNS), a large scale neutron production facility, routinely operates with 30-40 mA peak current in the linac. Recent measurements have shown that our RF-driven, internal-antenna, Cs-enhanced, multi-cusp ion source injects ∼55 mA of H(-) beam current (∼1 ms, 60 Hz) at 65 kV into a Radio Frequency Quadrupole (RFQ) accelerator through a closely coupled electrostatic Low-Energy Beam Transport system. Over the last several years, a decrease in RFQ transmission and issues with internal antennas have stimulated source development at the SNS for both the internal and external antenna ion sources. This report discusses progress in improving internal-antenna reliability, H(-) yield improvements resulting from modifications to the outlet aperture assembly (applicable to both internal and external antenna sources), and studies of the long-standing problem of beam persistence with the external antenna source. The current status of the external antenna ion source will also be presented.

  8. A capital idea. Bonds and nontraditional financing options.

    PubMed

    Wareham, Therese L

    2004-05-01

    Not-for-profit healthcare organizations have four basic sources of capital: internal sources, philanthropy, asset sales, and external sources. External sources, in particular, offer a wealth of options that are important for such organizations--especially those facing significant capital shortfalls--to consider. External sources include bond offerings and nontraditional offerings, such as receivables financing, off-balance-sheet options, real estate investment trusts, and subordinated securities.

  9. Surface acoustical intensity measurements on a diesel engine

    NASA Technical Reports Server (NTRS)

    Mcgary, M. C.; Crocker, M. J.

    1980-01-01

    The use of surface intensity measurements as an alternative to the conventional selective-wrapping technique for noise source identification and ranking on diesel engines was investigated. A six-cylinder, in-line, turbocharged, 350-horsepower diesel engine was used. Sound power was measured under anechoic conditions for eight separate parts of the engine at steady-state operating conditions using the conventional technique. Sound power measurements were then repeated on five separate parts of the engine using the surface intensity method at the same steady-state operating conditions. The results of the two methods were compared by plotting sound power level against frequency and by comparing the resulting noise source rankings.
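    As a rough illustration of how a surface intensity estimate can be formed (conventions and scaling vary, and this is not necessarily the processing used in the study): the normal intensity spectrum is obtained from the imaginary part of the cross-spectrum between a near-surface microphone and a surface accelerometer, and band sound powers follow by integrating over frequency and multiplying by the patch area.

```python
import numpy as np
from scipy.signal import csd

def surface_intensity_spectrum(pressure, accel, fs, nperseg=4096):
    """Estimate the normal surface intensity spectrum from a microphone close
    to the surface and a surface accelerometer, using
        I(f) ~ -Im{ S_pa(f) } / (2*pi*f),
    where S_pa is the cross-spectrum of pressure and acceleration. The sign
    and scaling depend on the Fourier and spectral conventions assumed here."""
    f, S_pa = csd(pressure, accel, fs=fs, nperseg=nperseg)
    f, S_pa = f[1:], S_pa[1:]                    # drop the DC bin
    return f, -np.imag(S_pa) / (2 * np.pi * f)

def band_sound_power(f, intensity, area_m2, f_lo, f_hi):
    """Integrate the intensity over a frequency band and multiply by the patch
    area to estimate the band sound power radiated by that part of the engine."""
    sel = (f >= f_lo) & (f < f_hi)
    return np.trapz(intensity[sel], f[sel]) * area_m2
```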

  10. Building Acoustics

    NASA Astrophysics Data System (ADS)

    Cowan, James

    This chapter summarizes and explains key concepts of building acoustics. These issues include the behavior of sound waves in rooms, the most commonly used rating systems for sound and sound control in buildings, the most common noise sources found in buildings, practical noise control methods for these sources, and the specific topic of office acoustics. Common noise issues for multi-dwelling units can be derived from most of the sections of this chapter. Books can be and have been written on each of these topics, so the purpose of this chapter is to summarize this information and provide appropriate resources for further exploration of each topic.

  11. External Photoevaporation of the Solar Nebula. II. Effects on Disk Structure and Evolution with Non-uniform Turbulent Viscosity due to the Magnetorotational Instability

    NASA Astrophysics Data System (ADS)

    Kalyaan, A.; Desch, S. J.; Monga, N.

    2015-12-01

    The structure and evolution of protoplanetary disks, especially the radial flows of gas through them, are sensitive to a number of factors. One that has been considered only occasionally in the literature is external photoevaporation by far-ultraviolet (FUV) radiation from nearby, massive stars, despite the fact that nearly half of disks will experience photoevaporation. Another effect apparently not considered in the literature is a spatially and temporally varying value of α in the disk (where the turbulent viscosity ν is α times the sound speed C times the disk scale height H). Here we use the formulation of Bai & Stone to relate α to the ionization fraction in the disk, assuming turbulent transport of angular momentum is due to the magnetorotational instability. We calculate the ionization fraction of the disk gas under various assumptions about ionization sources and dust grain properties. Disk evolution is most sensitive to the surface area of dust. We find that typically α ≲ 10⁻⁵ in the inner disk (<2 AU), rising to ~10⁻¹ beyond 20 AU. This drastically alters the structure of the disk and the flow of mass through it: while the outer disk rapidly viscously spreads, the inner disk hardly evolves; this leads to a steep surface density profile (Σ ∝ r^(−⟨p⟩) with ⟨p⟩ ≈ 2-5 in the 5-30 AU region) that is made steeper by external photoevaporation. We also find that the combination of variable α and external photoevaporation eventually causes gas as close as 3 AU, previously accreting inward, to be drawn outward to the photoevaporated outer edge of the disk. These effects have drastic consequences for planet formation and volatile transport in protoplanetary disks.

  12. Peer Review Report for the Draft EPA Handbook on the Benefits, Costs and Impacts of Land Cleanup and Reuse (2011)

    EPA Pesticide Factsheets

    This external review has several objectives including to assess whether the literature summarized is comprehensive and accurate as of 2010, and whether the Handbook’s original portions are sound and useful.

  13. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE PAGES

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    2017-02-04

    The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplondinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, findings highlight the importance for future research to utilize accurate localization systems, different species, validated sound transmission distances, and to consider different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  14. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp.], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound as well as trends in location over time with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the importance of future research utilizing accurate localization systems, different species, and validated sound transmission distances, and of considering different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  15. On Identifying the Sound Sources in a Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Goldstein, M. E.

    2008-01-01

    A space-time filtering approach is used to divide an unbounded turbulent flow into its radiating and non-radiating components. The result is then used to clarify a number of issues including the possibility of identifying the sources of the sound in such flows. It is also used to investigate the efficacy of some of the more recent computational approaches.

  16. The sound field of a rotating dipole in a plug flow.

    PubMed

    Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H

    2018-04-01

    An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.

  17. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broadband, near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements, so it can be used with arbitrary planar array geometries. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy, and its performance improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is used to locate sound sources. Together, these two algorithms constitute the robust sound source localization approach. The resulting, more accurate steering vectors can also be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
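
    The W2D-MUSIC step builds on the standard MUSIC idea of scanning a steering vector against the noise subspace of the array covariance matrix. The sketch below is a plain narrowband, far-field MUSIC direction-of-arrival example, not the paper's broadband, near-field W2D-MUSIC with error calibration; the array geometry, frequency, and noise level are assumptions.

```python
import numpy as np

# Minimal narrowband far-field MUSIC sketch (not the paper's W2D-MUSIC); geometry,
# frequency, and signals are illustrative only.

rng = np.random.default_rng(0)
c, f = 343.0, 2000.0                         # speed of sound (m/s), tone frequency (Hz)
mics = np.array([[0.0, 0.0], [0.05, 0.0], [0.10, 0.0], [0.15, 0.0]])  # 4-element line array
n_snap, true_doa = 200, np.deg2rad(40.0)

def steering(theta):
    delays = (mics[:, 0] * np.cos(theta) + mics[:, 1] * np.sin(theta)) / c
    return np.exp(-2j * np.pi * f * delays)

# Simulated snapshots: one source plus white noise.
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(steering(true_doa), s) + 0.1 * (rng.standard_normal((4, n_snap))
                                             + 1j * rng.standard_normal((4, n_snap)))
R = X @ X.conj().T / n_snap
w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
En = V[:, :-1]                               # noise subspace (one source assumed)

thetas = np.deg2rad(np.arange(0.0, 181.0))
p = [1.0 / np.real(steering(t).conj() @ En @ En.conj().T @ steering(t)) for t in thetas]
print("estimated DOA:", np.rad2deg(thetas[int(np.argmax(p))]), "deg")
```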

  18. Active Exhaust Silencing System for the Management of Auxiliary Power Unit Sound Signatures

    DTIC Science & Technology

    2014-08-01

    conceptual mass-less pistons are introduced into the system before and after the injection site, such that they will move exactly with the plane wave... either the primary source or the injected source. It is assumed that the pistons are 'close... source, it causes both pistons to move identically. The pressures induced by the flow on the pistons do not affect the flow generated by the...

  19. The rotary subwoofer: a controllable infrasound source.

    PubMed

    Park, Joseph; Garcés, Milton; Thigpen, Bruce

    2009-04-01

    The rotary subwoofer is a novel acoustic transducer capable of projecting infrasonic signals at high sound pressure levels. The projector produces higher acoustic particle velocities than conventional transducers, which translate into higher radiated sound pressure levels. This paper characterizes the measured performance of a rotary subwoofer and presents a model to predict its sound pressure levels.

  20. How to study the aetiology of burn injury: the epidemiological approach.

    PubMed

    Bouter, L M; Knipschild, P G; van Rijn, J L; Meertens, R M

    1989-06-01

    Effective prevention of burn injury should be based on sound aetiological knowledge. This article deals with epidemiological methods to study the incidence of burn injury as a function of its risk factors. Central methodological issues are comparability of baseline prognosis, comparability of measurements (of effects in cohort studies and of risk factors in case-control studies), and comparability of external circumstances. These principles are clarified with a number of fictitious examples of risk factors for burn injury. It is explained that in preventive trials comparability may be achieved by randomization, blinding and placebo intervention. The main tools in non-experimental studies are deliberate selection and multivariate analysis. Special attention is given to the definition of the source population and to reducing measurement incomparability in case-control studies. Some well-designed case-control studies following these principles might bring effective prevention of burn injury some steps nearer.

  1. Methodology to improve design of accelerated life tests in civil engineering projects.

    PubMed

    Lin, Jing; Yuan, Yongbo; Zhou, Jilai; Gao, Jie

    2014-01-01

    For reliability testing, an Energy Expansion Tree (EET) and a companion Energy Function Model (EFM) are proposed and described in this paper. Different from conventional approaches, the EET provides a more comprehensive and objective way to systematically identify external energy factors affecting reliability. The EFM introduces energy loss into a traditional Function Model to identify internal energy sources affecting reliability. The combination creates a sound way to enumerate the energies to which a system may be exposed during its lifetime. We input these energies into planning an accelerated life test, a Multi Environment Over Stress Test. The test objective is to discover weak links and interactions among the system and the energies to which it is exposed, and design them out. As an example, the methods are applied to a pipe in a subsea pipeline. However, they can be widely used in other civil engineering industries as well. The proposed method is compared with current methods.

  2. Stochastic road excitation and control feasibility in a 2D linear tyre model

    NASA Astrophysics Data System (ADS)

    Rustighi, E.; Elliott, S. J.

    2007-03-01

    For vehicles under normal driving conditions at speeds above 30-40 km/h, the dominant internal and external noise source is the sound generated by the interaction between the tyre and the road. This paper presents a simple model to predict tyre behaviour in the frequency range up to 400 Hz, where the dominant vibration is two dimensional. The tyre is modelled as an elemental system, which permits the analysis of the low-frequency tyre response when excited by distributed stochastic displacements in the contact patch. A linear model has been used to calculate the contact forces from the road roughness and thus calculate the average spectral properties of the resulting radial velocity of the tyre in one step from the spectral properties of the road roughness. Such a model has also been used to provide an estimate of the potential effect of various active control strategies for reducing the tyre vibrations.
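
    For a linear, time-invariant model, the "one step" spectral relation mentioned above is S_out(f) = |H(f)|^2 · S_in(f). The Python sketch below illustrates that relation for a single-input, single-output case; the assumed road-roughness PSD and the single-resonance frequency response are placeholders, not the elemental tyre model of the paper.

```python
import numpy as np

# Illustrative single-input/single-output version of the "one step" spectral relation
# for linear systems: S_vv(f) = |H(f)|^2 * S_rr(f).  The frequency response H(f) below
# is an assumed single-resonance displacement-to-velocity form, not the paper's model.

f = np.linspace(1.0, 400.0, 400)                  # frequency axis, Hz (paper's range of interest)
S_rr = 1e-6 / (1.0 + (f / 10.0) ** 2)             # assumed road-roughness displacement PSD

fn, zeta = 90.0, 0.05                             # assumed resonance frequency and damping ratio
H = (2j * np.pi * f) / (1.0 - (f / fn) ** 2 + 2j * zeta * f / fn)

S_vv = np.abs(H) ** 2 * S_rr                      # radial-velocity PSD of the tyre
print("peak response near %.0f Hz" % f[np.argmax(S_vv)])
```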

  3. Physics of thermo-acoustic sound generation

    NASA Astrophysics Data System (ADS)

    Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.

    2013-09-01

    We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.

  4. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  5. Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.

    PubMed

    Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin

    2018-04-25

    Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, by using acoustic vector and infrared (IR) scanning sensors. Our system is comprised of three steps to estimate a ball motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using a 2D microelectromechanical system (MEMS) microphones and delay-and-sum beamforming is presented to estimate the firing position. The time and position of a ball in 3D space is determined from a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
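
    Delay-and-sum beamforming steers the array by time-aligning the channels for each candidate direction and picking the direction that maximizes the summed energy. The Python sketch below is a generic far-field, azimuth-only illustration of that step; the microphone layout, sample rate, and synthetic impulse are assumptions, and it omits the MEMS hardware, classification, and IR stages of the paper's system.

```python
import numpy as np

# Minimal delay-and-sum sketch for estimating an impulsive source's azimuth with a small
# microphone array; geometry, sample rate, and the impulse itself are illustrative only.

fs, c = 48000, 343.0
mics = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2], [0.2, 0.2]])   # 4 mics (m)
true_az = np.deg2rad(25.0)

# Synthesize channel signals with azimuth-dependent relative delays (far-field plane wave).
t = np.arange(0, 0.02, 1.0 / fs)
impulse = np.exp(-((t - 0.005) ** 2) / (2 * 0.0002 ** 2))
delays = (mics @ np.array([np.cos(true_az), np.sin(true_az)])) / c
x = np.stack([np.interp(t - d, t, impulse, left=0.0, right=0.0) for d in delays])

def das_power(az):
    d = (mics @ np.array([np.cos(az), np.sin(az)])) / c
    aligned = [np.interp(t + di, t, xi, left=0.0, right=0.0) for di, xi in zip(d, x)]
    return np.sum(np.sum(aligned, axis=0) ** 2)

azimuths = np.deg2rad(np.arange(0.0, 360.0, 1.0))
best = azimuths[int(np.argmax([das_power(a) for a in azimuths]))]
print("estimated azimuth: %.0f deg" % np.rad2deg(best))
```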

  6. Turbine Sound May Influence the Metamorphosis Behaviour of Estuarine Crab Megalopae

    PubMed Central

    Pine, Matthew K.; Jeffs, Andrew G.; Radford, Craig A.

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21–31% compared to silent control treatments, 38–47% compared to tidal turbine sound treatments, and 46–60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment. PMID:23240063

  7. Turbine sound may influence the metamorphosis behaviour of estuarine crab megalopae.

    PubMed

    Pine, Matthew K; Jeffs, Andrew G; Radford, Craig A

    2012-01-01

    It is now widely accepted that a shift towards renewable energy production is needed in order to avoid further anthropogenically induced climate change. The ocean provides a largely untapped source of renewable energy. As a result, harvesting electrical power from the wind and tides has sparked immense government and commercial interest but with relatively little detailed understanding of the potential environmental impacts. This study investigated how the sound emitted from an underwater tidal turbine and an offshore wind turbine would influence the settlement and metamorphosis of the pelagic larvae of estuarine brachyuran crabs which are ubiquitous in most coastal habitats. In a laboratory experiment the median time to metamorphosis (TTM) for the megalopae of the crabs Austrohelice crassa and Hemigrapsus crenulatus was significantly increased by at least 18 h when exposed to either tidal turbine or sea-based wind turbine sound, compared to silent control treatments. Contrastingly, when either species were subjected to natural habitat sound, observed median TTM decreased by approximately 21-31% compared to silent control treatments, 38-47% compared to tidal turbine sound treatments, and 46-60% compared to wind turbine sound treatments. A lack of difference in median TTM in A. crassa between two different source levels of tidal turbine sound suggests the frequency composition of turbine sound is more relevant in explaining such responses rather than sound intensity. These results show that estuarine mudflat sound mediates natural metamorphosis behaviour in two common species of estuarine crabs, and that exposure to continuous turbine sound interferes with this natural process. These results raise concerns about the potential ecological impacts of sound generated by renewable energy generation systems placed in the nearshore environment.

  8. Neural plasticity associated with recently versus often heard objects.

    PubMed

    Bourquin, Nathalie M-P; Spierer, Lucas; Murray, Micah M; Clarke, Stephanie

    2012-09-01

    In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and more long-lasting plastic changes occurring predominantly within right hemispheric networks, which are known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Numerical Modelling of the Sound Fields in Urban Streets with Diffusely Reflecting Boundaries

    NASA Astrophysics Data System (ADS)

    KANG, J.

    2002-12-01

    A radiosity-based theoretical/computer model has been developed to study the fundamental characteristics of the sound fields in urban streets resulting from diffusely reflecting boundaries, and to investigate the effectiveness of architectural changes and urban design options on noise reduction. Comparison between the theoretical prediction and the measurement in a scale model of an urban street shows very good agreement. Computations using the model in hypothetical rectangular streets demonstrate that though the boundaries are diffusely reflective, the sound attenuation along the length is significant, typically at 20-30 dB/100 m. The sound distribution in a cross-section is generally even unless the cross-section is very close to the source. In terms of the effectiveness of architectural changes and urban design options, it has been shown that over 2-4 dB extra attenuation can be obtained either by increasing boundary absorption evenly or by adding absorbent patches on the façades or the ground. Reducing building height has a similar effect. A gap between buildings can provide about 2-3 dB extra sound attenuation, especially in the vicinity of the gap. The effectiveness of air absorption on increasing sound attenuation along the length could be 3-9 dB at high frequencies. If a treatment is effective with a single source, it is also effective with multiple sources. In addition, it has been demonstrated that if the façades in a street are diffusely reflective, the sound field of the street does not change significantly whether the ground is diffusely or geometrically reflective.

  10. The Effects of Different External Carbon Sources on Nitrous Oxide Emissions during Denitrification in Biological Nutrient Removal Processes

    NASA Astrophysics Data System (ADS)

    Hu, Xiang; Zhang, Jing; Hou, Hongxun

    2018-01-01

    The aim of this study was to investigate the effects of two different external carbon sources (acetate and ethanol) on nitrous oxide (N2O) emissions during denitrification in biological nutrient removal processes. Results showed that the external carbon source significantly influenced N2O emissions during the denitrification process. When acetate served as the external carbon source, 0.49 mg N/L and 0.85 mg N/L of N2O was produced during the denitrification processes in the anoxic and anaerobic/anoxic experiments, giving ratios of N2O-N production to TN removal of 2.37% and 4.96%, respectively. Compared with acetate, the amount of N2O produced was negligible when ethanol was used as the external carbon source. This suggests that, from the point of view of N2O emissions, ethanol is a potential alternative to acetate as an external carbon source.

  11. A Forensically Sound Adversary Model for Mobile Devices.

    PubMed

    Do, Quang; Martini, Ben; Choo, Kim-Kwang Raymond

    2015-01-01

    In this paper, we propose an adversary model to facilitate forensic investigations of mobile devices (e.g. Android, iOS and Windows smartphones) that can be readily adapted to the latest mobile device technologies. This is essential given the ongoing and rapidly changing nature of mobile device technologies. An integral principle and significant constraint upon forensic practitioners is that of forensic soundness. Our adversary model specifically considers and integrates the constraints of forensic soundness on the adversary, in our case, a forensic practitioner. One construction of the adversary model is an evidence collection and analysis methodology for Android devices. Using the methodology with six popular cloud apps, we were successful in extracting various information of forensic interest in both the external and internal storage of the mobile device.

  12. A Forensically Sound Adversary Model for Mobile Devices

    PubMed Central

    Choo, Kim-Kwang Raymond

    2015-01-01

    In this paper, we propose an adversary model to facilitate forensic investigations of mobile devices (e.g. Android, iOS and Windows smartphones) that can be readily adapted to the latest mobile device technologies. This is essential given the ongoing and rapidly changing nature of mobile device technologies. An integral principle and significant constraint upon forensic practitioners is that of forensic soundness. Our adversary model specifically considers and integrates the constraints of forensic soundness on the adversary, in our case, a forensic practitioner. One construction of the adversary model is an evidence collection and analysis methodology for Android devices. Using the methodology with six popular cloud apps, we were successful in extracting various information of forensic interest in both the external and internal storage of the mobile device. PMID:26393812

  13. Mass entrainment and turbulence-driven acceleration of ultra-high energy cosmic rays in Centaurus A

    NASA Astrophysics Data System (ADS)

    Wykes, Sarka; Croston, Judith H.; Hardcastle, Martin J.; Eilek, Jean A.; Biermann, Peter L.; Achterberg, Abraham; Bray, Justin D.; Lazarian, Alex; Haverkorn, Marijke; Protheroe, Ray J.; Bromberg, Omer

    2013-10-01

    Observations of the FR I radio galaxy Centaurus A in radio, X-ray, and gamma-ray bands provide evidence for lepton acceleration up to several TeV and clues about hadron acceleration to tens of EeV. Synthesising the available observational constraints on the physical conditions and particle content in the jets, inner lobes and giant lobes of Centaurus A, we aim to evaluate its feasibility as an ultra-high-energy cosmic-ray source. We apply several methods of determining jet power and affirm the consistency of various power estimates of ~1 × 10^43 erg s^-1. Employing scaling relations based on previous results for 3C 31, we estimate particle number densities in the jets, encompassing available radio through X-ray observations. Our model is compatible with the jets ingesting ~3 × 10^21 g s^-1 of matter via external entrainment from hot gas and ~7 × 10^22 g s^-1 via internal entrainment from jet-contained stars. This leads to an imbalance between the internal lobe pressure available from radiating particles and magnetic field, and our derived external pressure. Based on knowledge of the external environments of other FR I sources, we estimate the thermal pressure in the giant lobes as 1.5 × 10^-12 dyn cm^-2, from which we deduce a lower limit to the temperature of ~1.6 × 10^8 K. Using dynamical and buoyancy arguments, we infer ~440-645 Myr and ~560 Myr as the sound-crossing and buoyancy ages of the giant lobes respectively, inconsistent with their spectral ages. We re-investigate the feasibility of particle acceleration via stochastic processes in the lobes, placing new constraints on the energetics and on turbulent input to the lobes. The same "very hot" temperatures that allow self-consistency between the entrainment calculations and the missing pressure also allow stochastic UHECR acceleration models to work.
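
    A sound-crossing age of the kind quoted above is just t = L / c_s for an assumed characteristic length L and a thermal sound speed set by the gas temperature. The Python sketch below evaluates that back-of-envelope relation using the abstract's temperature lower limit; the length scale and mean molecular weight are assumptions, and the result is not intended to reproduce the authors' 440-645 Myr figure, which depends on their adopted geometry and medium.

```python
import numpy as np

# Back-of-envelope sound-crossing time t = L / c_s for hot ionized gas, using the
# abstract's temperature lower limit (~1.6e8 K).  L and mu are assumptions; this does
# not reproduce the paper's estimate.

k_B, m_H = 1.380649e-16, 1.6726e-24        # erg/K, g (cgs)
gamma, mu = 5.0 / 3.0, 0.6                 # monatomic gas, assumed ionized-plasma mean weight

T = 1.6e8                                  # K, lower limit quoted in the abstract
c_s = np.sqrt(gamma * k_B * T / (mu * m_H))   # sound speed, cm/s

L_kpc = 100.0                              # assumed characteristic lobe scale, kpc
L_cm = L_kpc * 3.086e21
t_myr = (L_cm / c_s) / 3.156e13            # seconds -> Myr

print("sound speed ~ %.0f km/s, crossing time over %g kpc ~ %.0f Myr"
      % (c_s / 1e5, L_kpc, t_myr))
```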

  14. The Distressed Brain: A Group Blind Source Separation Analysis on Tinnitus

    PubMed Central

    De Ridder, Dirk; Vanneste, Sven; Congedo, Marco

    2011-01-01

    Background Tinnitus, the perception of a sound without an external sound source, can lead to variable amounts of distress. Methodology In a group of tinnitus patients with variable amounts of tinnitus related distress, as measured by the Tinnitus Questionnaire (TQ), an electroencephalography (EEG) is performed, evaluating the patients' resting state electrical brain activity. This resting state electrical activity is compared with a control group and between patients with low (N = 30) and high distress (N = 25). The groups are homogeneous for tinnitus type, tinnitus duration or tinnitus laterality. A group blind source separation (BSS) analysis is performed using a large normative sample (N = 84), generating seven normative components to which high and low tinnitus patients are compared. A correlation analysis of the obtained normative components' relative power and distress is performed. Furthermore, the functional connectivity as reflected by lagged phase synchronization is analyzed between the brain areas defined by the components. Finally, a group BSS analysis on the Tinnitus group as a whole is performed. Conclusions Tinnitus can be characterized by at least four BSS components, two of which are posterior cingulate based, one based on the subgenual anterior cingulate and one based on the parahippocampus. Only the subgenual component correlates with distress. When performed on a normative sample, group BSS reveals that distress is characterized by two anterior cingulate based components. Spectral analysis of these components demonstrates that distress in tinnitus is related to alpha and beta changes in a network consisting of the subgenual anterior cingulate cortex extending to the pregenual and dorsal anterior cingulate cortex as well as the ventromedial prefrontal cortex/orbitofrontal cortex, insula, and parahippocampus. This network overlaps partially with brain areas implicated in distress in patients suffering from pain, functional somatic syndromes and posttraumatic stress disorder, and might therefore represent a specific distress network. PMID:21998628

  15. Impact of external sources of infection on the dynamics of bovine tuberculosis in modelled badger populations.

    PubMed

    Hardstaff, Joanne L; Bulling, Mark T; Marion, Glenn; Hutchings, Michael R; White, Piran C L

    2012-06-27

    The persistence of bovine TB (bTB) in various countries throughout the world is enhanced by the existence of wildlife hosts for the infection. In Britain and Ireland, the principal wildlife host for bTB is the badger (Meles meles). The objective of our study was to examine the dynamics of bTB in badgers in relation to both badger-derived infection from within the population and externally-derived, trickle-type, infection, such as could occur from other species or environmental sources, using a spatial stochastic simulation model. The presence of external sources of infection can increase mean prevalence and reduce the threshold group size for disease persistence. Above the threshold equilibrium group size of 6-8 individuals predicted by the model for bTB persistence in badgers based on internal infection alone, external sources of infection have relatively little impact on the persistence or level of disease. However, within a critical range of group sizes just below this threshold level, external infection becomes much more important in determining disease dynamics. Within this critical range, external infection increases the ratio of intra- to inter-group infections due to the greater probability of external infections entering fully-susceptible groups. The effect is to enable bTB persistence and increase bTB prevalence in badger populations which would not be able to maintain bTB based on internal infection alone. External sources of bTB infection can contribute to the persistence of bTB in badger populations. In high-density badger populations, internal badger-derived infections occur at a sufficient rate that the additional effect of external sources in exacerbating disease is minimal. However, in lower-density populations, external sources of infection are much more important in enhancing bTB prevalence and persistence. In such circumstances, it is particularly important that control strategies to reduce bTB in badgers include efforts to minimise such external sources of infection.
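
    The interplay between within-group transmission and an external "trickle" source can be illustrated with a toy, non-spatial stochastic susceptible-infected model, sketched below in Python. The rates, group size, and monthly time step are arbitrary assumptions; this does not reproduce the paper's spatial simulation model or its parameterization.

```python
import numpy as np

# Toy, non-spatial stochastic SI model with within-group transmission, infected-
# individual turnover, and an external "trickle" infection hazard.  All numbers are
# illustrative assumptions, not the paper's badger model.

rng = np.random.default_rng(1)

def simulate(group_size=6, beta=0.01, turnover=0.05, external=0.002,
             years=100, steps_per_year=12, infected0=1):
    infected, history = infected0, []
    for _ in range(years * steps_per_year):
        susceptible = group_size - infected
        p_inf = 1.0 - np.exp(-(beta * infected + external))   # internal + external hazard
        new_cases = rng.binomial(susceptible, p_inf)
        removed = rng.binomial(infected, turnover)             # infected die, replaced by susceptibles
        infected = infected + new_cases - removed
        history.append(infected)
    return np.array(history)

for ext in (0.0, 0.002):
    prevalence = simulate(external=ext).mean() / 6
    print("external hazard %.3f -> mean prevalence %.2f" % (ext, prevalence))
```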

  16. Impact of external sources of infection on the dynamics of bovine tuberculosis in modelled badger populations

    PubMed Central

    2012-01-01

    Background The persistence of bovine TB (bTB) in various countries throughout the world is enhanced by the existence of wildlife hosts for the infection. In Britain and Ireland, the principal wildlife host for bTB is the badger (Meles meles). The objective of our study was to examine the dynamics of bTB in badgers in relation to both badger-derived infection from within the population and externally-derived, trickle-type, infection, such as could occur from other species or environmental sources, using a spatial stochastic simulation model. Results The presence of external sources of infection can increase mean prevalence and reduce the threshold group size for disease persistence. Above the threshold equilibrium group size of 6–8 individuals predicted by the model for bTB persistence in badgers based on internal infection alone, external sources of infection have relatively little impact on the persistence or level of disease. However, within a critical range of group sizes just below this threshold level, external infection becomes much more important in determining disease dynamics. Within this critical range, external infection increases the ratio of intra- to inter-group infections due to the greater probability of external infections entering fully-susceptible groups. The effect is to enable bTB persistence and increase bTB prevalence in badger populations which would not be able to maintain bTB based on internal infection alone. Conclusions External sources of bTB infection can contribute to the persistence of bTB in badger populations. In high-density badger populations, internal badger-derived infections occur at a sufficient rate that the additional effect of external sources in exacerbating disease is minimal. However, in lower-density populations, external sources of infection are much more important in enhancing bTB prevalence and persistence. In such circumstances, it is particularly important that control strategies to reduce bTB in badgers include efforts to minimise such external sources of infection. PMID:22738118

  17. Locating arbitrarily time-dependent sound sources in three dimensional space in real time.

    PubMed

    Wu, Sean F; Zhu, Na

    2010-08-01

    This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling the acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance, and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moves in space, even toward or behind the measurement microphones. Practical limitations of this method are discussed.
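
    As a companion illustration of the triangulation idea, the sketch below estimates a source position from simulated time differences of arrival (TDOA) at four microphones using nonlinear least squares. The geometry, noise level, and starting guess are assumptions; the paper's actual method additionally models point-source radiation in a free field and applies de-noising, which are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch of TDOA localization with four microphones on mutually orthogonal axes,
# solved by nonlinear least squares.  Geometry, noise, and starting guess are assumptions.

c = 343.0
mics = np.array([[0.2, 0, 0], [0, 0.2, 0], [0, 0, 0.2], [0, 0, 0]])   # assumed geometry (m)
src_true = np.array([1.5, -0.8, 0.6])

# Simulated TDOAs relative to the reference microphone (index 3), with a little noise.
toa = np.linalg.norm(mics - src_true, axis=1) / c
tdoa = (toa - toa[3])[:3] + np.random.default_rng(2).normal(0, 2e-6, 3)

def residuals(p):
    d = np.linalg.norm(mics - p, axis=1) / c
    return (d - d[3])[:3] - tdoa

est = least_squares(residuals, x0=np.array([1.0, -1.0, 1.0])).x
print("estimated source position:", np.round(est, 2))
```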

  18. Tinnitus--Current Concepts in Diagnosis and Management.

    ERIC Educational Resources Information Center

    Epstein, Stephen

    1997-01-01

    This article discusses the causes of tinnitus, sound or noise in the ears or head without any external stimulation. Classification of tinnitus, the essentials of medical evaluation of a patient with tinnitus, essential test procedures, and current concepts in the management of tinnitus are addressed. (CR)

  19. Numerical Estimation of Sound Transmission Loss in Launch Vehicle Payload Fairing

    NASA Astrophysics Data System (ADS)

    Chandana, Pawan Kumar; Tiwari, Shashi Bhushan; Vukkadala, Kishore Nath

    2017-08-01

    Coupled acoustic-structural analysis of a typical launch vehicle composite payload fairing is carried out, and the results are validated with experimental data. Depending on the frequency range of interest, prediction of the vibro-acoustic behavior of a structure is usually done using the finite element method, the boundary element method, or statistical energy analysis. The present study focuses on the low-frequency dynamic behavior of a composite payload fairing structure using both coupled and uncoupled vibro-acoustic finite element models up to 710 Hz. A vibro-acoustic model, characterizing the interaction between the fairing structure, air cavity, and satellite, is developed. The external sound pressure levels specified for the payload fairing's acoustic test are considered as external loads for the analysis. The analysis methodology is validated by comparing the interior noise levels with those obtained from full-scale acoustic tests conducted in a reverberation chamber. The present approach has application in the design and optimization of acoustic control mechanisms at lower frequencies.

  20. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae): first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    PubMed Central

    2012-01-01

    Background Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei which is known to display an important sexual dimorphism in swimbladder and anterior skeleton. The aims of this study were to compare the sound producing morphology, and the resulting sounds in juveniles, females and males of O. rochei. Results Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because they possessed dramatic differences in morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Juvenile sounds were weaker but appear to be similar to female sounds. Conclusions Although it is not possible to distinguish externally male from female in O. rochei, they show a sonic apparatus and sounds that are dramatically different. This difference is likely due to their nocturnal habits that may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of the gonads. PMID:23217241

  1. Defining ecospace of Arctic marine food webs using a novel quantitative approach

    NASA Astrophysics Data System (ADS)

    Gale, M.; Loseto, L. L.

    2011-12-01

    The Arctic is currently facing unprecedented developmental, physical, and climatological changes. Food webs within the marine Arctic environment are highly susceptible to anthropogenic stressors and have thus far been understudied. Stable isotopes, in conjunction with a novel set of metrics, may provide a framework that allows us to understand which areas of the Arctic are most vulnerable to change. The objective of this study was to use linear distance metrics applied to stable isotopes to a) define and quantify four Arctic marine food webs in ecospace; b) enable quantifiable comparisons among the four food webs and with other ecosystems; and c) evaluate the vulnerability of the four food webs to anthropogenic stressors such as climate change. The areas studied were Hudson Bay, the Beaufort Sea, Lancaster Sound, and the North Water Polynya. Each region was selected based on the abundance of previous research and of published, available stable isotope data in the peer-reviewed literature. We selected species to cover trophic levels ranging from particulate matter to polar bears, with consideration of pelagic, benthic, and ice-associated energy pathways. We interpret higher diversity in baseline carbon energy as signifying higher stability in food web structure. On this basis, the Beaufort Sea food web had the highest stability; it occupied the largest isotopic niche space and was supported by multiple carbon sources. Areas with top-down control, such as Lancaster Sound and the North Water Polynya, would be the first to experience an increase in trophic redundancy and possible hardship from external stressors, as they have fewer basal carbon sources and greater numbers of mid- to high-level consumers. We conclude that ecosystems with a diverse carbon energy base, such as the Beaufort Sea and Hudson Bay regions, are more resilient to change than top-down controlled systems.

  2. Improved accuracy of ultrasound-guided therapies using electromagnetic tracking: in-vivo speed of sound measurements

    NASA Astrophysics Data System (ADS)

    Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.

    2017-02-01

    The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water, 1540 m/s [1], despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s [2]. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy [3]. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue [4]. Since the radiation dose diminishes with distance from the pellet as 1/r^2, imaging uncertainty of a few millimeters can result in significantly erroneous dose delivery [5,6]. Likewise, modeling of power deposition and thermal dose accumulation from ablative sources is also prone to errors due to placement offsets from SOS errors [7]. This work presents a method of mitigating needle placement error due to SOS variances without the need for ionizing radiation [2,8]. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding the 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
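
    The displacement error quoted above follows directly from the fact that the scanner converts echo time to depth with an assumed speed of sound. The short Python sketch below shows the resulting axial offset for the tissue SOS range cited in the abstract; it is a back-of-envelope rescaling, not the paper's EM-assisted correction algorithm.

```python
# Axial (range) error when a scanner assumes 1540 m/s but the tissue SOS differs.
# Values are illustrative; this is not the paper's EM-corrected reconstruction.

def corrected_depth(displayed_depth_mm, c_true, c_assumed=1540.0):
    """Echo time is fixed, so the true depth scales with the true speed of sound."""
    return displayed_depth_mm * (c_true / c_assumed)

displayed = 50.0                      # mm, depth reported by the scanner
for c_true in (1450.0, 1540.0, 1613.0):
    err = corrected_depth(displayed, c_true) - displayed
    print("c = %.0f m/s -> placement error %+.1f mm at %.0f mm depth" % (c_true, err, displayed))
```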

  3. 200 kHz Commercial Sonar Systems Generate Lower Frequency Side Lobes Audible to Some Marine Mammals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Zhiqun; Southall, Brandon; Carlson, Thomas J.

    2014-04-15

    The spectral properties of pulses transmitted by three commercially available 200 kHz echo sounders were measured to assess the possibility that sound energy below the center (carrier) frequency might be heard by marine mammals. The study found that all three sounders generated sound at frequencies below the center frequency and within the hearing range of some marine mammals, and that this sound was likely detectable by the animals over limited ranges. However, at standard operating source levels for the sounders, the sound below the center frequency was well below potentially harmful levels. It was concluded that the sounds generated by the sounders could affect the behavior of marine mammals within fairly close proximity to the sources, and that the blanket exclusion of echo sounders from environmental impact analysis based solely on the center frequency output in relation to the range of marine mammal hearing should be reconsidered.
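
    One way to see how a nominally 200 kHz pulse carries energy at much lower frequencies is to look at the spectrum of a finite, gated carrier: the finite pulse length spreads energy into side lobes below the carrier. The Python sketch below illustrates this with an assumed rectangular 200 µs pulse; it is not a measurement of the sounders in the study.

```python
import numpy as np

# Spectrum of a rectangularly gated 200 kHz pulse.  Pulse length, sample rate, and the
# rectangular envelope are assumptions, not measurements from the study.

fs = 2_000_000                       # sample rate, Hz
dur = 200e-6                         # assumed pulse duration, s
t = np.arange(0, dur, 1 / fs)
pulse = np.sin(2 * np.pi * 200e3 * t)          # gated 200 kHz carrier

n_fft = 1 << 16
spec = np.abs(np.fft.rfft(pulse, n_fft))
freqs = np.fft.rfftfreq(n_fft, 1 / fs)
spec_db = 20 * np.log10(spec / spec.max() + 1e-12)

for f_probe in (20e3, 50e3, 100e3, 200e3):     # frequencies within some marine mammals' hearing
    i = int(np.argmin(np.abs(freqs - f_probe)))
    print("level at %3.0f kHz: %6.1f dB re carrier peak" % (f_probe / 1e3, spec_db[i]))
```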

  4. Blue whales respond to simulated mid-frequency military sonar.

    PubMed

    Goldbogen, Jeremy A; Southall, Brandon L; DeRuiter, Stacy L; Calambokidis, John; Friedlaender, Ari S; Hazen, Elliott L; Falcone, Erin A; Schorr, Gregory S; Douglas, Annie; Moretti, David J; Kyburg, Chris; McKenna, Megan F; Tyack, Peter L

    2013-08-22

    Mid-frequency military (1-10 kHz) sonars have been associated with lethal mass strandings of deep-diving toothed whales, but the effects on endangered baleen whale species are virtually unknown. Here, we used controlled exposure experiments with simulated military sonar and other mid-frequency sounds to measure behavioural responses of tagged blue whales (Balaenoptera musculus) in feeding areas within the Southern California Bight. Despite using source levels orders of magnitude below some operational military systems, our results demonstrate that mid-frequency sound can significantly affect blue whale behaviour, especially during deep feeding modes. When a response occurred, behavioural changes varied widely from cessation of deep feeding to increased swimming speed and directed travel away from the sound source. The variability of these behavioural responses was largely influenced by a complex interaction of behavioural state, the type of mid-frequency sound and received sound level. Sonar-induced disruption of feeding and displacement from high-quality prey patches could have significant and previously undocumented impacts on baleen whale foraging ecology, individual fitness and population health.

  5. Acceptance of Tinnitus As an Independent Correlate of Tinnitus Severity.

    PubMed

    Hesser, Hugo; Bånkestad, Ellinor; Andersson, Gerhard

    2015-01-01

    Tinnitus is the experience of sounds without an identified external source, and for some the experience is associated with significant severity (i.e., perceived negative affect, activity limitation, and participation restriction due to tinnitus). Acceptance of tinnitus has recently been proposed to play an important role in explaining heterogeneity in tinnitus severity. The purpose of the present study was to extend previous investigations of acceptance in relation to tinnitus by examining the unique contribution of acceptance in accounting for tinnitus severity, beyond anxiety and depression symptoms. In a cross-sectional study, 362 participants with tinnitus attending an ENT clinic in Sweden completed a standard set of psychometrically examined measures of acceptance of tinnitus, tinnitus severity, and anxiety and depression symptoms. Participants also completed a background form on which they provided information about the experience of tinnitus (loudness, localization, sound characteristics), other auditory-related problems (hearing problems and sound sensitivity), and personal characteristics. Correlational analyses showed that acceptance was strongly and inversely related to tinnitus severity and anxiety and depression symptoms. Multivariate regression analysis, in which relevant patient characteristics were controlled, revealed that acceptance accounted for unique variance beyond anxiety and depression symptoms. Acceptance accounted for more of the variance than anxiety and depression symptoms combined. In addition, mediation analysis revealed that acceptance of tinnitus mediated the direct association between self-rated loudness and tinnitus severity, even after anxiety and depression symptoms were taken into account. Findings add to the growing body of work, supporting the unique and important role of acceptance in tinnitus severity. The utility of the concept is discussed in relation to the development of new psychological models and interventions for tinnitus severity.

  6. 77 FR 42279 - Takes of Marine Mammals Incidental to Specified Activities; Taking Marine Mammals Incidental to a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-18

    ... sound waves emanating from the pile, thereby reducing the sound energy. A confined bubble curtain... physically block sound waves and they prevent air bubbles from migrating away from the pile. The literature... acoustic pressure wave propagates out from a source, was estimated as so-called "practical spreading loss...
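
    "Practical spreading loss" is commonly taken to mean a transmission loss of 15·log10(R), intermediate between spherical (20·log10 R) and cylindrical (10·log10 R) spreading. The Python sketch below applies that rule with an assumed source level; the numbers are illustrative and are not taken from this Federal Register notice.

```python
import math

# Common underwater transmission-loss models used in pile-driving assessments.
# "Practical spreading" is 15*log10(R); source level and ranges here are illustrative.

def received_level(source_level_db, range_m, coeff=15.0):
    """RL = SL - coeff * log10(R), with R in metres (R >= 1)."""
    return source_level_db - coeff * math.log10(max(range_m, 1.0))

SL = 210.0   # assumed source level, dB re 1 uPa at 1 m
for r in (10, 100, 1000):
    print("range %5d m: RL = %.1f dB (practical spreading)" % (r, received_level(SL, r)))
```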

  7. Physics and Psychophysics of High-Fidelity Sound. Part III: The Components of a Sound-Reproducing System: Amplifiers and Loudspeakers.

    ERIC Educational Resources Information Center

    Rossing, Thomas D.

    1980-01-01

    Described are the components for a high-fidelity sound-reproducing system which focuses on various program sources, the amplifier, and loudspeakers. Discussed in detail are amplifier power and distortion, air suspension, loudspeaker baffles and enclosures, bass-reflex enclosure, drone cones, rear horn and acoustic labyrinth enclosures, horn…

  8. Auditory enhancement of increments in spectral amplitude stems from more than one source.

    PubMed

    Carcagno, Samuele; Semal, Catherine; Demany, Laurent

    2012-10-01

    A component of a test sound consisting of simultaneous pure tones perceptually "pops out" if the test sound is preceded by a copy of itself with that component attenuated. Although this "enhancement" effect was initially thought to be purely monaural, it is also observable when the test sound and the precursor sound are presented contralaterally (i.e., to opposite ears). In experiment 1, we assessed the magnitude of ipsilateral and contralateral enhancement as a function of the time interval between the precursor and test sounds (10, 100, or 600 ms). The test sound, randomly transposed in frequency from trial to trial, was followed by a probe tone, either matched or mismatched in frequency to the test sound component which was the target of enhancement. Listeners' ability to discriminate matched probes from mismatched probes was taken as an index of enhancement magnitude. The results showed that enhancement decays more rapidly for ipsilateral than for contralateral precursors, suggesting that ipsilateral enhancement and contralateral enhancement stem from at least partly different sources. It could be hypothesized that, in experiment 1, contralateral precursors were effective only because they provided attentional cues about the target tone frequency. In experiment 2, this hypothesis was tested by presenting the probe tone before the precursor sound rather than after the test sound. Although the probe tone was then serving as a frequency cue, contralateral precursors were again found to produce enhancement. This indicates that contralateral enhancement cannot be explained by cuing alone and is a genuine sensory phenomenon.

  9. Experiments to investigate the acoustic properties of sound propagation

    NASA Astrophysics Data System (ADS)

    Dagdeviren, Omur E.

    2018-07-01

    Propagation of sound waves is one of the fundamental concepts in physics. Some of the properties of sound propagation, such as the attenuation of sound intensity with increasing distance, are familiar to everybody from the experiences of daily life. However, the frequency dependence of sound propagation and the effect of acoustics in confined environments are not straightforward to estimate. In this article, we propose experiments that can be conducted in a classroom environment with commonly available devices, such as smartphones and laptops, to measure the sound intensity level as a function of the distance between the source and the observer and of the frequency of the sound. Our experiments, and the deviations from theoretical calculations, can be used to explain basic concepts of sound propagation and acoustics to a diverse population of students.
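
    A convenient free-field baseline for such measurements is the inverse-square law, under which the sound pressure level drops by 20·log10(r/r_ref) with distance from a point source (about 6 dB per doubling). The Python sketch below computes that prediction for illustrative distances; deviations measured in a real classroom reflect reflections and room acoustics.

```python
import math

# Free-field prediction the classroom measurements can be compared against:
# SPL falls by 20*log10(r/r_ref) with distance from a point source.
# Reference level and distances are illustrative.

def spl_at(r, spl_ref=70.0, r_ref=1.0):
    """Inverse-square-law SPL (dB) at distance r, given SPL at r_ref."""
    return spl_ref - 20.0 * math.log10(r / r_ref)

for r in (1.0, 2.0, 4.0, 8.0):
    print("r = %.0f m -> predicted SPL = %.1f dB" % (r, spl_at(r)))
```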

  10. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q⃗) = 0 into a nominal sound source S(q⃗) and sound propagation operator L such that L(q⃗) = S(q⃗). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S can not be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
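
    For the uniform-base-flow (Lighthill-like) case, the nominal source is essentially the double divergence of the momentum-flux tensor, S ≈ ∂²(ρ u_i u_j)/∂x_i ∂x_j when viscous and entropy terms are neglected. The Python sketch below evaluates that quantity with finite differences on a toy 2D velocity field; the field, grid, and constant density are assumptions, and this is not the Goldstein-framework formulation used in the paper.

```python
import numpy as np

# Sketch of a Lighthill-like nominal source S = d^2(rho*u_i*u_j)/dx_i dx_j on a toy
# 2D velocity field, with viscous and entropy terms neglected.  Illustrative only.

nx, ny = 128, 128
x = np.linspace(0.0, 2 * np.pi, nx)
y = np.linspace(0.0, 2 * np.pi, ny)
X, Y = np.meshgrid(x, y, indexing="ij")
dx, dy = x[1] - x[0], y[1] - y[0]

rho = 1.2                                        # assumed constant density
u = np.sin(X) * np.cos(Y)                        # toy velocity field
v = -np.cos(X) * np.sin(Y)

T = {
    ("x", "x"): rho * u * u, ("x", "y"): rho * u * v,
    ("y", "x"): rho * v * u, ("y", "y"): rho * v * v,
}

def ddx(f, axis):
    """Finite-difference derivative along the named axis."""
    return np.gradient(f, dx if axis == "x" else dy, axis=0 if axis == "x" else 1)

# Double divergence: sum over i, j of d/dx_i d/dx_j T_ij
S = sum(ddx(ddx(T[(i, j)], j), i) for i in ("x", "y") for j in ("x", "y"))
print("nominal source term: min %.2f, max %.2f" % (S.min(), S.max()))
```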

  11. Sound exposure changes European seabass behaviour in a large outdoor floating pen: Effects of temporal structure and a ramp-up procedure.

    PubMed

    Neo, Y Y; Hubert, J; Bolle, L; Winter, H V; Ten Cate, C; Slabbekoorn, H

    2016-07-01

    Underwater sound from human activities may affect fish behaviour negatively and threaten the stability of fish stocks. However, some fundamental understanding is still lacking for adequate impact assessments and potential mitigation strategies. For example, little is known about the potential contribution of the temporal features of sound, the efficacy of ramp-up procedures, and the generalisability of results from indoor studies to the outdoors. Using a semi-natural set-up, we exposed European seabass in an outdoor pen to four treatments: 1) continuous sound, 2) intermittent sound with a regular repetition interval, 3) irregular repetition intervals and 4) a regular repetition interval with amplitude 'ramp-up'. Upon sound exposure, the fish increased swimming speed and depth, and swam away from the sound source. The behavioural readouts were generally consistent with earlier indoor experiments, but the changes and recovery were more variable and were not significantly influenced by sound intermittency and interval regularity. In addition, the 'ramp-up' procedure elicited immediate diving response, similar to the onset of treatment without a 'ramp-up', but the fish did not swim away from the sound source as expected. Our findings suggest that while sound impact studies outdoors increase ecological and behavioural validity, the inherently higher variability also reduces resolution that may be counteracted by increasing sample size or looking into different individual coping styles. Our results also question the efficacy of 'ramp-up' in deterring marine animals, which warrants more investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Sound Waves Induce Neural Differentiation of Human Bone Marrow-Derived Mesenchymal Stem Cells via Ryanodine Receptor-Induced Calcium Release and Pyk2 Activation.

    PubMed

    Choi, Yura; Park, Jeong-Eun; Jeong, Jong Seob; Park, Jung-Keug; Kim, Jongpil; Jeon, Songhee

    2016-10-01

    Mesenchymal stem cells (MSCs) have shown considerable promise as an adaptable cell source for use in tissue engineering and other therapeutic applications. The aims of this study were to develop methods to test the hypothesis that human MSCs could be differentiated using sound wave stimulation alone and to find the underlying mechanism. Human bone marrow (hBM)-MSCs were stimulated with sound waves (1 kHz, 81 dB) for 7 days and the expression of neural markers were analyzed. Sound waves induced neural differentiation of hBM-MSC at 1 kHz and 81 dB but not at 1 kHz and 100 dB. To determine the signaling pathways involved in the neural differentiation of hBM-MSCs by sound wave stimulation, we examined the Pyk2 and CREB phosphorylation. Sound wave induced an increase in the phosphorylation of Pyk2 and CREB at 45 min and 90 min, respectively, in hBM-MSCs. To find out the upstream activator of Pyk2, we examined the intracellular calcium source that was released by sound wave stimulation. When we used ryanodine as a ryanodine receptor antagonist, sound wave-induced calcium release was suppressed. Moreover, pre-treatment with a Pyk2 inhibitor, PF431396, prevented the phosphorylation of Pyk2 and suppressed sound wave-induced neural differentiation in hBM-MSCs. These results suggest that specific sound wave stimulation could be used as a neural differentiation inducer of hBM-MSCs.

  13. Study on acoustical properties of sintered bronze porous material for transient exhaust noise of pneumatic system

    NASA Astrophysics Data System (ADS)

    Li, Jingxiang; Zhao, Shengdun; Ishihara, Kunihiko

    2013-05-01

    A novel approach is presented to study the acoustical properties of sintered bronze material, in particular its use in suppressing the transient noise generated by the pneumatic exhaust of pneumatic friction clutch and brake (PFC/B) systems. The transient exhaust noise is impulsive and harmful because of its large sound pressure level (SPL) and high-frequency content. In this paper, the exhaust noise is related to the transient impulsive exhaust, which is described by a one-dimensional aerodynamic model combined with a pressure-drop expression based on the Ergun equation. A relation between the flow parameters and the sound source is established. Additionally, a piston acoustic source approximation of the sintered bronze silencer with cylindrical geometry is presented to predict the SPL spectrum at a far-field observation point. A semi-phenomenological model is introduced to analyze sound propagation and reduction in the sintered bronze material, which is modeled as an equivalent fluid with a rigid frame. Experimental results under different initial cylinder pressures corroborate the validity of the proposed aerodynamic model. In addition, the sound pressures calculated from the equivalent sound source are compared with the measured noise signals in both the time and frequency domains. The influence of the porosity of the sintered bronze material is also discussed.
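
    The pressure-drop ingredient of the aerodynamic model cited here is the classical Ergun relation for flow through a porous layer. A minimal sketch of that relation alone is given below; the porosity, grain size, and velocities are placeholders chosen for illustration and are not the parameters used in the paper.

    ```python
    # Minimal sketch: pressure drop across a porous layer from the classical
    # Ergun equation (illustrative values only; not the paper's parameters).

    def ergun_pressure_gradient(u, eps, d_p, mu=1.8e-5, rho=1.2):
        """Pressure gradient dP/dL (Pa/m) for superficial velocity u (m/s),
        porosity eps, and effective particle diameter d_p (m)."""
        viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
        inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d_p)
        return viscous + inertial

    if __name__ == "__main__":
        # Hypothetical sintered-bronze-like values: 35% porosity, 0.1 mm grains.
        for u in (1.0, 5.0, 20.0):                  # superficial velocities, m/s
            dpdl = ergun_pressure_gradient(u, eps=0.35, d_p=1e-4)
            print(f"u = {u:5.1f} m/s  ->  dP/dL = {dpdl:10.3e} Pa/m")
    ```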

  14. Evidence of Cnidarians sensitivity to sound after exposure to low frequency noise underwater sources

    NASA Astrophysics Data System (ADS)

    Solé, Marta; Lenoir, Marc; Fortuño, José Manuel; Durfort, Mercè; van der Schaar, Mike; André, Michel

    2016-12-01

    Jellyfishes represent a group of species that play an important role in the oceans, particularly as a food source for different taxa and as predators of fish larvae and planktonic prey. The massive introduction of artificial sound sources into the oceans has become a concern to science and society. While we are only beginning to understand that non-hearing specialists like cephalopods can be affected by anthropogenic noise, and regulation is underway to measure noise levels in European waters, we do not yet know whether the impact of sound extends to other, lower-level taxa of the food web. Here we exposed two species of Mediterranean scyphozoan medusae, Cotylorhiza tuberculata and Rhizostoma pulmo, to a sweep of low-frequency sounds. Scanning electron microscopy (SEM) revealed injuries in the statocyst sensory epithelium of both species after exposure to sound, consistent with the massive acoustic trauma observed in other species. The presence of acoustic trauma in marine species that are not hearing specialists, like medusae, shows the magnitude of the noise-pollution problem and the complexity of determining threshold values that would support regulation to prevent permanent damage to ecosystems.

  15. Estimating the sound speed of a shallow-water marine sediment from the head wave excited by a low-flying helicopter.

    PubMed

    Bevans, Dieter A; Buckingham, Michael J

    2017-10-01

    The frequency bandwidth of the sound from a light helicopter, such as a Robinson R44, extends from about 13 Hz to 2.5 kHz. As such, the R44 has potential as a low-frequency sound source in underwater acoustics applications. To explore this idea, an experiment was conducted in shallow water off the coast of southern California in which a horizontal line of hydrophones detected the sound of an R44 hovering in an end-fire position relative to the array. Some of the helicopter sound interacted with the seabed to excite the head wave in the water column. A theoretical analysis of the sound field in the water column generated by a stationary airborne source leads to an expression for the two-point horizontal coherence function of the head wave, which, apart from frequency, depends only on the sensor separation and the sediment sound speed. By matching the zero crossings of the measured and theoretical horizontal coherence functions, the sound speed in the sediment was recovered and found to be 1682.42 ± 16.20 m/s. This is consistent with the sediment type at the experiment site, which is known from a previous survey to be a fine to very fine sand.

  16. iStethoscope: a demonstration of the use of mobile devices for auscultation.

    PubMed

    Bentley, Peter J

    2015-01-01

    iStethoscope Pro was the first piece of software (an "app") for iOS devices that enabled users to use their smartphones, music players, or tablets as stethoscopes. The software exploits the built-in microphone (and supports external microphones) and performs real-time amplification and filtering to enable heart sounds to be heard with high fidelity. The software also enables heart sounds to be recorded, analyzed using a spectrogram, and transmitted to others via e-mail. This chapter describes the motivation, functionality, and results of this work.
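
    As an illustration of the amplify-and-filter step described above, the following sketch applies a simple band-pass filter and gain to a microphone signal offline. The 20-200 Hz band, the gain, and the synthetic test signal are assumptions for illustration; they are not the filter actually implemented in the app.

    ```python
    # Minimal sketch of amplify-and-band-pass processing for heart sounds.
    # The 20-200 Hz band and the gain are assumptions for illustration only.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def filter_heart_sounds(x, fs, band=(20.0, 200.0), gain=10.0):
        """Band-pass a microphone signal x (sampled at fs Hz) and apply gain."""
        sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
        return gain * sosfiltfilt(sos, x)

    if __name__ == "__main__":
        fs = 8000
        t = np.arange(0, 2.0, 1.0 / fs)
        # Synthetic test signal: a low-frequency "thump" train plus broadband noise.
        x = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
        x = x + 0.5 * np.random.randn(t.size)
        y = filter_heart_sounds(x, fs)
        print("input RMS:", np.sqrt(np.mean(x**2)), " filtered RMS:", np.sqrt(np.mean(y**2)))
    ```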

  17. Linking prenatal experience to the emerging musical mind.

    PubMed

    Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E

    2013-09-03

    The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

  18. Electromagnetic sounding of the moon using Apollo 16 and Lunokhod 2 surface magnetometer observations /preliminary results/

    NASA Technical Reports Server (NTRS)

    Vanian, L. L.; Vnutchokova, T. A.; Fainberg, E. B.; Eroschenko, E. A.; Dyal, P.; Parkin, C. W.; Daily, W. D.

    1977-01-01

    A technique of deep electromagnetic sounding of the moon using simultaneous magnetic-field measurements at two lunar surface sites is described. The method, used with the assumption that deep electrical conductivity is a function only of lunar radius, has the advantage of allowing calculation of the external driving field from two surface-site measurements only and therefore does not require data from a lunar orbiting satellite. A transient-response calculation is presented for the example of a magnetic-field discontinuity, measured simultaneously by Apollo 16 and Lunokhod 2 surface magnetometers.

  19. Electromagnetic Sounding of the Moon Using Apollo 16 and Lunokhod 2 Surface Magnetometer Observations (Preliminary Results)

    NASA Technical Reports Server (NTRS)

    Vanyan, L. L.; Vnutchokova, T. A.; Fainberg, E. B.; Eroschenko, E. A.; Dyal, P.; Parkin, C. W.; Daily, W. D.

    1977-01-01

    A new technique of deep electromagnetic sounding of the Moon using simultaneous magnetic field measurements at two lunar surface sites is described. The method, used with the assumption that deep electrical conductivity is a function only of lunar radius, has the advantage of allowing calculation of the external driving field from two surface site measurements only, and therefore does not require data from a lunar orbiting satellite. A transient response calculation is presented for the example of a magnetic field discontinuity of February 13, 1973, measured simultaneously by Apollo 16 and Lunokhod 2 surface magnetometers.

  20. Modelling sound propagation in the Southern Ocean to estimate the acoustic impact of seismic research surveys on marine mammals

    NASA Astrophysics Data System (ADS)

    Breitzke, Monika; Bohlen, Thomas

    2010-05-01

    Modelling sound propagation in the ocean is an essential tool to assess the potential risk of air-gun shots to marine mammals. Based on a 2.5-D finite-difference code, a full-waveform modelling approach is presented that determines both the sound exposure levels of single shots and the cumulative sound exposure levels of multiple shots fired along a seismic line. Band-limited point-source approximations of compact air-gun clusters deployed by R/V Polarstern in polar regions are used as sound sources. Marine mammals are simulated as static receivers. Applications to deep- and shallow-water models including constant and depth-dependent sound velocity profiles of the Southern Ocean show dipole-like directivities for single shots and tubular cumulative sound exposure level fields beneath the seismic line for multiple shots. Compared to a semi-infinite model, incorporating seafloor reflections enhances the seismically induced noise levels close to the sea surface. Refraction due to sound velocity gradients and sound channelling in near-surface ducts are evident, but affect only low to moderate levels. Hence, the exposure zone radii derived for different hearing thresholds are almost independent of the sound velocity structure. With decreasing thresholds, the radii increase according to a spherical 20 log10 r law for single shots and according to a cylindrical 10 log10 r law for multiple shots. Doubling the shot interval diminishes the cumulative sound exposure levels by 3 dB and halves the radii. The ocean bottom properties only slightly affect the radii in shallow water, provided the normal-incidence reflection coefficient exceeds 0.2.
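
    The scaling quoted in this abstract (a 20 log10 r law for single shots, a 10 log10 r law for cumulative exposure, and a 3 dB reduction when the shot interval is doubled) can be illustrated with a small numerical sketch; the source levels and thresholds below are placeholders, not values from the study.

    ```python
    # Minimal sketch of the exposure-radius scaling quoted in the abstract:
    # spherical spreading (20*log10 r) for single shots, cylindrical spreading
    # (10*log10 r) for cumulative multi-shot exposure. Source levels and
    # thresholds are placeholders, not values from the study.
    import math

    def radius_single_shot(source_level_db, threshold_db):
        """Range at which a single shot falls to the threshold, 20*log10(r) loss."""
        return 10 ** ((source_level_db - threshold_db) / 20.0)

    def radius_cumulative(cum_source_level_db, threshold_db):
        """Range for cumulative exposure, 10*log10(r) loss."""
        return 10 ** ((cum_source_level_db - threshold_db) / 10.0)

    if __name__ == "__main__":
        SL, CSEL = 230.0, 220.0                 # placeholder dB levels
        for thr in (180.0, 174.0, 168.0):
            print(f"threshold {thr:5.1f} dB: single-shot r = {radius_single_shot(SL, thr):9.1f} m,"
                  f"  cumulative r = {radius_cumulative(CSEL, thr):12.1f} m")
        # Doubling the shot interval halves the number of shots in a fixed window,
        # lowering the cumulative level by 10*log10(2) ~= 3 dB, which halves the
        # radius under cylindrical spreading.
        print("10*log10(2) =", round(10 * math.log10(2), 2), "dB")
    ```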

  1. Acoustic investigation of wall jet over a backward-facing step using a microphone phased array

    NASA Astrophysics Data System (ADS)

    Perschke, Raimund F.; Ramachandran, Rakesh C.; Raman, Ganesh

    2015-02-01

    The acoustic properties of a wall jet over a hard-walled backward-facing step of aspect ratios 6, 3, 2, and 1.5 are studied using a 24-channel microphone phased array at Mach numbers up to M = 0.6. The Reynolds number based on inflow velocity and step height ranges from Re_h = 3.0 × 10^4 to 7.2 × 10^5. Flow without and with side walls is considered. The experimental setup is open in the wall-normal direction and the expansion ratio is effectively 1. In the case of flow through a duct, symmetry of the flow in the spanwise direction is lost downstream of separation at all but the largest aspect ratio, as revealed by oil-paint flow visualization. Hydrodynamic scattering of turbulence from the trailing edge of the step contributes significantly to the radiated sound. Reflection of acoustic waves from the bottom plate results in a modulation of the power spectral densities. Acoustic source localization was conducted with the phased array. Convective mean-flow effects on the apparent source origin were assessed by placing a loudspeaker underneath a perforated flat plate and evaluating the displacement of the beamforming peak with inflow Mach number. Two source mechanisms are found near the step. One is due to the interaction of the turbulent wall jet with the convex edge of the step. Free-stream turbulence sound peaks downstream of the step. The presence of the side walls increases free-stream sound. Results of the flow visualization are correlated with the acoustic source maps. Trailing-edge sound and free-stream turbulence sound can be discriminated using source localization.
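
    Source maps of the kind described here are commonly obtained by delay-and-sum beamforming. The sketch below is a generic frequency-domain delay-and-sum example on synthetic data with an assumed 24-microphone line array; it is not the authors' array geometry or processing chain.

    ```python
    # Minimal frequency-domain delay-and-sum beamforming sketch (synthetic data,
    # generic line array; not the authors' 24-channel setup or processing).
    import numpy as np

    c = 343.0                     # speed of sound, m/s
    f = 4000.0                    # analysis frequency, Hz
    k = 2 * np.pi * f / c

    # Hypothetical line array of 24 microphones, 5 cm spacing, at y = 0.
    mics = np.stack([np.linspace(-0.575, 0.575, 24), np.zeros(24)], axis=1)

    # Synthetic monopole source 1 m in front of the array, offset 0.2 m.
    src = np.array([0.2, 1.0])
    r_src = np.linalg.norm(mics - src, axis=1)
    p = np.exp(-1j * k * r_src) / r_src          # "measured" complex pressures

    # Scan a grid of candidate source positions and form the beamformer map.
    xs = np.linspace(-0.5, 0.5, 101)
    ys = np.linspace(0.5, 1.5, 101)
    bf_map = np.zeros((ys.size, xs.size))
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            r = np.linalg.norm(mics - np.array([x, y]), axis=1)
            steer = np.exp(-1j * k * r) / r       # assumed monopole propagation
            w = steer / np.linalg.norm(steer)     # normalized steering vector
            bf_map[iy, ix] = np.abs(np.vdot(w, p)) ** 2

    iy, ix = np.unravel_index(np.argmax(bf_map), bf_map.shape)
    print("beamforming peak at x = %.2f m, y = %.2f m (true: 0.20, 1.00)" % (xs[ix], ys[iy]))
    ```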

  2. Understanding the Doppler effect by analysing spectrograms of the sound of a passing vehicle

    NASA Astrophysics Data System (ADS)

    Lubyako, Dmitry; Martinez-Piedra, Gordon; Ushenin, Arthur; Denvir, Patrick; Dunlop, John; Hall, Alex; Le Roux, Gus; van Someren, Laurence; Weinberger, Harvey

    2017-11-01

    The purpose of this paper is to demonstrate how the Doppler effect can be analysed to deduce information about a moving source of sound waves. Specifically, we find the speed of a car and the distance of its closest approach to an observer using sound recordings from smartphones. A key focus of this paper is how this can be achieved in a classroom, both theoretically and experimentally, to deepen students’ understanding of the Doppler effect. Included are our own experimental data (48 sound recordings) to allow others to reproduce the analysis, if they cannot repeat the whole experiment themselves. In addition to its educational purpose, this paper examines the percentage errors in our results. This enabled us to determine sources of error, allowing those conducting similar future investigations to optimize their accuracy.
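
    For a tone read off the spectrogram before and after the pass, the vehicle speed follows from the standard Doppler relations for a moving source and a stationary observer; a minimal sketch with illustrative frequencies (not the paper's recordings) is shown below.

    ```python
    # Minimal sketch: vehicle speed from the approach/recession frequencies of a
    # single tone read off a spectrogram. Frequencies below are illustrative.

    C_SOUND = 343.0  # speed of sound in air, m/s (approximate, ~20 degrees C)

    def source_speed(f_approach, f_recede, c=C_SOUND):
        """Speed of a source moving past a stationary observer.

        Uses f_a = f0*c/(c - v) and f_r = f0*c/(c + v), which combine to
        v = c*(f_a - f_r)/(f_a + f_r); the emitted frequency f0 cancels out.
        """
        return c * (f_approach - f_recede) / (f_approach + f_recede)

    if __name__ == "__main__":
        f_a, f_r = 830.0, 760.0          # example spectrogram readings, Hz
        v = source_speed(f_a, f_r)
        print(f"estimated speed: {v:.1f} m/s  ({v * 3.6:.1f} km/h)")
    ```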

  3. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive sound localization algorithm based on the interaural time difference (ITD). After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem; an experiment was conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and a large number of false visual events in the background, a correct match was made 75% of the time during the experiment.
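
    A generic way to obtain the interaural time difference (ITD) underlying such an azimuth estimate is cross-correlation of the two ear signals. The sketch below uses synthetic signals and a simple sine-law head model with an assumed 18 cm sensor separation; it does not reproduce the neuromorphic, adaptive pipeline of the paper.

    ```python
    # Minimal sketch: ITD estimation by cross-correlation and conversion to a
    # far-field azimuth. Synthetic white-noise signals and a simple sine-law
    # head model (not the neuromorphic pipeline described in the paper).
    import numpy as np

    fs = 48000
    c = 343.0
    d_ears = 0.18                      # assumed sensor separation, m

    # Synthetic source at +30 degrees azimuth: the left signal lags the right.
    true_az = np.deg2rad(30.0)
    itd_true = d_ears * np.sin(true_az) / c
    delay = int(round(itd_true * fs))  # integer-sample delay for simplicity

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(fs // 5)   # 0.2 s of white noise
    left = np.roll(noise, delay)           # left channel delayed by `delay` samples
    right = noise

    # Full cross-correlation; the lag of its maximum estimates the ITD.
    xcorr = np.correlate(left, right, mode="full")
    lag = np.argmax(xcorr) - (len(right) - 1)
    itd_est = lag / fs
    az_est = np.degrees(np.arcsin(np.clip(itd_est * c / d_ears, -1.0, 1.0)))
    print(f"true azimuth: 30.0 deg, estimated: {az_est:.1f} deg (ITD {itd_est*1e6:.0f} us)")
    ```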

  4. Sensory suppression of brain responses to self-generated sounds is observed with and without the perception of agency.

    PubMed

    Timm, Jana; Schönwiesner, Marc; Schröger, Erich; SanMiguel, Iria

    2016-07-01

    Stimuli caused by our own movements are given special treatment in the brain. Self-generated sounds evoke a smaller brain response than externally generated ones. This attenuated response may reflect a predictive mechanism that differentiates the sensory consequences of one's own actions from other sensory input. It may also relate to the feeling of being the agent of the movement and its effects, but little is known about how sensory suppression of brain responses to self-generated sounds is related to judgments of agency. To address this question, we recorded event-related potentials in response to sounds initiated by button presses. In one condition, participants perceived agency over the production of the sounds, whereas in another condition, participants experienced an illusory lack of agency caused by changes in the delay between actions and effects. We compared trials in which the timing of button press and sound was physically identical but participants' agency judgments differed. Results show reduced amplitudes of the auditory N1 component in response to self-generated sounds irrespective of agency experience, whereas P2 effects correlate with the perception of agency. Our findings suggest that suppression of the auditory N1 component to self-generated sounds does not depend on adaptation to specific action-effect time delays and does not determine agency judgments; the suppression of the P2 component, however, might relate more directly to the experience of agency. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Experimental and Analytical Determination of the Geometric Far Field for Round Jets

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James E.; Brown, Clifford E.; Khavaran, Abbas

    2005-01-01

    An investigation was conducted at the NASA Glenn Research Center using a set of three round jets operating under unheated subsonic conditions to address the question: "How close is too close?" Although sound sources are distributed at various distances throughout a jet plume downstream of the nozzle exit, at great distances from the nozzle the sound will appear to emanate from a point and the inverse-square law can be properly applied. Examination of normalized sound spectra at different distances from a jet, from experiments and from computational tools, established the required minimum distance for valid far-field measurements of the sound from subsonic round jets. Experimental data were acquired in the Aeroacoustic Propulsion Laboratory at the NASA Glenn Research Center. The WIND computer program solved the Reynolds-Averaged Navier-Stokes equations for aerodynamic computations; the MGBK jet-noise prediction computer code was used to predict the sound pressure levels. Results from both the experiments and the analytical exercises indicated that while the shortest measurement arc (with radius approximately 8 nozzle diameters) was already in the geometric far field for high-frequency sound (Strouhal number >5), low-frequency sound (Strouhal number <0.2) reached the geometric far field at a measurement radius of at least 50 nozzle diameters because of its extended source distribution.
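
    A practical check for the geometric far field is whether measured levels decay by about 6 dB per doubling of distance (the inverse-square law); the sketch below runs that check on synthetic levels, not on the NASA Glenn measurements.

    ```python
    # Minimal sketch: checking whether measured levels follow the inverse-square
    # (geometric far-field) law, i.e. SPL decaying by 20*log10(r2/r1) between
    # radii. The "measured" values below are synthetic, not NASA Glenn data.
    import numpy as np

    radii = np.array([8.0, 16.0, 32.0, 50.0, 100.0])     # in nozzle diameters
    # Synthetic levels: true far-field decay plus a near-field excess at small r.
    spl = 120.0 - 20.0 * np.log10(radii / radii[0]) + np.array([2.5, 0.8, 0.2, 0.05, 0.0])

    # Decay rate per doubling of distance between consecutive measurement arcs:
    slope = np.diff(spl) / np.diff(np.log2(radii))
    for r1, r2, s in zip(radii[:-1], radii[1:], slope):
        print(f"{r1:6.1f} -> {r2:6.1f} D: {abs(s):.2f} dB per doubling "
              f"(far field predicts 6.02)")
    ```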

  6. On the sound insulation of acoustic metasurface using a sub-structuring approach

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Lu, Zhenbo; Cheng, Li; Cui, Fangsen

    2017-08-01

    The feasibility of using an acoustic metasurface (AMS) with acoustic stop-band property to realize sound insulation with ventilation function is investigated. An efficient numerical approach is proposed to evaluate its sound insulation performance. The AMS is excited by a reverberant sound source and the standardized sound reduction index (SRI) is numerically investigated. To facilitate the modeling, the coupling between the AMS and the adjacent acoustic fields is formulated using a sub-structuring approach. A modal based formulation is applied to both the source and receiving room, enabling an efficient calculation in the frequency range from 125 Hz to 2000 Hz. The sound pressures and the velocities at the interface are matched by using a transfer function relation based on "patches". For illustration purposes, numerical examples are investigated using the proposed approach. The unit cell constituting the AMS is constructed in the shape of a thin acoustic chamber with tailored inner structures, whose stop-band property is numerically analyzed and experimentally demonstrated. The AMS is shown to provide effective sound insulation of over 30 dB in the stop-band frequencies from 600 to 1600 Hz. It is also shown that the proposed approach has the potential to be applied to a broad range of AMS studies and optimization problems.
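
    For a standard two-room measurement, the sound reduction index is R = L1 - L2 + 10 log10(S/A), with S the partition area and A the receiving-room absorption. The sketch below shows that bookkeeping with invented levels and room data; it does not use the paper's AMS results.

    ```python
    # Minimal sketch: standardized sound reduction index R = L1 - L2 + 10*log10(S/A),
    # with the receiving-room absorption A from Sabine's formula A = 0.16*V/T.
    # All numbers are invented for illustration; they are not the paper's results.
    import math

    def sound_reduction_index(L1, L2, S, V, T60):
        """R in dB from source/receiving levels, partition area S (m^2),
        receiving-room volume V (m^3) and reverberation time T60 (s)."""
        A = 0.16 * V / T60                 # equivalent absorption area, m^2
        return L1 - L2 + 10.0 * math.log10(S / A)

    if __name__ == "__main__":
        for f, L1, L2 in [(630, 95.0, 66.0), (1000, 95.0, 63.5), (1600, 95.0, 64.0)]:
            R = sound_reduction_index(L1, L2, S=1.2, V=60.0, T60=1.5)
            print(f"{f:5d} Hz: R = {R:.1f} dB")
    ```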

  7. An Inexpensive and Versatile Version of Kundt's Tube for Measuring the Speed of Sound in Air

    NASA Astrophysics Data System (ADS)

    Papacosta, Pangratios; Linscheid, Nathan

    2016-01-01

    Experiments that measure the speed of sound in air are common in high schools and colleges. In the Kundt's tube experiment, a horizontal air column is adjusted until a resonance mode is achieved for a specific frequency of sound. When this happens, the cork dust in the tube is disturbed at the displacement antinode regions. The locations of the displacement antinodes enable the measurement of the wavelength of the sound that is being used. This paper describes a design that uses a speaker instead of the traditional aluminum rod as the sound source. This allows the use of multiple sound frequencies, which yields a much more accurate value for the speed of sound in air.
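
    Once the spacing between adjacent dust piles (displacement antinodes) is measured, the speed of sound follows from v = f·λ with λ equal to twice that spacing; a minimal sketch with made-up readings is given below.

    ```python
    # Minimal sketch: speed of sound from Kundt's-tube antinode spacing.
    # Adjacent displacement antinodes are half a wavelength apart, so
    # v = f * lambda = f * 2 * spacing. Readings below are made up.
    import statistics

    def speed_of_sound(frequency_hz, antinode_spacings_m):
        """Average v = f * 2 * d over the measured antinode spacings."""
        return statistics.mean(2.0 * d * frequency_hz for d in antinode_spacings_m)

    if __name__ == "__main__":
        readings = {500.0: [0.345, 0.342, 0.347],     # spacing in metres
                    800.0: [0.215, 0.214, 0.216],
                    1200.0: [0.143, 0.144, 0.143]}
        estimates = [speed_of_sound(f, d) for f, d in readings.items()]
        print("per-frequency estimates (m/s):", [round(v, 1) for v in estimates])
        print("overall mean: %.1f m/s" % statistics.mean(estimates))
    ```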

  8. Nonlinear wave fronts and ionospheric irregularities observed by HF sounding over a powerful acoustic source

    NASA Astrophysics Data System (ADS)

    Blanc, Elisabeth; Rickel, Dwight

    1989-06-01

    Different wave fronts affected by significant nonlinearities have been observed in the ionosphere by a pulsed HF sounding experiment at a distance of 38 km from the source point of a 4800-kg ammonium nitrate and fuel oil (ANFO) explosion on the ground. These wave fronts are revealed by partial reflections of the radio sounding waves. A small-scale irregular structure was generated by a first wave front at the level of a sporadic E layer, which characterized the ionosphere at the time of the experiment. The time scale of these fluctuations is about 1 to 2 s, and their lifetime is about 2 min. Similar irregularities were also observed at the level of a second wave front in the F region. This structure also appears as diffusion on a continuous-wave sounding at horizontal distances of the order of 200 km from the source. In contrast, a third front unaffected by irregularities may originate from the lowest layers of the ionosphere or from a supersonic wave front propagating at the base of the thermosphere. The origin of these structures is discussed.

  9. Transmission and scattering of acoustic energy in turbulent flows

    NASA Astrophysics Data System (ADS)

    Gaitonde, Datta; Unnikrishnan, S.

    2017-11-01

    Sound scattering and transmission in turbulent jets are explored through a control-volume analysis of a large-eddy simulation. The fluctuating momentum flux across any control surface is first split into its rotational, turbulent component (ρu)'_H and its irrotational-isentropic acoustic component (ρu)'_A using momentum potential theory (MPT). The former has low spatio-temporal coherence, while the latter exhibits a persistent wavepacket form. The energy variable, specifically the total fluctuating enthalpy, is also split into its turbulent and acoustic modes, H'_H and H'_A respectively. The scattering of acoustic energy is then (ρu)'_H H'_A, and the transmission is (ρu)'_A H'_A. This facilitates a quantitative comparison of scattering versus transmission in the presence of acoustic energy sources, also obtained from MPT, in any turbulent scenario. The wavepacket converts stochastic sound sources into coherent sound radiation. Turbulent eddies are not only sources of sound, but also play a strong role in scattering, particularly near the lipline. The net acoustic flux from the jet is the transport of H'_A by the wavepacket, whose axisymmetric and higher azimuthal modes contribute to downstream and sideline radiation, respectively.
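
    The first step described here is a split of the fluctuating field into rotational and irrotational parts. The sketch below illustrates only that generic step, as an FFT-based Helmholtz decomposition of a periodic 2-D field; momentum potential theory itself goes further and additionally separates the acoustic from the thermal irrotational content.

    ```python
    # Minimal sketch: splitting a periodic 2-D vector field into irrotational and
    # solenoidal (rotational) parts with an FFT-based Helmholtz decomposition.
    # This is only the generic rotational/irrotational step, not full MPT.
    import numpy as np

    n, L = 128, 2 * np.pi
    x = np.linspace(0, L, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")

    # Synthetic field: a solenoidal (Taylor-Green) part plus an irrotational part.
    u = np.cos(X) * np.sin(Y) + np.sin(2 * X)
    v = -np.sin(X) * np.cos(Y) + np.sin(2 * Y)

    kx = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    K2 = KX**2 + KY**2
    K2[0, 0] = 1.0                                     # avoid division by zero

    U, V = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = KX * U + KY * V                          # proportional to divergence
    u_irr = np.real(np.fft.ifft2(KX * div_hat / K2))   # irrotational component
    v_irr = np.real(np.fft.ifft2(KY * div_hat / K2))
    u_sol, v_sol = u - u_irr, v - v_irr                # solenoidal remainder

    print("irrotational recovery error:", np.max(np.abs(u_irr - np.sin(2 * X))))
    print("solenoidal recovery error:  ", np.max(np.abs(u_sol - np.cos(X) * np.sin(Y))))
    ```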

  10. Patch nearfield acoustic holography combined with sound field separation technique applied to a non-free field

    NASA Astrophysics Data System (ADS)

    Bi, ChuanXing; Jing, WenQian; Zhang, YongBin; Xu, Liang

    2015-02-01

    Conventional nearfield acoustic holography (NAH) is usually based on the assumption of free-field conditions and requires that the measurement aperture be larger than the actual source. This paper focuses on the situation in which neither of these requirements can be met, and examines the feasibility of reconstructing the sound field radiated by a partial source from double-layer pressure measurements made in a non-free field using patch NAH combined with a sound field separation technique. The sensitivity of the reconstructed result to measurement error is also analyzed in detail. Two experiments, involving two speakers in an exterior space and one speaker inside a car cabin, are presented. The experimental results demonstrate that patch NAH based on single-layer pressure measurements cannot obtain a satisfactory result because of the influence of disturbing sources and reflections, whereas patch NAH based on double-layer pressure measurements can successfully remove these influences and reconstruct the patch sound field effectively.
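
    The core of the sound field separation technique is that pressures measured on two parallel layers determine the waves travelling in the two directions. The sketch below shows that separation for a single plane-wave component; patch NAH applies it wavenumber-by-wavenumber after a 2-D spatial transform, and the frequency, layer spacing, and amplitudes here are invented for illustration.

    ```python
    # Minimal sketch: double-layer sound field separation for a single plane-wave
    # component. With p(z) = A*exp(-i*k*z) + B*exp(+i*k*z) (source side vs.
    # disturbing side), pressures on two parallel layers give a 2x2 system for A
    # and B. Synthetic values below are for illustration only.
    import numpy as np

    c, f = 343.0, 1000.0
    k = 2 * np.pi * f / c
    z1, z2 = 0.05, 0.08                  # positions of the two hologram layers, m

    # Synthetic "true" outgoing and incoming amplitudes.
    A_true, B_true = 1.0 + 0.3j, 0.4 - 0.2j
    p1 = A_true * np.exp(-1j * k * z1) + B_true * np.exp(1j * k * z1)
    p2 = A_true * np.exp(-1j * k * z2) + B_true * np.exp(1j * k * z2)

    # Separate the two directions from the double-layer measurement.
    M = np.array([[np.exp(-1j * k * z1), np.exp(1j * k * z1)],
                  [np.exp(-1j * k * z2), np.exp(1j * k * z2)]])
    A_est, B_est = np.linalg.solve(M, np.array([p1, p2]))
    print("outgoing:", A_est, " (true", A_true, ")")
    print("incoming:", B_est, " (true", B_true, ")")
    ```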

  11. An acoustic glottal source for vocal tract physical models

    NASA Astrophysics Data System (ADS)

    Hannukainen, Antti; Kuortti, Juha; Malinen, Jarmo; Ojalammi, Antti

    2017-11-01

    A sound source is proposed for the acoustic measurement of physical models of the human vocal tract. The physical models are produced by fast prototyping, based on magnetic resonance imaging during prolonged vowel production. The sound source, accompanied by custom signal processing algorithms, is used for two kinds of measurements from physical models of the vocal tract: (i) amplitude frequency response and resonant frequency measurements, and (ii) signal reconstructions at the source output according to a target pressure waveform with measurements at the mouth position. The proposed source and the software are validated by computational acoustics experiments and measurements on a physical model of the vocal tract corresponding to the vowels [] of a male speaker.

  12. Sound transmission in ducts containing nearly choked flows

    NASA Technical Reports Server (NTRS)

    Callegari, A. J.; Myers, M. K.

    1979-01-01

    The nonlinear theory previously developed by the authors (1977, 1978) is used to obtain numerical results for sound transmission through a nearly choked throat in a variable-area duct. Parametric studies are performed for different source locations, strengths and frequencies. It is shown that the nonlinear interactions in the throat region generate superharmonics of the fundamental (source) frequency throughout the duct. The amplitudes of these superharmonics increase as the source parameters (frequency and strength) are increased toward values leading to acoustic shocks. For a downstream source, superharmonics carry about 20% of the total acoustic power as shocking conditions are approached. For the source strength levels and frequencies considered, streaming effects are negligible.

  13. Emission of Sound from Turbulence Convected by a Parallel Mean Flow in the Presence of a Confining Duct

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.; Leib, Stewart J.

    1999-01-01

    An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in a transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.

  14. Emission of Sound From Turbulence Convected by a Parallel Mean Flow in the Presence of a Confining Duct

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.; Leib, Stewart J.

    1999-01-01

    An approximate method for calculating the noise generated by a turbulent flow within a semi-infinite duct of arbitrary cross section is developed. It is based on a previously derived high-frequency solution to Lilley's equation, which describes the sound propagation in transversely-sheared mean flow. The source term is simplified by assuming the turbulence to be axisymmetric about the mean flow direction. Numerical results are presented for the special case of a ring source in a circular duct with an axisymmetric mean flow. They show that the internally generated noise is suppressed at sufficiently large upstream angles in a hard walled duct, and that acoustic liners can significantly reduce the sound radiated in both the upstream and downstream regions, depending upon the source location and Mach number of the flow.

  15. Translation of an Object Using Phase-Controlled Sound Sources in Acoustic Levitation

    NASA Astrophysics Data System (ADS)

    Matsui, Takayasu; Ohdaira, Etsuzo; Masuzawa, Nobuyoshi; Ide, Masao

    1995-05-01

    Acoustic levitation is used for positioning materials during the development of new materials in space, where there is no gravity. The technique is applicable to materials for which electromagnetic forces cannot be used. If the levitation point can be controlled freely, the range of possible applications will be extended. In this paper we report an experimental study on controlling the levitation point of an object in an acoustic levitation system. The system fabricated and tested in this study has two sound sources with vibrating plates facing each other. Translation of the object is achieved by controlling the phase of the energizing electrical signal of one of the sound sources. It was found that the levitation point can be moved smoothly in proportion to the phase difference between the vibrating plates.
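
    For two opposed sources forming a standing wave, shifting the drive phase of one source by Δφ translates the pressure nodes, and hence the trapped object, by (Δφ/2π)·(λ/2). The sketch below checks that relation numerically at a nominal 20 kHz drive frequency, which is an assumption rather than the authors' operating point.

    ```python
    # Minimal sketch: node translation in a two-source standing wave when the
    # phase of one source is shifted. For counter-propagating waves the pressure
    # nodes (where small dense particles are trapped) move by
    # dx = (dphi / 2*pi) * (lambda / 2). The 20 kHz drive is a nominal choice.
    import numpy as np

    c, f = 343.0, 20_000.0
    lam = c / f
    k = 2 * np.pi / lam

    def first_node_position(phi, x=np.linspace(0.0, 0.008, 8001)):
        """Locate the pressure node of 2*cos(k*x + phi/2) on a short interval."""
        envelope = np.abs(np.cos(k * x + phi / 2.0))
        return x[np.argmin(envelope)]

    x0 = first_node_position(0.0)
    for dphi in (np.pi / 4, np.pi / 2, np.pi):
        dx_num = first_node_position(dphi) - x0
        dx_theory = -dphi / (2 * np.pi) * (lam / 2)
        print(f"dphi = {dphi:5.3f} rad: node shift {dx_num*1e3:7.3f} mm "
              f"(theory {dx_theory*1e3:7.3f} mm)")
    ```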

  16. Source Monitoring in Alzheimer's Disease

    ERIC Educational Resources Information Center

    El Haj, Mohamad; Fasotti, Luciano; Allain, Philippe

    2012-01-01

    Source monitoring is the process of making judgments about the origin of memories. There are three categories of source monitoring: reality monitoring (discrimination between self- versus other-generated sources), external monitoring (discrimination between several external sources), and internal monitoring (discrimination between two types of…

  17. An Auditory Illusion of Proximity of the Source Induced by Sonic Crystals

    PubMed Central

    Spiousas, Ignacio; Etchemendy, Pablo E.; Vergara, Ramiro O.; Calcagno, Esteban R.; Eguia, Manuel C.

    2015-01-01

    In this work we report an illusion of proximity of a sound source created by a sonic crystal placed between the source and a listener. This effect seems, at first, paradoxical to naïve listeners, since the sonic crystal is an obstacle formed by almost densely packed cylindrical scatterers. Even though the singular acoustical properties of these periodic composite materials have been studied extensively (including band gaps, deaf bands, negative refraction, and birefringence), the possible perceptual effects remain unexplored. The illusion reported here is studied through acoustical measurements and a psychophysical experiment. The results of the acoustical measurements showed that, for a certain frequency range and region in space where the focusing phenomenon takes place, the sonic crystal induces substantial increases in binaural intensity, direct-to-reverberant energy ratio, and interaural cross-correlation values, all cues involved in the auditory perception of distance. Consistently, the results of the psychophysical experiment revealed that the presence of the sonic crystal between the sound source and the listener produces a significant reduction of the perceived relative distance to the sound source. PMID:26222281

  18. An Auditory Illusion of Proximity of the Source Induced by Sonic Crystals.

    PubMed

    Spiousas, Ignacio; Etchemendy, Pablo E; Vergara, Ramiro O; Calcagno, Esteban R; Eguia, Manuel C

    2015-01-01

    In this work we report an illusion of proximity of a sound source created by a sonic crystal placed between the source and a listener. This effect seems, at first, paradoxical to naïve listeners, since the sonic crystal is an obstacle formed by almost densely packed cylindrical scatterers. Even though the singular acoustical properties of these periodic composite materials have been studied extensively (including band gaps, deaf bands, negative refraction, and birefringence), the possible perceptual effects remain unexplored. The illusion reported here is studied through acoustical measurements and a psychophysical experiment. The results of the acoustical measurements showed that, for a certain frequency range and region in space where the focusing phenomenon takes place, the sonic crystal induces substantial increases in binaural intensity, direct-to-reverberant energy ratio, and interaural cross-correlation values, all cues involved in the auditory perception of distance. Consistently, the results of the psychophysical experiment revealed that the presence of the sonic crystal between the sound source and the listener produces a significant reduction of the perceived relative distance to the sound source.

  19. Monaural Sound Localization Revisited

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1997-01-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  20. Monaural sound localization revisited.

    PubMed

    Wightman, F L; Kistler, D J

    1997-02-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.
