Sample records for dynamic sound localization

  1. Dynamic sound localization in cats

    PubMed Central

    Ruhland, Janet L.; Jones, Amy E.

    2015-01-01

    Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772

  2. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, human listeners perceived motion in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. Listeners localized the sound sources better with landmarks than without. The remaining experiments used an acoustic manikin to fuse binaural recordings with motion data for sound source localization. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an extended Kalman filter could be used to localize sound sources recursively. The fifth experiment demonstrated a fitting method for separating multiple sound sources.
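    The amplitude-modulation motion illusion in the second experiment can be sketched as follows. The raised-cosine envelope, 1 Hz modulation rate, and 90° phase step are illustrative assumptions, not the study's exact stimulus parameters:

```python
import numpy as np

def am_envelopes(n_sources=4, mod_rate_hz=1.0, duration_s=2.0, fs=1000):
    """Raised-cosine amplitude envelopes with a consecutive 90-degree
    phase delay per source (illustrative stimulus, not the study's code)."""
    t = np.arange(int(duration_s * fs)) / fs
    phase_step = 2 * np.pi / n_sources            # 90 degrees for 4 sources
    env = np.stack([
        0.5 * (1 + np.cos(2 * np.pi * mod_rate_hz * t - k * phase_step))
        for k in range(n_sources)
    ])
    return t, env

t, env = am_envelopes()
# The envelope maximum moves from source to source, one quarter of a
# modulation cycle apart, which listeners can perceive as auditory motion.
peak_times = t[env.argmax(axis=1)]
```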

  3. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    To study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of broadband MUSIC with ordinary auditory filtering, and then propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with frequency-component selection control and detection of the ascending segment of the direct sound component. The proposed algorithm restricts processing to the frequency band of interest in the multichannel bandpass filtering stage. Detecting the direct sound component of each source suppresses room reverberation interference; this is fast to compute and avoids more complex de-reverberation processing. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple sound source localization experiments indicate that the proposed algorithm yields a smaller average absolute azimuth error and a histogram with higher angular resolution.
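    The broadband method builds on narrowband MUSIC. A minimal narrowband MUSIC pseudo-spectrum for a uniform linear array (a generic sketch, not the paper's gammatone-filtered broadband variant) can be written as:

```python
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda=0.5,
                   grid=np.linspace(-90, 90, 361)):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.
    X: (n_sensors, n_snapshots) complex snapshot matrix."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)         # ascending eigenvalues
    En = eigvecs[:, : n_sensors - n_sources]     # noise subspace
    k = np.arange(n_sensors)
    P = []
    for theta in grid:
        a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(np.radians(theta)))
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return grid, np.array(P)

# Simulated single source at +20 degrees on an 8-sensor half-wavelength array
rng = np.random.default_rng(0)
m, snapshots, theta0 = 8, 200, 20.0
a0 = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta0)))
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = np.outer(a0, s) + noise
grid, P = music_spectrum(X, n_sources=1)
theta_hat = grid[P.argmax()]                     # peak near +20 degrees
```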

  4. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies, but specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups by comparing localization accuracy between a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds, and their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias when localizing static sounds and a smaller bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that, in sighted people, the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  5. Local dynamic stability of lower extremity joints in lower limb amputees during slope walking.

    PubMed

    Chen, Jin-Ling; Gu, Dong-Yun

    2013-01-01

    Lower limb amputees have a higher fall risk during slope walking than non-amputees, yet slope walking in amputees has not been well studied. The aim of this study was to identify differences in slope walking between amputees and non-amputees. The short-term Lyapunov exponent λS was used to estimate the local dynamic stability of lower extremity joint kinematics in 7 transtibial amputees and 7 controls during uphill and downhill walking. Compared with the controls, amputees exhibited significantly lower λS in the hip (P=0.04) and ankle (P=0.01) joints of the sound limb and the hip joint (P=0.01) of the prosthetic limb during uphill walking, and significantly lower λS in the knee (P=0.02) and ankle (P=0.03) joints of the sound limb and the hip joint (P=0.03) of the prosthetic limb during downhill walking. Compared with level walking, amputees exhibited significantly lower λS in the ankle joint of the sound limb during both uphill (P=0.01) and downhill walking (P=0.01). We hypothesize that the better local dynamic stability of the amputees reflects a compensation strategy during slope walking.
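    λS is typically estimated from the divergence rate of initially nearby points in a delay-embedded state space. A minimal Rosenstein-style estimator, demonstrated on the logistic map (whose largest exponent, ln 2, is known) rather than on gait kinematics, might look like:

```python
import numpy as np

def local_divergence_exponent(x, emb_dim=2, lag=1, min_sep=10, horizon=5):
    """Largest Lyapunov exponent (per step) via Rosenstein's
    nearest-neighbour divergence method, in a minimal form."""
    n = len(x) - (emb_dim - 1) * lag
    Y = np.stack([x[i * lag : i * lag + n] for i in range(emb_dim)], axis=1)
    usable = n - horizon
    d = np.linalg.norm(Y[:usable, None, :] - Y[None, :usable, :], axis=2)
    for i in range(usable):                  # exclude temporal neighbours
        d[i, max(0, i - min_sep) : i + min_sep + 1] = np.inf
    nn = d.argmin(axis=1)
    idx = np.arange(usable)
    logdiv = [np.mean(np.log(np.linalg.norm(Y[idx + k] - Y[nn + k], axis=1)
                             + 1e-12))
              for k in range(horizon + 1)]
    # Slope of mean log-divergence vs. step count
    return np.polyfit(np.arange(horizon + 1), logdiv, 1)[0]

# Logistic map at r = 4: known largest Lyapunov exponent ln 2 ≈ 0.693
x = np.empty(1200)
x[0] = 0.3
for i in range(len(x) - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
lam = local_divergence_exponent(x)           # positive, near ln 2
```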

  6. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
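    The SH decomposition of a source directivity can be sketched with the first-order complex basis written out explicitly. The cardioid test pattern and midpoint quadrature grid are illustrative assumptions, not the paper's runtime pipeline:

```python
import numpy as np

def sh_basis_order1(theta, phi):
    """Complex spherical harmonics up to order 1, written out explicitly.
    theta: azimuth in [0, 2*pi), phi: polar angle in [0, pi]."""
    return {
        (0, 0): np.full(np.shape(theta), 0.5 / np.sqrt(np.pi), dtype=complex),
        (1, -1): 0.5 * np.sqrt(1.5 / np.pi) * np.sin(phi) * np.exp(-1j * theta),
        (1, 0): (0.5 * np.sqrt(3.0 / np.pi) * np.cos(phi)).astype(complex),
        (1, 1): -0.5 * np.sqrt(1.5 / np.pi) * np.sin(phi) * np.exp(1j * theta),
    }

# Midpoint quadrature grid over the sphere
n_theta, n_phi = 72, 36
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi
T, P = np.meshgrid(theta, phi)
w = np.sin(P) * (2.0 * np.pi / n_theta) * (np.pi / n_phi)   # area weights

# A cardioid directivity aimed at +z is exactly first order, so
# projecting onto the basis and summing back reconstructs it.
pattern = 0.5 * (1.0 + np.cos(P))
basis = sh_basis_order1(T, P)
coeffs = {nm: np.sum(w * pattern * np.conj(Y)) for nm, Y in basis.items()}
recon = sum(c * basis[nm] for nm, c in coeffs.items()).real
err = np.max(np.abs(recon - pattern))
```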

  7. L-type calcium channels refine the neural population code of sound level

    PubMed Central

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  8. Localization of virtual sound at 4 Gz.

    PubMed

    Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L

    2005-02-01

    Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.

  9. Low-momentum dynamic structure factor of a strongly interacting Fermi gas at finite temperature: A two-fluid hydrodynamic description

    NASA Astrophysics Data System (ADS)

    Hu, Hui; Zou, Peng; Liu, Xia-Ji

    2018-02-01

    We provide a description of the dynamic structure factor of a homogeneous unitary Fermi gas at low momentum and low frequency, based on dissipative two-fluid hydrodynamic theory. The viscous relaxation time is estimated and used to determine the regime where the hydrodynamic theory is applicable and to understand the nature of sound waves in the density response near the superfluid phase transition. Drawing on the best available knowledge of the shear viscosity and thermal conductivity, we calculate the various diffusion coefficients and obtain the damping widths of the first and second sounds. We find that the damping width of the first sound is greatly enhanced across the superfluid transition and that, very close to the transition, the second sound might be resolved in the density response for transferred momenta up to half the Fermi momentum. Our work is motivated by the recent measurement of the local dynamic structure factor at low momentum at Swinburne University of Technology and the ongoing experiment on sound attenuation in a homogeneous unitary Fermi gas at the Massachusetts Institute of Technology. We discuss how measuring the velocity and damping width of the sound modes in the low-momentum dynamic structure factor may lead to an improved determination of the universal superfluid density, shear viscosity, and thermal conductivity of a unitary Fermi gas.

  10. L-type calcium channels refine the neural population code of sound level.

    PubMed

    Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana

    2016-12-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.

  11. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    NASA Astrophysics Data System (ADS)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. 
These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution. Any existing modelling technique can be included into our framework of mesh decoupling and adaptive sampling to accelerate large-scale 3-D EM inversions.
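    The random, regularization-tied down-sampling could be sketched as follows. The linear schedule linking subset size to the regularization parameter beta is a hypothetical stand-in for the paper's adaptive algorithm:

```python
import numpy as np

def select_soundings(n_total, beta, beta_max, n_min=50, rng=None):
    """Pick a random subset of soundings for one inversion iteration.
    Subset size grows as the regularization parameter beta is cooled
    (hypothetical linear schedule standing in for the adaptive rule)."""
    frac_min = n_min / n_total
    frac = frac_min + (1.0 - frac_min) * (1.0 - beta / beta_max)
    n_sub = max(n_min, int(frac * n_total))
    rng = rng or np.random.default_rng()
    # Sample without replacement; a fresh subset is drawn every iteration
    return rng.choice(n_total, size=n_sub, replace=False)

# Early iteration (strong regularization): few soundings suffice
early = select_soundings(10000, beta=1e4, beta_max=1e4)
# Late iteration (weak regularization): most soundings are used
late = select_soundings(10000, beta=1e2, beta_max=1e4)
```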

  12. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. 
A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  13. Energy Flux in the Cochlea: Evidence Against Power Amplification of the Traveling Wave.

    PubMed

    van der Heijden, Marcel; Versteegh, Corstiaen P C

    2015-10-01

    Traveling waves in the inner ear exhibit an amplitude peak that shifts with frequency. The peaking is commonly believed to rely on motile processes that amplify the wave by inserting energy. We recorded the vibrations at adjacent positions on the basilar membrane in sensitive gerbil cochleae and tested the putative power amplification in two ways. First, we determined the energy flux of the traveling wave at its peak and compared it to the acoustic power entering the ear, thereby obtaining the net cochlear power gain. For soft sounds, the energy flux at the peak was 1 ± 0.6 dB less than the middle ear input power. For more intense sounds, increasingly smaller fractions of the acoustic power actually reached the peak region. Thus, we found no net power amplification of soft sounds and a strong net attenuation of intense sounds. Second, we analyzed local wave propagation on the basilar membrane. We found that the waves slowed down abruptly when approaching their peak, causing an energy densification that quantitatively matched the amplitude peaking, similar to the growth of sea waves approaching the beach. Thus, we found no local power amplification of soft sounds and strong local attenuation of intense sounds. The most parsimonious interpretation of these findings is that cochlear sensitivity is not realized by amplifying acoustic energy, but by spatially focusing it, and that dynamic compression is realized by adjusting the amount of dissipation to sound intensity.

  14. The unstaggered extension to GFDL's FV3 dynamical core on the cubed-sphere

    NASA Astrophysics Data System (ADS)

    Chen, X.; Lin, S. J.; Harris, L.

    2017-12-01

    Finite-volume schemes have become popular for atmospheric transport since they provide intrinsic mass conservation for constituent species. Many CFD codes use unstaggered discretizations for finite-volume methods with an approximate Riemann solver, but this approach is inefficient for geophysical flows because of the complexity of the Riemann solver. We introduce a Low Mach number Approximate Riemann Solver (LMARS), simplified using assumptions appropriate for atmospheric flows: wind speeds much slower than the sound speed, weak discontinuities, and a locally uniform sound speed. LMARS makes a Riemann-solver-based dynamical core comparable in computational efficiency to many current dynamical cores. We will present a 3D finite-volume dynamical core using LMARS in cubed-sphere geometry with a vertically Lagrangian discretization. Results from standard idealized test cases will be discussed.
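    Low-Mach approximate Riemann solvers of this family typically linearize the acoustic jump conditions about a locally uniform sound speed. A sketch of such an interface state is below; the published LMARS formulas may differ in detail:

```python
def low_mach_interface(p_l, p_r, u_l, u_r, rho, c):
    """Acoustically linearized interface state about a locally uniform
    sound speed c (a generic sketch, not the verified LMARS formulas)."""
    p_star = 0.5 * (p_l + p_r) - 0.5 * rho * c * (u_r - u_l)
    u_star = 0.5 * (u_l + u_r) - (p_r - p_l) / (2.0 * rho * c)
    return p_star, u_star

# A uniform state is reproduced exactly at the interface
p_s, u_s = low_mach_interface(1.0e5, 1.0e5, 5.0, 5.0, 1.2, 340.0)
# A pressure jump drives the interface velocity toward the low-pressure side
_, u_jump = low_mach_interface(1.0e5, 1.1e5, 0.0, 0.0, 1.2, 340.0)
```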

  15. Sound velocity in five-component air mixtures of various densities

    NASA Astrophysics Data System (ADS)

    Bogdanova, N. V.; Rydalevskaya, M. A.

    2018-05-01

    The local equilibrium flows of five-component air mixtures are considered. Gas dynamic equations are derived from the kinetic equations for aggregate values of collision invariants. It is shown that the traditional formula for the sound velocity remains valid in the air mixtures considered, with chemical reactions and internal degrees of freedom taken into account. This formula connects the square of the sound velocity with pressure and density; however, the adiabatic coefficient is not constant under these conditions. An analytical expression for this coefficient is obtained, and examples of its calculation in air mixtures of various densities are presented.
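    The traditional formula referred to is c² = γp/ρ. A one-line check for standard air, treating γ as a constant 1.4 (which is exactly the assumption the paper relaxes):

```python
import math

def sound_speed(gamma, pressure_pa, density_kg_m3):
    """Classical adiabatic sound speed: c**2 = gamma * p / rho."""
    return math.sqrt(gamma * pressure_pa / density_kg_m3)

# Dry air near 20 degrees C at 1 atm, with gamma held at 1.4; for reacting
# multi-component mixtures the adiabatic coefficient itself varies.
c_air = sound_speed(1.4, 101325.0, 1.204)   # about 343 m/s
```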

  16. Scaling of membrane-type locally resonant acoustic metamaterial arrays.

    PubMed

    Naify, Christina J; Chang, Chia-Ming; McKnight, Geoffrey; Nutt, Steven R

    2012-10-01

    Metamaterials have emerged as promising solutions for manipulating sound waves in a variety of applications. Locally resonant acoustic materials (LRAM) exceed acoustic mass law predictions of transmission loss (TL) by up to 500% at peak TL frequencies with minimal added mass, making them appealing for weight-critical applications such as aerospace structures. In this study, potential issues associated with scale-up of the structure are addressed. TL of single-celled and multi-celled LRAM was measured using an impedance tube setup with systematic variation of geometric parameters to understand the effect of each parameter on the acoustic response. Finite element analysis was performed to predict TL as a function of frequency for structures of varying complexity, including stacked structures and multi-celled arrays. The dynamic response of the array structures under discrete-frequency excitation was investigated using laser vibrometry to verify negative dynamic mass behavior.
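    The acoustic mass law baseline against which LRAM gains are quoted can be computed directly. This is the textbook normal-incidence form for a limp panel, not the paper's measurement setup:

```python
import math

def mass_law_tl(freq_hz, surface_density_kg_m2, rho_c=415.0):
    """Normal-incidence mass-law transmission loss in dB for a limp panel.
    rho_c is the characteristic impedance of air (~415 rayl)."""
    x = math.pi * freq_hz * surface_density_kg_m2 / rho_c
    return 10.0 * math.log10(1.0 + x * x)

# Doubling frequency (or surface density) buys about 6 dB of TL
tl_1k = mass_law_tl(1000.0, 1.0)
tl_2k = mass_law_tl(2000.0, 1.0)
```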

  17. Characteristics of stereo reproduction with parametric loudspeakers

    NASA Astrophysics Data System (ADS)

    Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa

    2012-05-01

    A parametric loudspeaker exploits the nonlinearity of the medium and is known as a super-directive loudspeaker; it is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural public-address systems in museums, stations, streets, and the like. In this paper, we discuss the characteristics of stereo reproduction with two parametric loudspeakers by comparison with two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate whether correct sound localization is possible over a wide listening area. The binaural cue was either the interaural level difference (ILD) or the interaural time delay (ITD). Each parametric loudspeaker was an equilateral hexagon with inner and outer diameters of 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz, and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. The results showed that listeners at the three typical positions perceived correct sound localization of all signals with the parametric loudspeakers, much as they did with ordinary dynamic loudspeakers, except for sinusoidal waves with ITD. We conclude that parametric loudspeakers can eliminate the conflict between the binaural ILD and ITD cues that arises in stereo reproduction with ordinary dynamic loudspeakers, because their super directivity suppresses the crosstalk components.

  18. Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
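    The pure-delay ITD used in such synthesis is often approximated by Woodworth's spherical-head formula; the head radius below is an assumed average value, not a parameter from this study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head approximation for the interaural time
    difference: ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

itd_side = woodworth_itd(90.0)    # roughly 0.66 ms for an average head
itd_front = woodworth_itd(0.0)    # zero delay for a source straight ahead
```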

  19. Temporal processing and adaptation in the songbird auditory forebrain.

    PubMed

    Nagel, Katherine I; Doupe, Allison J

    2006-09-21

    Songbird auditory neurons must encode the dynamics of natural sounds at many volumes. We investigated how neural coding depends on the distribution of stimulus intensities. Using reverse-correlation, we modeled responses to amplitude-modulated sounds as the output of a linear filter and a nonlinear gain function, then asked how filters and nonlinearities depend on the stimulus mean and variance. Filter shape depended strongly on mean amplitude (volume): at low mean, most neurons integrated sound over many milliseconds, while at high mean, neurons responded more to local changes in amplitude. Increasing the variance (contrast) of amplitude modulations had less effect on filter shape but decreased the gain of firing in most cells. Both filter and gain changes occurred rapidly after a change in statistics, suggesting that they represent nonlinearities in processing. These changes may permit neurons to signal effectively over a wider dynamic range and are reminiscent of findings in other sensory systems.
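    The reverse-correlation fit of a linear filter followed by a static nonlinearity can be sketched with a spike-triggered average on synthetic data; the biphasic kernel, rectifying nonlinearity, and Poisson spiking below are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth linear filter: a biphasic 40-tap kernel
n_taps = 40
taps = np.arange(n_taps)
true_filter = np.exp(-taps / 8.0) * np.sin(2.0 * np.pi * taps / 20.0)

# White-noise amplitude stimulus -> linear drive -> rectifying nonlinearity
stim = rng.standard_normal(100_000)
drive = np.convolve(stim, true_filter)[: len(stim)]
spikes = rng.poisson(0.1 * np.maximum(drive, 0.0))

# Reverse correlation: with a Gaussian white-noise stimulus, the
# spike-triggered average recovers the filter shape up to scale.
sta = np.array([np.dot(spikes[lag:], stim[: len(stim) - lag])
                for lag in range(n_taps)]) / spikes.sum()
similarity = np.corrcoef(sta, true_filter)[0, 1]
```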

  20. Hi-C First Results

    NASA Technical Reports Server (NTRS)

    Cirtain, Jonathan

    2013-01-01

    Hi-C obtained the highest spatial and temporal resolution observations ever taken of the solar corona, revealing dynamics and structure at the limit of its temporal and spatial resolution. Hi-C observed ubiquitous fine-scale flows consistent with the local sound speed.

  1. Factors regulating early life history dispersal of Atlantic cod (Gadus morhua) from coastal Newfoundland.

    PubMed

    Stanley, Ryan R E; deYoung, Brad; Snelgrove, Paul V R; Gregory, Robert S

    2013-01-01

    To understand the coastal dispersal dynamics of Atlantic cod (Gadus morhua), we examined spatiotemporal egg and larval abundance patterns in coastal Newfoundland. In recent decades, Smith Sound, Trinity Bay has supported the largest known overwintering spawning aggregation of Atlantic cod in the region. We estimated spawning and dispersal characteristics for the Smith Sound-Trinity Bay system by fitting ichthyoplankton abundance data to environmentally driven, simplified box models. Results show protracted spawning, with sharply increased egg production in early July, and limited dispersal from the Sound. The model for the entire spawning season indicates that egg export from Smith Sound is 13% day⁻¹ with a net mortality of 27% day⁻¹. Eggs and larvae are consistently found in western Trinity Bay with little advection from the system. These patterns mirror particle tracking models that suggest residence times of 10-20 days, and circulation models indicating local gyres in Trinity Bay that act in concert with upwelling dynamics to retain eggs and larvae. Our results are among the first quantitative dispersal estimates for Smith Sound, linking this spawning stock to adjacent coastal waters. These results illustrate the biophysical interplay regulating dispersal and connectivity originating from inshore spawning in the coastal northwest Atlantic.
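    The box-model bookkeeping implied by the export and mortality rates can be sketched as follows; this hypothetical discrete-daily form is a stand-in for the authors' fitted continuous model:

```python
import numpy as np

def egg_budget(n0, export_frac=0.13, mortality_frac=0.27, days=12):
    """Daily box-model bookkeeping (hypothetical discrete form): each day
    a fixed fraction of eggs is exported and another fraction dies."""
    retained = 1.0 - export_frac - mortality_frac
    return n0 * retained ** np.arange(days + 1)

n = egg_budget(1.0e6)
# With 40% of the cohort lost per day (13% exported + 27% dying), the
# abundance in the box falls below 1% of its initial value in about
# 10 days, consistent with the 10-20 day residence-time estimates.
```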

  2. Acoustic-tactile rendering of visual information

    NASA Astrophysics Data System (ADS)

    Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.

    2012-03-01

    In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.

  3. The Relative Contribution of Interaural Time and Magnitude Cues to Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    This paper presents preliminary data from a study examining the relative contribution of interaural time differences (ITDs) and interaural level differences (ILDs) to the localization of virtual sound sources both with and without head motion. The listeners' task was to estimate the apparent direction and distance of virtual sources (broadband noise) presented over headphones. Stimuli were synthesized from minimum phase representations of nonindividualized directional transfer functions; binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; the position of the listener's head was tracked and the stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. ILDs and ITDs were either correctly or incorrectly correlated with head motion: (1) both ILDs and ITDs correctly correlated, (2) ILDs correct, ITD fixed at 0 deg azimuth and 0 deg elevation, (3) ITDs correct, ILDs fixed at 0 deg, 0 deg. Similar conditions were run for static conditions except that none of the cues changed with head motion. The data indicated that, compared to static conditions, head movements helped listeners to resolve confusions primarily when ILDs were correctly correlated, although a smaller effect was also seen for correct ITDs. Together with the results for static conditions, the data suggest that localization tends to be dominated by the cue that is most reliable or consistent, when reliability is defined by consistency over time as well as across frequency bands.
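    The stimulus-synthesis pipeline described above (minimum-phase magnitude filtering per ear plus a pure-delay ITD) can be sketched as follows. The cepstral folding step is the standard minimum-phase reconstruction; the circular delay and the function names are simplifications for illustration, not the Convolvotron implementation.

```python
import numpy as np

def minimum_phase(magnitude):
    """Minimum-phase transfer function with the given two-sided FFT magnitude.

    Folds the anticausal part of the cepstrum of log|H| onto the causal
    part, then exponentiates (real-cepstrum method)."""
    n = len(magnitude)
    cep = np.fft.ifft(np.log(np.maximum(magnitude, 1e-12))).real
    fold = np.zeros(n)
    fold[0] = 1.0
    fold[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        fold[n // 2] = 1.0
    return np.exp(np.fft.fft(cep * fold))

def render_binaural(sig, mag_left, mag_right, itd_samples):
    """Filter with per-ear minimum-phase magnitudes, then add the ITD as a
    pure (circular, for simplicity) sample delay on the lagging ear."""
    n = len(mag_left)
    S = np.fft.fft(sig, n)
    left = np.fft.ifft(S * minimum_phase(mag_left)).real
    right = np.fft.ifft(S * minimum_phase(mag_right)).real
    if itd_samples > 0:
        right = np.roll(right, itd_samples)
    elif itd_samples < 0:
        left = np.roll(left, -itd_samples)
    return left, right
```

Because the magnitude spectra and the delay are separate parameters here, the ILD and ITD cues can be updated independently with head position, which is exactly what conditions (2) and (3) of the experiment require.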

  4. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology.

    PubMed

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2013-01-02

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.

  5. 75 FR 34634 - Special Local Regulation; Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-18

    ...-AA08 Special Local Regulation; Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain... Guard is establishing a permanent Special Local Regulation on the navigable waters of Long Island Sound... Sound event. This special local regulation is necessary to provide for the safety of life by protecting...

  6. Factors Regulating Early Life History Dispersal of Atlantic Cod (Gadus morhua) from Coastal Newfoundland

    PubMed Central

    Stanley, Ryan R. E.; deYoung, Brad; Snelgrove, Paul V. R.; Gregory, Robert S.

    2013-01-01

    To understand coastal dispersal dynamics of Atlantic cod (Gadus morhua), we examined spatiotemporal egg and larval abundance patterns in coastal Newfoundland. In recent decades, Smith Sound, Trinity Bay has supported the largest known overwintering spawning aggregation of Atlantic cod in the region. We estimated spawning and dispersal characteristics for the Smith Sound-Trinity Bay system by fitting ichthyoplankton abundance data to environmentally-driven, simplified box models. Results show protracted spawning, with sharply increased egg production in early July, and limited dispersal from the Sound. The model for the entire spawning season indicates egg export from Smith Sound is 13%•day−1 with a net mortality of 27%•day−1. Eggs and larvae are consistently found in western Trinity Bay with little advection from the system. These patterns mirror particle tracking models that suggest residence times of 10–20 days, and circulation models indicating local gyres in Trinity Bay that act in concert with upwelling dynamics to retain eggs and larvae. Our results are among the first quantitative dispersal estimates from Smith Sound, linking this spawning stock to the adjacent coastal waters. These results illustrate the biophysical interplay regulating dispersal and connectivity originating from inshore spawning of coastal northwest Atlantic. PMID:24058707
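    A simplified box model of this kind reduces to one ordinary differential equation per box: abundance changes by production minus fractional export and mortality. A minimal forward-Euler sketch using the daily rates reported in the abstract (the production series and time step are placeholders):

```python
import numpy as np

def box_model(production, k_export=0.13, k_mortality=0.27, dt=1.0):
    """Egg abundance in a single well-mixed box, stepped forward in time.

    production: daily egg production entering the box; losses are
    fractional export and mortality per day (rates from the abstract)."""
    n = np.zeros(len(production))
    for t in range(1, len(production)):
        loss = (k_export + k_mortality) * n[t - 1]
        n[t] = n[t - 1] + dt * (production[t - 1] - loss)
    return n
```

With constant production P, abundance relaxes toward the steady state P / (k_export + k_mortality); fitting observed abundance time series to such curves is how export and mortality rates can be estimated jointly.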

  7. Contaminant distribution and accumulation in the surface sediments of Long Island Sound

    USGS Publications Warehouse

    Mecray, E.L.; Buchholtz ten Brink, Marilyn R.

    2000-01-01

    The distribution of contaminants in surface sediments has been measured and mapped as part of a U.S. Geological Survey study of the sediment quality and dynamics of Long Island Sound. Surface samples from 219 stations were analyzed for trace (Ag, Ba, Cd, Cr, Cu, Hg, Ni, Pb, V, Zn and Zr) and major (Al, Fe, Mn, Ca, and Ti) elements, grain size, and Clostridium perfringens spores. Principal Components Analysis was used to identify metals that may covary as a function of common sources or geochemistry. The metallic elements generally have higher concentrations in fine-grained deposits, and their transport and depositional patterns mimic those of small particles. Fine-grained particles are remobilized and transported from areas of high bottom energy and deposited in less dynamic regions of the Sound. Metal concentrations in bottom sediments are high in the western part of the Sound and low in the bottom-scoured regions of the eastern Sound. The sediment chemistry was compared to model results (Signell et al., 1998) and maps of sedimentary environments (Knebel et al., 1999) to better understand the processes responsible for contaminant distribution across the Sound. Metal concentrations were normalized to grain-size and the resulting ratios are uniform in the depositional basins of the Sound and show residual signals in the eastern end as well as in some local areas. The preferential transport of fine-grained material from regions of high bottom stress is probably the dominant factor controlling the metal concentrations in different regions of Long Island Sound. This physical redistribution has implications for environmental management in the region.

  8. Interaural Level Difference Dependent Gain Control and Synaptic Scaling Underlying Binaural Computation

    PubMed Central

    Xiong, Xiaorui R.; Liang, Feixue; Li, Haifu; Mesik, Lukas; Zhang, Ke K.; Polley, Daniel B.; Tao, Huizhong W.; Xiao, Zhongju; Zhang, Li I.

    2013-01-01

    Binaural integration in the central nucleus of inferior colliculus (ICC) plays a critical role in sound localization. However, its arithmetic nature and underlying synaptic mechanisms remain unclear. Here, we showed in mouse ICC neurons that the contralateral dominance is created by a “push-pull”-like mechanism, with contralaterally dominant excitation and more bilaterally balanced inhibition. Importantly, binaural spiking response is generated apparently from an ipsilaterally-mediated scaling of contralateral response, leaving frequency tuning unchanged. This scaling effect is attributed to a divisive attenuation of contralaterally-evoked synaptic excitation onto ICC neurons with their inhibition largely unaffected. Thus, a gain control mediates the linear transformation from monaural to binaural spike responses. The gain value is modulated by interaural level difference (ILD) primarily through scaling excitation to different levels. The ILD-dependent synaptic scaling and gain adjustment allow ICC neurons to dynamically encode interaural sound localization cues while maintaining an invariant representation of other independent sound attributes. PMID:23972599
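    The divisive, tuning-preserving gain control described above can be caricatured in a few lines. The logistic gain function and its slope are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def binaural_response(contra_rates, ild_db, k=0.2):
    """Scale a contralateral frequency-tuning curve by an ILD-dependent gain.

    Positive ILD (contralateral ear louder) drives the gain toward 1;
    negative ILD attenuates divisively. Because the gain is one scalar
    applied to the whole curve, the tuning shape and peak are unchanged."""
    gain = 1.0 / (1.0 + np.exp(-k * ild_db))   # logistic gain in (0, 1)
    return gain * np.asarray(contra_rates)
```

This captures the paper's central point: ILD modulates the overall response level (the gain), while the best frequency, an independent sound attribute, is invariant.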

  9. Object localization using a biosonar beam: how opening your mouth improves localization.

    PubMed

    Arditi, G; Weiss, A J; Yovel, Y

    2015-08-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.

  10. Object localization using a biosonar beam: how opening your mouth improves localization

    PubMed Central

    Arditi, G.; Weiss, A. J.; Yovel, Y.

    2015-01-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions. PMID:26361552
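    A rough numerical illustration of why a larger emitter sharpens off-axis spectral cues: the far-field pattern of a uniform 1-D aperture narrows as the aperture (a crude stand-in for mouth gape; a bat's beam is better modeled as a circular piston) or the frequency grows.

```python
import numpy as np

def beam(theta, freq, aperture, c=343.0):
    """Far-field directivity of a uniform 1-D aperture of the given length.

    The argument of the normalized sinc is aperture * sin(theta) / wavelength,
    so a larger aperture (wider gape) or a higher frequency narrows the beam,
    making intensity fall off faster away from the axis."""
    return np.sinc(aperture * freq * np.sin(theta) / c)
```

Steeper off-axis fall-off means a small angular error produces a large, measurable spectral change in the echo, which is the information the paper proposes bats exploit for localization.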

  11. Timing matters: sonar call groups facilitate target localization in bats.

    PubMed

    Kothari, Ninad B; Wohlgemuth, Melville J; Hulgard, Katrine; Surlykke, Annemarie; Moss, Cynthia F

    2014-01-01

    To successfully negotiate a cluttered environment, an echolocating bat must control the timing of motor behaviors in response to dynamic sensory information. Here we detail the big brown bat's adaptive temporal control over sonar call production for tracking prey, moving predictably or unpredictably, under different experimental conditions. We studied the adaptive control of vocal-motor behaviors in free-flying big brown bats, Eptesicus fuscus, as they captured tethered and free-flying insects, in open and cluttered environments. We also studied adaptive sonar behavior in bats trained to track moving targets from a resting position. In each of these experiments, bats adjusted the features of their calls to separate target and clutter. Under many task conditions, flying bats produced prominent sonar sound groups identified as clusters of echolocation pulses with relatively stable intervals, surrounded by longer pulse intervals. In experiments where bats tracked approaching targets from a resting position, bats also produced sonar sound groups, and the prevalence of these sonar sound groups increased when motion of the target was unpredictable. We hypothesize that sonar sound groups produced during flight, and the sonar call doublets produced by a bat tracking a target from a resting position, help the animal resolve dynamic target location and represent the echo scene in greater detail. Collectively, our data reveal adaptive temporal control over sonar call production that allows the bat to negotiate a complex and dynamic environment.

  12. Timing matters: sonar call groups facilitate target localization in bats

    PubMed Central

    Kothari, Ninad B.; Wohlgemuth, Melville J.; Hulgard, Katrine; Surlykke, Annemarie; Moss, Cynthia F.

    2014-01-01

    To successfully negotiate a cluttered environment, an echolocating bat must control the timing of motor behaviors in response to dynamic sensory information. Here we detail the big brown bat's adaptive temporal control over sonar call production for tracking prey, moving predictably or unpredictably, under different experimental conditions. We studied the adaptive control of vocal-motor behaviors in free-flying big brown bats, Eptesicus fuscus, as they captured tethered and free-flying insects, in open and cluttered environments. We also studied adaptive sonar behavior in bats trained to track moving targets from a resting position. In each of these experiments, bats adjusted the features of their calls to separate target and clutter. Under many task conditions, flying bats produced prominent sonar sound groups identified as clusters of echolocation pulses with relatively stable intervals, surrounded by longer pulse intervals. In experiments where bats tracked approaching targets from a resting position, bats also produced sonar sound groups, and the prevalence of these sonar sound groups increased when motion of the target was unpredictable. We hypothesize that sonar sound groups produced during flight, and the sonar call doublets produced by a bat tracking a target from a resting position, help the animal resolve dynamic target location and represent the echo scene in greater detail. Collectively, our data reveal adaptive temporal control over sonar call production that allows the bat to negotiate a complex and dynamic environment. PMID:24860509

  13. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization and has wide applicability with regard to source type, acoustic environment, and time waveform.

  14. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    PubMed

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in the interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) eliminated much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
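    The contrast between independent and linked compressors can be sketched with a static WDRC gain rule. The threshold and compression ratio below are arbitrary illustrative values, and real aids apply this gain with attack/release dynamics rather than statically.

```python
import numpy as np

def wdrc_gain(level_db, threshold=45.0, ratio=3.0):
    """Static WDRC gain (dB): unity below the compression threshold,
    then a 3:1 input-output slope above it."""
    excess = np.maximum(level_db - threshold, 0.0)
    return -excess * (1.0 - 1.0 / ratio)

def apply_compression(left_db, right_db, linked=True):
    """Per-ear output levels; linked mode shares one gain across both ears."""
    gl, gr = wdrc_gain(left_db), wdrc_gain(right_db)
    if linked:
        gl = gr = np.minimum(gl, gr)   # one common gain preserves the ILD
    return left_db + gl, right_db + gr
```

With independent gains, the louder (nearer) ear is compressed more than the quieter ear, shrinking the ILD; sharing a single gain leaves the interaural difference untouched, which is the manipulation the study found restored performance.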

  15. Cortical processing of dynamic sound envelope transitions.

    PubMed

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.

  16. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography.

    PubMed

    Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao

    2017-10-01

    Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex.

  17. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology

    PubMed Central

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2012-01-01

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance. PMID:26557339

  18. 75 FR 16700 - Special Local Regulation, Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ...-AA08 Special Local Regulation, Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain... permanent Special Local Regulation on the navigable waters of Long Island Sound between Port Jefferson, NY and Captain's Cove Seaport, Bridgeport, CT due to the annual Swim Across the Sound event. The proposed...

  19. Dynamic Temporal Processing of Nonspeech Acoustic Information by Children with Specific Language Impairment.

    ERIC Educational Resources Information Center

    Visto, Jane C.; And Others

    1996-01-01

    Ten children (ages 12-16) with specific language impairments (SLI) and controls matched for chronological or language age were tested with measures of complex sound localization involving the precedence effect phenomenon. SLI children exhibited tracking skills similar to language-age matched controls, indicating impairment in their ability to use…

  20. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  1. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    Capturing distant-talking speech with high quality is very important for a hands-free speech interface, and a microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be identified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR, and MEXT of Japan.]
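    The CSP coefficient at the heart of the DOA stage is what the array-processing literature also calls GCC-PHAT. A minimal single-microphone-pair time-delay estimator might look like the sketch below (the full method sums CSP coefficients over multiple pairs; this is just one pair).

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """Estimate the lag (s) by which the first signal is delayed relative
    to the second, via the cross-power spectrum phase (CSP / GCC-PHAT).

    Normalizing the cross-spectrum to unit magnitude keeps only phase,
    which sharpens the correlation peak under reverberation."""
    n = len(x) + len(y)                        # zero-pad for linear correlation
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    R = X * np.conj(Y)
    R /= np.maximum(np.abs(R), 1e-12)          # phase transform
    cc = np.fft.irfft(R, n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs
```

Given the estimated delay and the microphone spacing, the arrival direction follows from simple geometry; repeating this over pairs and summing the CSP coefficients is what lets the method resolve multiple simultaneous sources.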

  2. Material sound source localization through headphones

    NASA Astrophysics Data System (ADS)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with a view to future use of the acoustic sounds with the best localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a sampling frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound heard through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly change.

  3. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field. PMID:24463431
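    A microphone-array sensor like this typically estimates sound field directionality with delay-and-sum beamforming. Below is a far-field, frequency-domain sketch under assumed conditions (planar array, arbitrary geometry and sizes); the SoundCompass's actual FPGA pipeline is not described at this level of detail in the abstract.

```python
import numpy as np

def steered_power(signals, mic_xy, fs, c=343.0, n_angles=72):
    """Delay-and-sum azimuth power map for a planar array, far-field model.

    signals: (n_mics, n_samples) time signals; mic_xy: (n_mics, 2) mic
    positions in metres. Steering delays are applied as frequency-domain
    phase shifts, so fractional-sample delays are handled exactly."""
    n_mics, n_samp = signals.shape
    freqs = np.fft.rfftfreq(n_samp, 1.0 / fs)
    S = np.fft.rfft(signals, axis=1)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    power = np.empty(n_angles)
    for i, theta in enumerate(angles):
        u = np.array([np.cos(theta), np.sin(theta)])   # unit look direction
        tau = mic_xy @ u / c                           # per-mic arrival leads
        phases = np.exp(-2j * np.pi * freqs[None, :] * tau[:, None])
        beam = (S * phases).sum(axis=0)                # align and sum
        power[i] = np.sum(np.abs(beam) ** 2)
    return angles, power
```

The angle at which the steered power peaks is the estimated source bearing; sharing such per-node power maps across a wireless sensor network is what enables triangulation of noise sources.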

  4. Effects of head movement and proprioceptive feedback in training of sound localization

    PubMed Central

    Honda, Akio; Shibata, Hiroshi; Hidaka, Souta; Gyoba, Jiro; Iwaya, Yukio; Suzuki, Yôiti

    2013-01-01

    We investigated the effects of listeners' head movements and proprioceptive feedback during sound localization practice on the subsequent accuracy of sound localization performance. The effects were examined under both restricted and unrestricted head movement conditions in the practice stage. In both cases, the participants were divided into two groups: a feedback group performed a sound localization drill with accurate proprioceptive feedback; a control group conducted it without the feedback. Results showed that (1) sound localization practice, while allowing for free head movement, led to improvement in sound localization performance and decreased actual angular errors along the horizontal plane, and that (2) proprioceptive feedback during practice decreased actual angular errors in the vertical plane. Our findings suggest that unrestricted head movement and proprioceptive feedback during sound localization training enhance perceptual motor learning by enabling listeners to use variable auditory cues and proprioceptive information. PMID:24349686

  5. Spiking Models for Level-Invariant Encoding

    PubMed Central

    Brette, Romain

    2012-01-01

    Levels of ecological sounds vary over several orders of magnitude, but the firing rate and membrane potential of a neuron are much more limited in range. In binaural neurons of the barn owl, tuning to interaural delays is independent of level differences. Yet a monaural neuron with a fixed threshold should fire earlier in response to louder sounds, which would disrupt the tuning of these neurons. How could spike timing be independent of input level? Here I derive theoretical conditions for a spiking model to be insensitive to input level. The key property is a dynamic change in spike threshold. I then show how level invariance can be physiologically implemented, with specific ionic channel properties. It appears that these ingredients are indeed present in monaural neurons of the sound localization pathway of birds and mammals. PMID:22291634
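
    The key property named in this abstract, a dynamic spike threshold, can be illustrated with a toy simulation. This is not Brette's model; it is a minimal sketch in which both the membrane potential and the threshold are driven linearly by the input, so the level scales out of their crossing time. All time constants and gains are assumed.

```python
def adaptive_latency(level, dt=1e-5, T=0.05, tau_v=0.01, tau_th=0.002, a=0.8):
    """Step index of first threshold crossing when the threshold is itself
    driven (linearly) by the stimulus, mimicking a dynamic threshold."""
    v = th = 0.0
    for i in range(int(T / dt)):
        v += dt * (level - v) / tau_v           # passive membrane
        th += dt * (a * level - th) / tau_th    # dynamic, input-driven threshold
        if v > th:
            return i
    return None

def fixed_latency(level, thresh=0.5, dt=1e-5, T=0.05, tau_v=0.01):
    """Step index of first crossing of a fixed threshold, for comparison."""
    v = 0.0
    for i in range(int(T / dt)):
        v += dt * (level - v) / tau_v
        if v > thresh:
            return i
    return None

# Fixed threshold: louder inputs cross earlier, disrupting timing.
# Dynamic threshold: v and th both scale linearly with level, so the
# crossing time is essentially level-invariant.
lat_soft, lat_loud = adaptive_latency(1.0), adaptive_latency(100.0)
fix_soft, fix_loud = fixed_latency(1.0), fixed_latency(10.0)
```

    The design point is the linearity: because every term in both differential equations is proportional to the input level, the condition v(t) > th(t) is independent of that level.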

  6. Behavioral and modeling studies of sound localization in cats: effects of stimulus level and duration

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.

    2013-01-01

    Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278
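
    The "multiple looks" idea invoked in this abstract can be illustrated with a toy simulation: averaging more independent noisy looks at a spectrum stabilizes the estimate of an elevation-dependent spectral notch. All numbers (notch frequency, depth, noise level) are assumed for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(4e3, 16e3, 241)        # hypothetical spectral axis (Hz)
notch = 9e3                                # assumed elevation-dependent notch
true_spec = -10 * np.exp(-((freqs - notch) / 800) ** 2)   # notch in dB

def notch_error(n_looks, noise_db=6.0):
    """Error (Hz) in locating the notch after averaging n_looks noisy looks."""
    looks = true_spec + noise_db * rng.standard_normal((n_looks, freqs.size))
    avg = looks.mean(axis=0)               # the "multiple looks" average
    return abs(freqs[np.argmin(avg)] - notch)

# Short sounds allow few looks; long sounds allow many.
short = np.mean([notch_error(2) for _ in range(200)])
long_ = np.mean([notch_error(20) for _ in range(200)])
```

    Averaging twenty looks instead of two reliably pulls the estimated notch back to its true frequency, which is the pattern the duration effect in elevation would predict.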

  7. A non-local model of fractional heat conduction in rigid bodies

    NASA Astrophysics Data System (ADS)

    Borino, G.; di Paola, M.; Zingales, M.

    2011-03-01

    In recent years several applications of fractional differential calculus have been proposed in physics and chemistry as well as in engineering fields. Fractional-order integrals and derivatives extend the well-known definitions of integer-order primitives and derivatives of the ordinary differential calculus to real-order operators. Engineering applications of fractional operators range from viscoelastic models and stochastic dynamics to thermoelasticity. In this latter field one of the main attractions of fractional operators is their capability to interpolate between the heat flux and its time rate of change, which is related to the well-known second-sound effect. Other recent studies have proposed a fractional, non-local thermoelastic model as a particular case of the non-local, integral thermoelasticity introduced in the mid-seventies. In this study the authors introduce a different non-local model of extended irreversible thermodynamics to account for the second-sound effect. A long-range heat flux is defined that involves the integral part of the spatial Marchaud fractional derivative of the temperature field, whereas the second-sound effect is accounted for by introducing the time derivative of the heat flux in the transport equation. It is shown that the proposed model does not suffer from the pathological problems of non-homogeneous boundary conditions. Moreover, the proposed model coalesces with the Povstenko fractional models in unbounded domains.
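
    The transport law this abstract describes can be sketched in symbols. The following is a plausible one-dimensional form assembled from the abstract's ingredients (Cattaneo-type flux relaxation for second sound, plus a Marchaud-type long-range term), not the authors' exact equations; the constants τ, κ and κ_α are assumptions of this sketch.

```latex
% Cattaneo-type relaxation (second sound) with a non-local fractional term:
\tau \,\frac{\partial q}{\partial t} + q
  = -\,\kappa\,\frac{\partial T}{\partial x}
    \;-\; \kappa_{\alpha}\,\bigl(\mathrm{D}^{\alpha} T\bigr)(x),
\qquad 0 < \alpha < 1,

% where D^alpha denotes a (one-sided) Marchaud fractional derivative:
\bigl(\mathrm{D}^{\alpha} T\bigr)(x)
  = \frac{\alpha}{\Gamma(1-\alpha)}
    \int_{0}^{\infty} \frac{T(x) - T(x-\xi)}{\xi^{\,1+\alpha}}\, d\xi .
```

    The time-derivative term interpolates between Fourier conduction (τ = 0) and wave-like heat transport, while the integral term carries the long-range, non-local contribution.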

  8. Method of sound synthesis

    DOEpatents

    Miner, Nadine E.; Caudell, Thomas P.

    2004-06-08

    A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.

  9. Calibration of the R/V Marcus G. Langseth Seismic Array in shallow Cascadia waters using the Multi-Channel Streamer

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Tolstoy, M.; Carton, H. D.

    2013-12-01

    In the summer of 2012, two multi-channel seismic (MCS) experiments, Cascadia Open-Access Seismic Transects (COAST) and Ridge2Trench, were conducted in the offshore Cascadia region. An area of growing environmental concern with active source seismic experiments is the potential impact of the received sound on marine mammals, but data relating to this issue is limited. For these surveys sound level 'mitigation radii' are established for the protection of marine mammals, based on direct arrival modeling and previous calibration experiments. Propagation of sound from seismic arrays can be accurately modeled in deep-water environments, but in shallow and sloped environments the complexity of local geology and bathymetry can make it difficult to predict sound levels as a function of distance from the source array. One potential solution to this problem is to measure the received levels in real-time using the ship's streamer (Diebold et al., 2010), which would allow the dynamic determination of suitable mitigation radii. We analyzed R/V Langseth streamer data collected on the shelf and slope off the Washington coast during the COAST experiment to measure received levels in situ up to 8 km away from the ship. Our analysis shows that water depth and bathymetric features can affect received levels in shallow water environments. The establishment of dynamic mitigation radii based on local conditions may help maximize the safety of marine mammals while also maximizing the ability of scientists to conduct seismic research. With increasing scientific and societal focus on subduction zone environments, a better understanding of shallow water sound propagation is essential for allowing seismic exploration of these hazardous environments to continue. Diebold, J. M., M. Tolstoy, L. Doermann, S. Nooner, S. Webb, and T. J. Crone (2010) R/V Marcus G. Langseth Seismic Source: Modeling and Calibration. Geochemistry, Geophysics, Geosystems, 11, Q12012, doi:10.1029/2010GC003216.
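
    The in-situ received-level measurement described here ultimately reduces to computing an RMS sound pressure level from a calibrated pressure time series. A minimal sketch (not the Langseth processing chain), using the conventional underwater reference pressure of 1 µPa:

```python
import numpy as np

def rms_level_db(pressure_pa, p_ref=1e-6):
    """RMS sound pressure level in dB re 1 uPa (underwater convention)."""
    rms = np.sqrt(np.mean(np.asarray(pressure_pa) ** 2))
    return float(20 * np.log10(rms / p_ref))

# Sanity check: a sine with 1 Pa RMS corresponds to 120 dB re 1 uPa.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
sine = np.sqrt(2) * np.sin(2 * np.pi * 100 * t)   # peak sqrt(2) Pa -> 1 Pa RMS
level = rms_level_db(sine)
```

    Applied in sliding windows to streamer channels at increasing offsets, the same computation yields received level versus range, which is what a dynamic mitigation radius would be based on.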

  10. Integrating terrestrial and marine records of the LGM in McMurdo Sound, Antarctica: implications for grounded ice expansion, ice flow, and deglaciation of the Ross Sea Embayment

    NASA Astrophysics Data System (ADS)

    Christ, A. J.; Marchant, D. R.

    2017-12-01

    During the LGM, grounded glacier ice filled the Ross Embayment and deposited glacial drift on volcanic islands and peninsulas in McMurdo Sound, as well as along coastal regions of the Transantarctic Mountains (TAM), including the McMurdo Dry Valleys and Royal Society Range. The flow geometry and retreat history of this ice remain debated, with contrasting views yielding divergent implications both for the fundamental cause of Antarctic ice expansion and for the interaction and behavior of ice derived from East and West Antarctica during late Quaternary time. We present terrestrial geomorphologic evidence that enables the reconstruction of former ice elevations, ice-flow paths, and ice-marginal environments in McMurdo Sound. Radiocarbon dates of fossil algae interbedded with ice-marginal sediments provide a coherent timeline for local ice retreat. These data are integrated with marine-sediment records and multi-beam data to reconstruct late glacial dynamics of grounded ice in McMurdo Sound and the western Ross Sea. The combined dataset suggests a dominance of ice flow toward the TAM in McMurdo Sound during all phases of glaciation, with thick, grounded ice at or near its maximum extent between 19.6 and 12.3 calibrated thousands of years before present (cal. ka). Our data show no significant advance of locally derived ice from the TAM into McMurdo Sound, consistent with the assertion that Late Pleistocene expansion of grounded ice in McMurdo Sound, and throughout the wider Ross Embayment, occurred in response to lower eustatic sea level and the resulting advance of marine-based outlet glaciers and ice streams (and perhaps also reduced oceanic heat flux), rather than local increases in precipitation and ice accumulation.
Finally, when combined with allied data across the wider Ross Embayment, which show that widespread deglaciation outside McMurdo Sound did not commence until 13.1 ka, the implication is that retreat of grounded glacier ice in the Ross Embayment did not add significantly to sea-level rise during Meltwater Pulse 1a (14.0-14.5 ka).

  11. Relation of sound intensity and accuracy of localization.

    PubMed

    Farrimond, T

    1989-08-01

    Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

  12. The cochlea as a smart structure

    NASA Astrophysics Data System (ADS)

    Elliott, Stephen J.; Shera, Christopher A.

    2012-06-01

    The cochlea is part of the inner ear, and its mechanical response underlies many aspects of our amazingly sensitive and selective hearing. The human cochlea is a coiled tube, with two main fluid chambers running along its length, separated by a 35 mm-long flexible partition that has its own internal dynamics. A dispersive wave can propagate along the cochlea due to the interaction between the inertia of the fluid and the dynamics of the partition. This partition includes about 12 000 outer hair cells, which have different structures, on a micrometre and a nanometre scale, and act both as motional sensors and as motional actuators. The local feedback action of all these cells amplifies the motion inside the inner ear by more than 40 dB at low sound pressure levels. The feedback loops become saturated at higher sound pressure levels, however, so that the feedback gain is reduced, leading to a compression of the dynamic range in the cochlear amplifier. This helps the sensory cells, with a dynamic range of only about 30 dB, to respond to sounds with a dynamic range of more than 120 dB. The active and nonlinear nature of the dynamics within the cochlea gives rise to a number of other phenomena, such as otoacoustic emissions, which can be used as a diagnostic test for hearing problems in newborn children, for example. In this paper we view the mechanical action of the cochlea as a smart structure. In particular, a simplified wave model of the cochlear dynamics is reviewed that represents its essential features. This can be used to predict the motion along the cochlea when the cochlea is passive, at high levels, and also the effect of the cochlear amplifier, at low levels.
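
    The dynamic-range compression described above can be caricatured with a broken-stick input-output function: full amplifier gain at low levels, compressive growth above a knee. This is an illustrative toy, not a cochlear model; the gain, knee and compression slope are assumed numbers.

```python
import numpy as np

def basilar_output_db(spl, gain=40.0, knee=30.0, slope=0.25):
    """Toy broken-stick input-output curve: full gain below the knee,
    compressive growth above it (all parameters illustrative)."""
    spl = np.asarray(spl, dtype=float)
    amplified = spl + gain                          # low-level region
    compressed = knee + gain + slope * (spl - knee) # compressive region
    return np.where(spl < knee, amplified, compressed)

# A 120 dB input range is squeezed into a much smaller output range,
# while low-level sounds still receive the full amplifier gain.
span_in = 120.0
span_out = float(basilar_output_db(120.0) - basilar_output_db(0.0))
low_level_gain = float(basilar_output_db(10.0) - 10.0)
```

    The broken stick is of course a crude stand-in for the smooth, saturating feedback loop the paper describes, but it captures why a sensor with ~30 dB of dynamic range can cover a far wider range of stimulus levels.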

  13. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597
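
    The out-of-phase AM manipulation can be sketched directly: when two noise sources carry complementary envelopes, there are recurring epochs in which one source dominates the mixture, which is exactly where single-source interaural cues remain reliable. The sample rate and modulation rate below are assumed values, not the study's stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur, fm = 16000, 1.0, 4.0          # sample rate (Hz), duration (s), AM rate (Hz)
t = np.arange(int(fs * dur)) / fs

env_a = 0.5 * (1 + np.sin(2 * np.pi * fm * t))           # envelope, phase 0
env_b = 0.5 * (1 + np.sin(2 * np.pi * fm * t + np.pi))   # 180 deg out of phase
src_a = env_a * rng.standard_normal(t.size)              # two independent noises
src_b = env_b * rng.standard_normal(t.size)

# Fraction of instantaneous envelope energy owned by source A: in the
# out-of-phase case it swings between ~0 and ~1, producing "glimpses"
# during which one source's interaural cues dominate.
dominance = env_a / (env_a + env_b + 1e-12)
```

    With in-phase envelopes, by contrast, `dominance` would sit at 0.5 throughout, and no temporal epoch would favor either source.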

  14. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.

  15. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

    A fundamental role of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, their capacity for sound localization remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement toward a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of the gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed and mean path angle to the speaker will also be reported. Results suggest strong localization by the round goby toward a sound source, with some differential sound specificity.

  16. Audio Spatial Representation Around the Body

    PubMed Central

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

    Studies have found that portions of the space around our body are coded differently by our brain. Numerous works have investigated visual and auditory spatial representation, focusing mostly on the spatial representation of stimuli presented at head level, especially in the frontal space. Only a few studies have investigated spatial representation around the entire body and its relationship with motor activity. Moreover, it is still not clear whether the space surrounding us is represented as a unitary dimension or whether it is split up into different portions, differently shaped by our senses and motor activity. To clarify these points, we investigated auditory localization of dynamic and static sounds at different body levels. In order to understand the role of a motor action in auditory space representation, we asked subjects to localize sounds by pointing with the hand or the foot, or by giving a verbal answer. We found that sound localization differed depending on the body part considered. Moreover, a different pattern of response was observed when subjects were asked to respond with an action rather than verbally. These results suggest that the auditory space around our body is split into various spatial portions (front, back, around the chest, and around the foot), which are perceived differently and could be differently modulated by our senses and our actions. PMID:29249999

  17. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
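
    How ITD varies with azimuth for a spherical head can be sketched with the Woodworth approximation (an assumption of this sketch; the paper derives its own ITD model from first principles). The steep midline slope is what makes a uniform just-noticeable difference in ITD translate into the best angular acuity at 0°.

```python
import numpy as np

a, c = 0.0875, 343.0     # assumed head radius (m) and speed of sound (m/s)

def itd_s(azimuth_deg):
    """Woodworth spherical-head ITD approximation for frontal azimuths."""
    th = np.deg2rad(np.asarray(azimuth_deg, dtype=float))
    return (a / c) * (th + np.sin(th))

# Spatial rate of change of the ITD, in microseconds per degree:
# steepest at the midline, shallower toward the side.  A uniform ITD
# just-noticeable difference therefore predicts the smallest angular
# just-noticeable difference at 0 degrees.
az = np.array([0.0, 30.0, 60.0, 85.0])
slope_us_per_deg = (itd_s(az + 1.0) - itd_s(az)) * 1e6
```

    Inverting this mapping, a fixed ITD threshold divided by the local slope gives the predicted angular acuity at each azimuth, reproducing the qualitative midline-to-lateral degradation the abstract describes.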

  18. Flooding dynamics on the lower Amazon floodplain

    NASA Astrophysics Data System (ADS)

    Rudorff, C.; Melack, J. M.; Bates, P. D.

    2013-05-01

    We analyzed the flooding dynamics of a large floodplain lake in the lower reach of the Amazon River for the period 1995 through 2010. Floodplain inundation was simulated using a local hydrological model together with the LISFLOOD-FP model, which combines one-dimensional river routing with two-dimensional overland flow. Accurate representation of floodplain flows and inundation extent depends on the quality of the digital elevation model (DEM). We combined digital topography (derived from the Shuttle Radar Topography Mission) with extensive floodplain echo-sounding data to generate a hydraulically sound DEM. Analysis of daily water balances revealed that the dominant source of inflow alternated seasonally among direct rain and local runoff (October through January), the Amazon River (March through August), and seepage (September). As inflows from the Amazon River increase during the rising limb of the hydrograph, regional floodwaters encounter the floodplain partially inundated from local hydrological inputs. At peak flow the floodplain routes, on average, 2.5% of the total discharge for this reach. The falling limb of the hydrograph coincides with the locally dry period, allowing seepage of water stored in sediments to become a dominant source. The average annual inflow from the Amazon River was 58.8 km3 (SD = 33.5), representing 80% of inputs from all sources, with substantial inter-annual variability. The average annual net export of water from the floodplain to the Amazon River was 7.9 km3 (SD = 2.7).

  19. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  20. Detecting Rotational Superradiance in Fluid Laboratories

    NASA Astrophysics Data System (ADS)

    Cardoso, Vitor; Coutant, Antonin; Richartz, Mauricio; Weinfurtner, Silke

    2016-12-01

    Rotational superradiance was predicted theoretically decades ago, and is chiefly responsible for a number of important effects and phenomenology in black-hole physics. However, rotational superradiance has never been observed experimentally. Here, with the aim of probing superradiance in the lab, we investigate the behavior of sound and surface waves in fluids resting in a circular basin at the center of which a rotating cylinder is placed. We show that with a suitable choice for the material of the cylinder, surface and sound waves are amplified. Two types of instabilities are studied: one sets in whenever superradiant modes are confined near the rotating cylinder and the other, which does not rely on confinement, corresponds to a local excitation of the cylinder. Our findings are experimentally testable in existing fluid laboratories and, hence, offer experimental exploration and comparison of dynamical instabilities arising from rapidly rotating boundary layers in astrophysical as well as in fluid dynamical systems.

  1. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method with Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment demonstrates that Amiet-IMACS localizes the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  2. Spatial hearing in Cope’s gray treefrog: I. Open and closed loop experiments on sound localization in the presence and absence of noise

    PubMed Central

    Caldwell, Michael S.; Bee, Mark A.

    2014-01-01

    The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182

  3. Computational Methods for Nonlinear Dynamics Problems in Solid and Structural Mechanics: Models of Dynamic Frictional Phenomena in Metallic Structures.

    DTIC Science & Technology

    1986-03-31

    Martins, J.A.C. and Campos, L.T. [1986], "Existence and Local Uniqueness of Solutions to Contact Problems in Elasticity with Nonlinear Friction..." ... noisy and troublesome vibrations. If the sound generated by the friction-induced oscillations of violin strings may be the delight of all music lovers... formulation. See Oden and Martins [1985] and Rabier, Martins, Oden and Campos [1986]. It is now simple to show that, for

  4. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    PubMed

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
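
    The ILD cue discussed in this record is straightforward to estimate from the two ear signals as a ratio of RMS levels. A minimal sketch (not the study's analysis pipeline):

```python
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Broadband interaural level difference in dB (positive = left louder)."""
    rms_l = np.sqrt(np.mean(np.square(left)))
    rms_r = np.sqrt(np.mean(np.square(right)))
    return float(20 * np.log10((rms_l + eps) / (rms_r + eps)))

# Toy check: attenuating the right-ear signal by a factor of two
# produces an ILD of about +6 dB.
rng = np.random.default_rng(7)
sig = rng.standard_normal(16000)
ild = ild_db(sig, 0.5 * sig)
```

    Computed per frequency band rather than broadband, the same ratio exposes why natural ILDs are small below about 1500 Hz, where the head provides little acoustic shadow.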

  5. Auditory and visual localization accuracy in young children and adults.

    PubMed

    Martin, Karen; Johnstone, Patti; Hedrick, Mark

    2015-06-01

    This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affect sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults. A mixed experimental design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound on localization accuracy in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year olds. Adults performed better on the sound localization task when the light localization task occurred first.
Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Frequency Dynamics of the First Heart Sound

    NASA Astrophysics Data System (ADS)

    Wood, John Charles

    Cardiac auscultation is a fundamental clinical tool but first heart sound origins and significance remain controversial. Previous clinical studies have implicated resonant vibrations of both the myocardium and the valves. Accordingly, the goals of this thesis were threefold: (1) to characterize the frequency dynamics of the first heart sound, (2) to determine the relative contribution of the myocardium and the valves in determining first heart sound frequency, and (3) to develop new tools for non-stationary signal analysis. A resonant origin for first heart sound generation was tested through two studies in an open-chest canine preparation. Heart sounds were recorded using ultralight acceleration transducers cemented directly to the epicardium. The first heart sound was observed to be non-stationary and multicomponent. The most dominant feature was a powerful, rapidly-rising frequency component that preceded mitral valve closure. Two broadband components were observed; the first coincided with mitral valve closure while the second significantly preceded aortic valve opening. The spatial frequency of left ventricular vibrations was both high and non-stationary which indicated that the left ventricle was not vibrating passively in response to intracardiac pressure fluctuations but suggested instead that the first heart sound is a propagating transient. In the second study, regional myocardial ischemia was induced by left coronary circumflex arterial occlusion. Acceleration transducers were placed on the ischemic and non-ischemic myocardium to determine whether ischemia produced local or global changes in first heart sound amplitude and frequency. The two zones exhibited disparate amplitude and frequency behavior indicating that the first heart sound is not a resonant phenomenon.
To objectively quantify the presence and orientation of signal components, Radon transformation of the time-frequency plane was performed and found to have considerable potential for pattern classification. Radon transformation of the Wigner spectrum (Radon-Wigner transform) was derived to be equivalent to dechirping in the time and frequency domains. Based upon this representation, an analogy between time-frequency estimation and computed tomography was drawn. Cohen's class of time-frequency representations was subsequently shown to result from simple changes in reconstruction filtering parameters. Time-varying filtering, adaptive time-frequency transformation and linear signal synthesis were also performed from the Radon-Wigner representation.
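    The core idea, that a linear chirp traces a straight ridge in the Wigner time-frequency plane (which a Radon projection then collapses into a single peak), can be sketched with a minimal, unsmoothed discrete Wigner-Ville distribution. The chirp parameters are illustrative; this is not the thesis implementation, and with this lag indexing the ridge appears at twice the instantaneous frequency:

```python
import numpy as np

def wigner_ville(x):
    """Minimal discrete Wigner-Ville distribution of an analytic signal.
    Row n is the FFT over the lag variable m of x[n+m] * conj(x[n-m])."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)
        m = np.arange(-mmax, mmax + 1)
        kernel = x[n + m] * np.conj(x[n - m])
        W[n] = np.abs(np.fft.fft(kernel, N))   # instantaneous spectrum at time n
    return W

# Analytic linear chirp: instantaneous frequency f0 + a*n (cycles/sample)
N, f0, a = 128, 0.05, 0.001
n = np.arange(N)
x = np.exp(2j * np.pi * (f0 * n + 0.5 * a * n ** 2))
W = wigner_ville(x)

# The ridge of a linear chirp is a straight line in the time-frequency plane;
# its slope (in bins per sample) should be close to 2*a*N ~ 0.256
ridge = W[32:96].argmax(axis=1)
slope = np.polyfit(np.arange(32, 96), ridge, 1)[0]
```

A Radon transform of `W` would integrate along candidate lines and peak at this slope/offset pair, which is the dechirping equivalence the thesis exploits.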

  7. Towards clinical computed ultrasound tomography in echo-mode: Dynamic range artefact reduction.

    PubMed

    Jaeger, Michael; Frenz, Martin

    2015-09-01

    Computed ultrasound tomography in echo-mode (CUTE) allows imaging the speed of sound inside tissue using hand-held pulse-echo ultrasound. This technique is based on measuring the changing local phase of beamformed echoes when changing the transmit beam steering angle. Phantom results have shown a spatial resolution and contrast that could qualify CUTE as a promising novel diagnostic modality in combination with B-mode ultrasound. Unfortunately, the large intensity range of several tens of dB that is encountered in clinical images poses difficulties to echo phase tracking and results in severe artefacts. In this paper we propose a modification to the original technique by which more robust echo tracking can be achieved, and we demonstrate in phantom experiments that dynamic range artefacts are largely eliminated. Dynamic range artefact reduction also allowed for the first time a clinical implementation of CUTE with sufficient contrast to reproducibly distinguish the different speed of sound in different tissue layers of the abdominal wall and the neck. Copyright © 2015. Published by Elsevier B.V.

  8. Auditory Localization: An Annotated Bibliography

    DTIC Science & Technology

    1983-11-01

    transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources...important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical

  9. Hearing in three dimensions: Sound localization

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1990-01-01

    The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of auditory experience. For over a century scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.

  10. Effects of high sound speed confiners on ANFO detonations

    NASA Astrophysics Data System (ADS)

    Kiyanda, Charles; Jackson, Scott; Short, Mark

    2011-06-01

    The interaction between high explosive (HE) detonations and high sound speed confiners, where the confiner sound speed exceeds the HE's detonation speed, has not been thoroughly studied. The subsonic nature of the flow in the confiner allows stress waves to travel ahead of the main detonation front and influence the upstream HE state. The interaction between the detonation wave and the confiner is also no longer a local interaction, so that the confiner thickness now plays a significant role in the detonation dynamics. We report here on larger scale experiments in which a mixture of ammonium nitrate and fuel oil (ANFO) is detonated in aluminium confiners with varying charge diameter and confiner thickness. The results of these large-scale experiments are compared with previous large-scale ANFO experiments in cardboard, as well as smaller-scale aluminium confined ANFO experiments, to characterize the effects of confiner thickness.

  11. On the estimation of sound speed in two-dimensional Yukawa fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semenov, I. L., E-mail: Igor.Semenov@dlr.de; Thomas, H. M.; Khrapak, S. A.

    2015-11-15

    The longitudinal sound speed in two-dimensional Yukawa fluids is estimated using the conventional hydrodynamic expression supplemented by appropriate thermodynamic functions proposed recently by Khrapak et al. [Phys. Plasmas 22, 083706 (2015)]. In contrast to the existing approaches, such as quasi-localized charge approximation (QLCA) and molecular dynamics simulations, our model provides a relatively simple estimate for the sound speed over a wide range of parameters of interest. At strong coupling, our results are shown to be in good agreement with the results obtained using the QLCA approach and those derived from the phonon spectrum for the triangular lattice. On the other hand, our model is also expected to remain accurate at moderate values of the coupling strength. In addition, the obtained results are used to discuss the influence of the strong coupling effects on the adiabatic index of two-dimensional Yukawa fluids.

  12. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  13. Potential sound production by a deep-sea fish

    NASA Astrophysics Data System (ADS)

    Mann, David A.; Jarvis, Susan M.

    2004-05-01

    Swimbladder sonic muscles of deep-sea fishes were described over 35 years ago. Until now, no recordings of probable deep-sea fish sounds have been published. A sound likely produced by a deep-sea fish has been isolated and localized from an analysis of acoustic recordings made at the AUTEC test range in the Tongue of the Ocean, Bahamas, from four deep-sea hydrophones. This sound is typical of a fish sound in that it is pulsed and relatively low frequency (800-1000 Hz). Using time-of-arrival differences, the sound was localized to 548-696-m depth, where the bottom was 1620 m. The ability to localize this sound in real time on the hydrophone range provides a great advantage for identifying the sound producer using a remotely operated vehicle.
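    Localization from time-of-arrival differences of the kind described can be sketched as a small Gauss-Newton least-squares solver. The four-hydrophone geometry, sound speed, and source position below are hypothetical, chosen only to echo the depths quoted in the abstract:

```python
import numpy as np

def tdoa_localize(sensors, tdoas, c, x0, iters=50):
    """Gauss-Newton least squares on time-difference-of-arrival residuals.
    sensors: (M, 3) receiver positions; tdoas: (M-1,) delays vs. sensor 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(sensors - x, axis=1)        # source-to-sensor ranges
        resid = (d[1:] - d[0]) / c - tdoas             # residuals in seconds
        u = (x - sensors) / d[:, None]                 # unit vectors sensor -> source
        J = (u[1:] - u[0]) / c                         # Jacobian of the residuals
        x = x - np.linalg.lstsq(J, resid, rcond=None)[0]
    return x

# Hypothetical geometry: four bottom hydrophones, nominal sound speed 1500 m/s,
# "true" source at ~600 m depth (cf. the 548-696 m estimate in the abstract)
c = 1500.0
sensors = np.array([[0, 0, -1600], [2000, 0, -1450],
                    [0, 2000, -1550], [2000, 2000, -1300]], dtype=float)
src = np.array([900.0, 1100.0, -600.0])
d = np.linalg.norm(sensors - src, axis=1)
tdoas = (d[1:] - d[0]) / c                             # noise-free synthetic TDOAs
est = tdoa_localize(sensors, tdoas, c, x0=[1000.0, 1000.0, -700.0])
```

With noise-free delays and a reasonable starting guess the iteration recovers the source; in practice depth is the weakly observed coordinate when the receivers lie near a common plane, which is why ranges like 548-696 m are reported as intervals.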

  14. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    PubMed Central

    Kamminga, Jacob; Le, Duc; Havinga, Paul

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
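    The outlier-removal step, dropping time-difference-of-arrival values corrupted by uncertain input latency, can be illustrated with a standard median-absolute-deviation test. This is a generic robust filter, not the exact CLASS subset rule, and the sample values are invented:

```python
import numpy as np

def reject_outliers(tdoas, thresh=3.5):
    """Keep TDOA measurements whose modified z-score (based on the median
    absolute deviation) is below `thresh`; a common robust outlier test."""
    t = np.asarray(tdoas, dtype=float)
    med = np.median(t)
    mad = np.median(np.abs(t - med))       # robust spread estimate
    z = 0.6745 * (t - med) / mad           # modified z-score
    return t[np.abs(z) < thresh]

# Five consistent delays (~10 ms) plus one latency glitch at 41 ms
samples = [0.0102, 0.0099, 0.0101, 0.0098, 0.0410, 0.0100]
clean = reject_outliers(samples)           # the 0.0410 s glitch is removed
```

Median-based statistics are preferred here because a single large latency error would badly skew a mean-and-standard-deviation test on so few samples.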

  15. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based just on acoustics. It is a multisystem process.

  16. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4- to 5-year-old children and adults. Lead-lag delays ranged from 5 to 100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  17. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

    To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180° arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test and sound source localization quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
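    The root mean square error used to quantify localization here is simply the RMS of the response-minus-target angles across trials. A minimal sketch with invented trial data (not values from the study):

```python
import numpy as np

def rms_error(responses, targets):
    """Root mean square localization error, in degrees, across trials."""
    r = np.asarray(responses, dtype=float)
    t = np.asarray(targets, dtype=float)
    return float(np.sqrt(np.mean((r - t) ** 2)))

# Hypothetical response/target pairs on a frontal arc (degrees azimuth)
targets   = [-90, -60, -30, 0, 30, 60, 90]
responses = [-75, -60, -45, 0, 15, 75, 90]
err = rms_error(responses, targets)        # ~ 11.3 degrees
```

Because errors are squared before averaging, a few large front-back style confusions dominate the score, which is why RMS error separates CI groups from NH listeners so sharply.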

  18. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  19. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    PubMed

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.

  20. Sound-localization experiments with barn owls in virtual space: influence of broadband interaural level difference on head-turning behavior.

    PubMed

    Poganiatz, I; Wagner, H

    2001-04-01

    Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.

  1. Ultrasound-contrast-agent dispersion and velocity imaging for prostate cancer localization.

    PubMed

    van Sloun, Ruud Jg; Demi, Libertario; Postema, Arnoud W; de la Rosette, Jean Jmch; Wijkstra, Hessel; Mischi, Massimo

    2017-01-01

    Prostate cancer (PCa) is the second-leading cause of cancer death in men; however, reliable tools for detection and localization are still lacking. Dynamic Contrast Enhanced UltraSound (DCE-US) is a diagnostic tool that is suitable for analysis of vascularization, by imaging an intravenously injected microbubble bolus. The localization of angiogenic vascularization associated with the development of tumors is of particular interest. Recently, methods for the analysis of the bolus convective dispersion process have shown promise to localize angiogenesis. However, independent estimation of dispersion was not possible due to the ambiguity between convection and dispersion. Therefore, in this study we propose a new method that considers the vascular network as a dynamic linear system, whose impulse response can be locally identified. To this end, model-based parameter estimation is employed, that permits extraction of the apparent dispersion coefficient (D), velocity (v), and Péclet number (Pe) of the system. Clinical evaluation using data recorded from 25 patients shows that the proposed method can be applied effectively to DCE-US, and is able to locally characterize the hemodynamics, yielding promising results (receiver-operating-characteristic curve area of 0.84) for prostate cancer localization. Copyright © 2016 Elsevier B.V. All rights reserved.
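    The identification idea, treating the vasculature between two points as a linear system and extracting a dispersion coefficient D, velocity v, and Péclet number Pe from its impulse response, can be sketched with an illustrative advection-dispersion kernel and moment-based estimates. The kernel, the parameter values, and the high-Péclet moment approximations are all assumptions for illustration; they are not the authors' model-based fitting procedure:

```python
import numpy as np

# Illustrative advection-dispersion impulse response at travel distance L
# (a "local density random walk"-style kernel, assumed for this sketch)
def impulse_response(t, L, v, D):
    return np.exp(-(L - v * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

L, v_true, D_true = 5e-3, 2e-3, 1e-6       # m, m/s, m^2/s  (Pe = v*L/D = 10)
t = np.linspace(0.1, 10.0, 2000)           # s
c = impulse_response(t, L, v_true, D_true)

# Moment-based identification: at high Peclet number the mean transit time
# approaches L/v and the temporal variance approaches 2*D*L/v**3
w = c / c.sum()                            # normalized transit-time distribution
t_mean = (w * t).sum()
t_var = (w * (t - t_mean) ** 2).sum()
v_est = L / t_mean
D_est = t_var * v_est ** 3 / (2 * L)
peclet = v_est * L / D_est
```

At Pe = 10 the simple moment formulas recover v and D only approximately (the finite-Pe corrections bias both low), but their ratio in Pe is much less sensitive, one reason a dimensionless transport number is an attractive imaging parameter.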

  2. Theory of spin and lattice wave dynamics excited by focused laser pulses

    NASA Astrophysics Data System (ADS)

    Shen, Ka; Bauer, Gerrit E. W.

    2018-06-01

    We develop a theory of spin wave dynamics excited by ultrafast focused laser pulses in a magnetic film. We take into account both the volume and surface spin wave modes in the presence of applied, dipolar and magnetic anisotropy fields and include the dependence on laser spot exposure size and magnetic damping. We show that the sound waves generated by local heating by an ultrafast focused laser pulse can excite a wide spectrum of spin waves (on top of a dominant magnon–phonon contribution). Good agreement with recent experiments supports the validity of the model.

  3. A telescopic cinema sound camera for observing high altitude aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Slater, Dan

    2014-09-01

    Rockets and other high altitude aerospace vehicles produce interesting visual and aural phenomena that can be remotely observed from long distances. This paper describes a compact, passive and covert remote sensing system that can produce high resolution sound movies at >100 km viewing distances. The telescopic high resolution camera is capable of resolving and quantifying space launch vehicle dynamics including plume formation, staging events and payload fairing jettison. Flight vehicles produce sounds and vibrations that modulate the local electromagnetic environment. These audio frequency modulations can be remotely sensed by passive optical and radio wave detectors. Acousto-optic sensing methods were primarily used but an experimental radioacoustic sensor using passive micro-Doppler radar techniques was also tested. The synchronized combination of high resolution flight vehicle imagery with the associated vehicle sounds produces a cinema-like experience that is useful in both an aerospace engineering and a Hollywood film production context. Examples of visual, aural and radar observations of the first SpaceX Falcon 9 v1.1 rocket launch are shown and discussed.

  4. Granular metamaterials for vibration mitigation

    NASA Astrophysics Data System (ADS)

    Gantzounis, G.; Serra-Garcia, M.; Homma, K.; Mendoza, J. M.; Daraio, C.

    2013-09-01

    Acoustic metamaterials that allow low-frequency band gaps are interesting for many practical engineering applications, where vibration control and sound insulation are necessary. In most prior studies, the mechanical response of these structures has been described using linear continuum approximations. In this work, we experimentally and theoretically address the formation of low-frequency band gaps in locally resonant granular crystals, where the dynamics of the system is governed by discrete equations. We investigate the quasi-linear behavior of such structures. The analysis shows that a stopband can be introduced at about one octave lower frequency than in materials without local resonances. Broadband and multi-frequency stopband characteristics can also be achieved by strategically tailoring the non-uniform local resonance parameters.

  5. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  6. Accurate Sound Localization in Reverberant Environments is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    PubMed Central

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-01-01

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. PMID:19376072

  7. Bone Conduction: Anatomy, Physiology, and Communication

    DTIC Science & Technology

    2007-05-01

    …7.2 Human Localization Capabilities… The main functions of the pinna are to direct incoming sound toward the EAC and to aid in sound localization. Some animals (e.g., dogs) can move their pinnae to aid in sound localization, but humans do not typically have this ability. People who may possess the ability to move their pinnae do…

  8. Dynamics of metastable breathers in nonlinear chains in acoustic vacuum

    NASA Astrophysics Data System (ADS)

    Sen, Surajit; Mohan, T. R. Krishna

    2009-03-01

    The study of the dynamics of one-dimensional chains with both harmonic and nonlinear interactions, as in the Fermi-Pasta-Ulam and related problems, has played a central role in efforts to identify the broad consequences of nonlinearity in these systems. Nevertheless, little is known about the dynamical behavior of purely nonlinear chains where there is a complete absence of the harmonic term, and hence sound propagation is not admissible, i.e., under conditions of “acoustic vacuum.” Here we study the dynamics of highly localized excitations, or breathers, which are known to be initiated by the quasistatic stretching of the bonds between adjacent particles. We show via detailed particle-dynamics-based studies that many low-energy pulses also form in the vicinity of the perturbation, and the breathers that form are “fragile” in the sense that they can be easily delocalized by scattering events in the system. We show that the localized excitations eventually disperse, allowing the system to attain an equilibrium-like state that is realizable in acoustic vacuum. We conclude with a discussion of how the dynamics is affected by the presence of acoustic oscillations.

  9. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

    Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (NH; 5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle (MAA). Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15 loudspeakers); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29°, was significantly better. Within the BICI group, RMS errors were smaller in the bilateral vs. unilateral listening condition in 11/21 children, indicating a bilateral benefit.
There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615

  10. Dimensional analysis of acoustically propagated signals

    NASA Technical Reports Server (NTRS)

    Hansen, Scott D.; Thomson, Dennis W.

    1993-01-01

    Traditionally, long term measurements of atmospherically propagated sound signals have consisted of time series of multiminute averages. Only recently have continuous measurements with temporal resolution corresponding to turbulent time scales been available. With modern digital data acquisition systems we now have the capability to simultaneously record both acoustical and meteorological parameters with sufficient temporal resolution to allow us to examine in detail the relationships between fluctuating sound and the meteorological variables, particularly wind and temperature, which locally determine the acoustic refractive index. The atmospheric acoustic propagation medium can be treated as a nonlinear dynamical system, a kind of signal processor whose innards depend on thermodynamic and turbulent processes in the atmosphere. Indeed, the atmosphere is an inherently nonlinear dynamical system; one simple model of atmospheric convection, the Lorenz system, may well be the most widely studied of all dynamical systems. In this paper we report some results of applying methods used to characterize nonlinear dynamical systems to study the characteristics of acoustical signals propagated through the atmosphere. For example, we investigate whether or not it is possible to parameterize signal fluctuations in terms of fractal dimensions. For time series one such parameter is the limit capacity dimension. Nicolis and Nicolis were among the first to use this kind of method to study the properties of low-dimensional global attractors.
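
    The limit capacity dimension mentioned above belongs to the same family of fractal-dimension estimates as the Grassberger-Procaccia correlation dimension. A minimal sketch of the latter (using a synthetic sine series, not the authors' acoustic data) shows the general recipe: delay-embed the series, then read the dimension off as the slope of log C(r) versus log r:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a 1-D series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim=3, tau=1):
    """Grassberger-Procaccia estimate: slope of log C(r) vs. log r."""
    emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
    diff = emb[:, None, :] - emb[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(len(emb), k=1)]
    radii = np.logspace(np.log10(np.percentile(dists, 5)),
                        np.log10(np.percentile(dists, 50)), 10)
    c = np.array([(dists < r).mean() for r in radii])  # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

# A noise-free periodic signal traces a closed curve in embedding space,
# so its estimated dimension should come out near 1.
t = np.linspace(0, 40 * np.pi, 800)
dim_est = correlation_dimension(np.sin(t), dim=3, tau=10)
```

    For a chaotic series such as a Lorenz-system coordinate, the same estimator would instead converge toward the attractor's fractional dimension as the embedding dimension is increased.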

  11. Embedded System Implementation of Sound Localization in Proximal Region

    NASA Astrophysics Data System (ADS)

    Iwanaga, Nobuyuki; Matsumura, Tomoya; Yoshida, Akihiro; Kobayashi, Wataru; Onoye, Takao

    A sound localization method for the proximal region is proposed, based on a low-cost 3D sound localization algorithm that uses head-related transfer functions (HRTFs). The auditory parallax model is applied to the current algorithm so that more accurate HRTFs can be used for sound localization in the proximal region. In addition, head-shadowing effects based on a rigid-sphere model are reproduced in the proximal region by means of a second-order IIR filter. A subjective listening test demonstrates the effectiveness of the proposed method. An embedded system implementation of the proposed method is also described, showing that it improves sound effects in the proximal region with only a 5.1% increase in memory capacity and an 8.3% increase in computational cost.
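
    The record does not give the rigid-sphere filter coefficients, but the general shape of such a head-shadow approximation can be sketched with a generic second-order IIR (biquad) low-pass from the standard audio-EQ cookbook formulas; the 1 kHz cutoff here is an arbitrary placeholder, not a value from the paper:

```python
import numpy as np

def biquad_lowpass(f0, fs, q=0.707):
    """RBJ-cookbook biquad low-pass coefficients (a crude stand-in for a
    rigid-sphere head-shadow filter; f0 is a hypothetical cutoff)."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    cw = np.cos(w0)
    b = np.array([(1 - cw) / 2, 1 - cw, (1 - cw) / 2])
    a = np.array([1 + alpha, -2 * cw, 1 - alpha])
    return b / a[0], a / a[0]

def filter_iir(b, a, x):
    """Direct-form I second-order IIR filtering, sample by sample."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y

fs = 16000
t = np.arange(int(0.25 * fs)) / fs
b, a = biquad_lowpass(1000, fs)                    # hypothetical cutoff
low = filter_iir(b, a, np.sin(2 * np.pi * 200 * t))   # passband tone
high = filter_iir(b, a, np.sin(2 * np.pi * 6000 * t)) # shadowed tone
```

    The filtered contralateral signal keeps low frequencies (which diffract around the head) while strongly attenuating high frequencies, which is the qualitative behavior a head-shadow filter must reproduce.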

  12. Adjustment of interaural time difference in head related transfer functions based on listeners' anthropometry and its effect on sound localization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi

    2005-04-01

    Because the transfer functions governing subjective sound localization (HRTFs) show strong individuality, sound localization systems based on synthesis of HRTFs require suitable HRTFs for individual listeners. However, it is impractical to obtain HRTFs for all listeners by measurement. Improving sound localization by adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry might be a practical alternative. This study first developed a new method to estimate interaural time differences (ITDs) using HRTFs. Then correlations between ITDs and anthropometric parameters were analyzed using the canonical correlation method. Results indicated that parameters relating to head size and to shoulder and ear positions are significant. Consequently, an attempt was made to express ITDs in terms of a listener's anthropometric data. In this process, the change of ITDs as a function of azimuth angle was parameterized as a sum of sine functions. The parameters were then analyzed using multiple regression analysis, in which the anthropometric parameters served as explanatory variables. The predicted, or individualized, ITDs were installed in the non-individualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
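
    The sum-of-sines parameterization of ITD versus azimuth can be illustrated with a small least-squares fit. Since the paper's measured ITDs are not available, synthetic values from the classic Woodworth rigid-sphere approximation stand in for data (that formula is not from the paper, only a plausible test signal):

```python
import numpy as np

# Synthetic "measured" ITDs from the Woodworth rigid-sphere approximation
# (head radius r in meters, speed of sound c in m/s).
r, c = 0.0875, 343.0
az = np.radians(np.arange(-90, 91, 5))        # azimuth grid, radians
itd = (r / c) * (az + np.sin(az))             # ITD in seconds

# Parameterize ITD as a sum of sine functions of azimuth, as described in
# the abstract, and solve for the coefficients by ordinary least squares.
basis = np.column_stack([np.sin((k + 1) * az) for k in range(3)])
coef, *_ = np.linalg.lstsq(basis, itd, rcond=None)
rms_err = np.sqrt(np.mean((itd - basis @ coef) ** 2))
```

    In the paper, the fitted sine coefficients (rather than raw ITD curves) are then regressed on anthropometric parameters, so a new listener's ITD curve can be predicted from body measurements alone.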

  13. A Functional Neuroimaging Study of Sound Localization: Visual Cortex Activity Predicts Performance in Early-Blind Individuals

    PubMed Central

    Gougoux, Frédéric; Zatorre, Robert J; Lassonde, Maryse; Voss, Patrice

    2005-01-01

    Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life. PMID:15678166

  14. Adaptation in sound localization processing induced by interaural time difference in amplitude envelope at high frequencies.

    PubMed

    Kawashima, Takayuki; Sato, Takao

    2012-01-01

    When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand the determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high-frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude-modulated (AM) sounds presented at high frequencies, and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically shifted the perceived position of test sounds in the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; no corresponding significant effect was observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.
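
    The key stimulus manipulation (an ITD carried only by the amplitude envelope, not the carrier) can be sketched as follows; the carrier and modulator frequencies are hypothetical choices, not the paper's exact stimuli:

```python
import numpy as np

fs = 48000
t = np.arange(int(0.1 * fs)) / fs
itd = 600e-6                  # +600 us envelope ITD, as in Experiment 1
fc, fm = 4000.0, 128.0        # hypothetical carrier / modulator frequencies

# The carrier is identical in both channels; only the raised-cosine
# amplitude envelope is delayed in the right channel, so the interaural
# difference lives purely in the envelope.
env_l = 0.5 * (1 - np.cos(2 * np.pi * fm * t))
env_r = 0.5 * (1 - np.cos(2 * np.pi * fm * (t - itd)))
left = env_l * np.sin(2 * np.pi * fc * t)
right = env_r * np.sin(2 * np.pi * fc * t)

# Cross-correlating the two envelopes recovers the imposed delay.
lag = np.argmax(np.correlate(env_r, env_l, "full")) - (len(t) - 1)
lag_sec = lag / fs
```

    A fine-structure ITD of 600 µs at a 4 kHz carrier would be ambiguous (several carrier cycles), which is why envelope-only ITDs are the usable cue at high frequencies.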

  15. Underwater hearing and sound localization with and without an air interface.

    PubMed

    Shupak, Avi; Sharoni, Zohara; Yanir, Yoav; Keynan, Yoav; Alfie, Yechezkel; Halpern, Pinchas

    2005-01-01

    Hypothesis: Underwater hearing acuity and sound localization are improved by the presence of an air interface around the pinnae and inside the external ear canals. Hearing threshold and the ability to localize sound sources are reduced underwater. The resonance frequency of the external ear is lowered when the external ear canal is filled with water, and the impedance-matching ability of the middle ear is significantly reduced due to elevation of the ambient pressure, the water-mass load on the tympanic membrane, and the addition of a fluid-air interface during submersion. Sound lateralization on land is largely explained by the mechanisms of interaural intensity differences and interaural temporal or phase differences. During submersion, these differences are largely lost due to the increase in underwater sound velocity and cancellation of the head's acoustic shadow effect because of the similarity between the impedance of the skull and the surrounding water. Ten scuba divers wearing a regular opaque face mask or an opaque ProEar 2000 (Safe Dive, Ltd., Hofit, Israel) mask that enables the presence of air at ambient pressure in and around the ear made a dive to a depth of 3 m in the open sea. Four underwater speakers arranged on the horizontal plane at 90-degree intervals and at a distance of 5 m from the diver were used for testing pure-tone hearing thresholds (PTHT), the reception threshold for the recorded sound of a rubber-boat engine, and sound localization. For sound localization, the sound of the rubber boat's engine was randomly delivered by one speaker at a time at 40 dB HL above the recorded sound of a rubber-boat engine, and the diver was asked to point to the sound source. The azimuth was measured by the diver's companion using a navigation board. Underwater PTHT with both masks were significantly higher for frequencies of 250 to 6000 Hz when compared with the thresholds on land (p < 0.0001). 
No differences were found in the PTHT or the reception threshold for the recorded sound of a rubber-boat engine for dry or wet ear conditions. There was no difference in the sound localization error between the regular mask and the ProEar 2000 mask. The presence of air around the pinna and inside the external ear canal did not improve underwater hearing sensitivity or sound localization. These results support the argument that bone conduction plays the main role in underwater hearing.

  16. Geometric Constraints on Human Speech Sound Inventories

    PubMed Central

    Dunbar, Ewan; Dupoux, Emmanuel

    2016-01-01

    We investigate the idea that the languages of the world have developed coherent sound systems in which having one sound increases or decreases the chances of having certain other sounds, depending on shared properties of those sounds. We investigate the geometries of sound systems that are defined by the inherent properties of sounds. We document three typological tendencies in sound system geometries: economy, a tendency for the differences between sounds in a system to be definable on a relatively small number of independent dimensions; local symmetry, a tendency for sound systems to have relatively large numbers of pairs of sounds that differ only on one dimension; and global symmetry, a tendency for sound systems to be relatively balanced. The finding of economy corroborates previous results; the two symmetry properties have not been previously documented. We also investigate the relation between the typology of inventory geometries and the typology of individual sounds, showing that the frequency distribution with which individual sounds occur across languages works in favor of both local and global symmetry. PMID:27462296

  17. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies provided data on multi-sensory interaction and audio-visual synchronization timing; their results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, comprising analysis, parameterization, synthesis, and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
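
    The analysis/synthesis phases of any wavelet-based sound model rest on a perfect-reconstruction transform pair. The dissertation's actual wavelets and parameterization are not given here; a single-level Haar transform is the simplest illustration of the idea:

```python
import numpy as np

def haar_analysis(x):
    """One level of the Haar wavelet transform: approximation + detail."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_synthesis(approx, detail):
    """Invert one Haar level; a sound model would modify or re-parameterize
    the coefficients between analysis and synthesis."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

sig = np.random.default_rng(2).normal(size=64)   # stand-in "sound" frame
approx, detail = haar_analysis(sig)
rec = haar_synthesis(approx, detail)             # perfect reconstruction
```

    A parameterized model would store statistics of the coefficients (e.g., per-band energies of the stochastic components) rather than the coefficients themselves, then resynthesize novel instances in real time.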

  18. Dynamic Assessment of Phonological Awareness for Children with Speech Sound Disorders

    ERIC Educational Resources Information Center

    Gillam, Sandra Laing; Ford, Mikenzi Bentley

    2012-01-01

    The current study was designed to examine the relationships between performance on a nonverbal phoneme deletion task administered in a dynamic assessment format with performance on measures of phoneme deletion, word-level reading, and speech sound production that required verbal responses for school-age children with speech sound disorders (SSDs).…

  19. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to the sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, by integrating audition, vision, and motor control, attains sound source tracking in a variety of conditions.

  20. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    PubMed

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  1. Perception of music dynamics in concert hall acoustics.

    PubMed

    Pätynen, Jukka; Lokki, Tapio

    2016-11-01

    Dynamics is one of the principal means of expressivity in Western classical music. Still, preceding research on room acoustics has mostly neglected the contribution of music dynamics to the acoustic perception. This study investigates how the different concert hall acoustics influence the perception of varying music dynamics. An anechoic orchestra signal, containing a step in music dynamics, was rendered in the measured acoustics of six concert halls at three seats in each. Spatial sound was reproduced through a loudspeaker array. By paired comparison, naive subjects selected the stimuli that they considered to change more during the music. Furthermore, the subjects described their foremost perceptual criteria for each selection. The most distinct perceptual factors differentiating the rendering of music dynamics between halls include the dynamic range, and varying width of sound and reverberance. The results confirm the hypothesis that the concert halls render the performed music dynamics differently, and with various perceptual aspects. The analysis against objective room acoustic parameters suggests that the perceived dynamic contrasts are pronounced by acoustics that provide stronger sound and more binaural incoherence by a lateral sound field. Concert halls that enhance the dynamics have been found earlier to elicit high subjective preference.

  2. Monaural Sound Localization Revisited

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1997-01-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  3. Monaural sound localization revisited.

    PubMed

    Wightman, F L; Kistler, D J

    1997-02-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  4. Sound Source Localization Using Non-Conformal Surface Sound Field Transformation Based on Spherical Harmonic Wave Decomposition

    PubMed Central

    Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie

    2017-01-01

    Spherical microphone arrays have received increasing attention for their ability to locate a sound source at an arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. In the conventional sound field transformation based on the generalized Fourier transform, the reconstruction surface and holography surface are conformal surfaces. When the sound source is on a cylindrical surface, it is difficult to locate by using a spherical conformal transform. This paper proposes a non-conformal sound field transformation that constructs a transfer matrix based on spherical harmonic wave decomposition, which can transform a spherical surface into a cylindrical surface by using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, a sound source localization experiment using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application range of the spherical array is extended, and its localization ability is improved. PMID:28489065
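
    The building block of such methods, projecting array pressures onto spherical harmonics, can be sketched at order 1. This is not the paper's transfer-matrix construction; the pressure model, array size, and source direction below are all hypothetical, and the dipole (order-1) coefficients simply recover the source direction:

```python
import numpy as np

rng = np.random.default_rng(0)
# Roughly uniform random microphone directions on the unit sphere, standing
# in for the randomly and uniformly distributed spherical-array elements.
mics = rng.normal(size=(256, 3))
mics /= np.linalg.norm(mics, axis=1, keepdims=True)

src = np.array([0.6, 0.8, 0.0])        # true source direction (unit vector)
# Toy pressure model: magnitude decays with angle away from the source.
p = np.exp(3.0 * (mics @ src))

def real_sh_order1(d):
    """Real spherical harmonics Y00, Y1,-1, Y10, Y11 for unit vectors d."""
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    c0 = np.sqrt(1.0 / (4 * np.pi))
    c1 = np.sqrt(3.0 / (4 * np.pi))
    return np.column_stack([c0 * np.ones_like(x), c1 * y, c1 * z, c1 * x])

# Least-squares projection of the array pressures onto the harmonics; the
# three dipole coefficients point toward the source direction.
coef, *_ = np.linalg.lstsq(real_sh_order1(mics), p, rcond=None)
dipole = np.array([coef[3], coef[1], coef[2]])      # back to (x, y, z)
est = dipole / np.linalg.norm(dipole)
err_deg = np.degrees(np.arccos(np.clip(est @ src, -1.0, 1.0)))
```

    The full method instead assembles these harmonic coefficients into a transfer matrix that re-radiates the decomposed field onto a non-conformal (cylindrical) reconstruction surface.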

  5. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency-oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ. The two ears are contained in one air sac and are connected by a cuticular bridge that has a flexible spring-like structure at its center. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and I review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons. 
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic walking behavior in Ormia ochracea. I also quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.

  6. Localization of sound sources in a room with one microphone

    NASA Astrophysics Data System (ADS)

    Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre

    2017-08-01

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the differences between the signals received at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in the room that are occupied by a source. What is especially interesting about our solution is that it localizes sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
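
    The sparse-recovery core of this idea can be sketched in a few lines. The actual sensing matrix would be built from the known room geometry and transfer function; random columns stand in for it here, and a single active voxel is recovered by the correlation step that begins orthogonal matching pursuit:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_voxels = 256, 40
# Hypothetical sensing matrix: column j holds the known room response from
# voxel j to the single microphone (random placeholders, normalized).
A = rng.normal(size=(n_samples, n_voxels))
A /= np.linalg.norm(A, axis=0)

true_voxel = 17
# Observation: one occupied voxel plus a little measurement noise.
y = 2.0 * A[:, true_voxel] + 0.01 * rng.normal(size=n_samples)

# One-sparse recovery: the occupied voxel maximizes correlation between the
# observation and its column (the first step of orthogonal matching pursuit).
est_voxel = int(np.argmax(np.abs(A.T @ y)))
```

    Recovery succeeds when the columns of the sensing matrix are sufficiently incoherent, which is exactly the design constraint the paper's guidelines on the sensing matrix address.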

  7. How does experience modulate auditory spatial processing in individuals with blindness?

    PubMed

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience modulates auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while they performed a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify from which of 15 locations the sound had been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, accuracy on sound localization correlated only with BOLD responses in the right middle occipital gyrus among their early-onset counterparts. The findings support the notion that early-onset blind individuals rely more on the occipital areas, as a result of cross-modal plasticity, for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas, which subserve visuospatial working memory.

  8. [Application of the computer-based respiratory sound analysis system based on Mel-frequency cepstral coefficient and dynamic time warping in healthy children].

    PubMed

    Yan, W Y; Li, L; Yang, Y G; Lin, X L; Wu, J Z

    2016-08-01

    We designed a computer-based respiratory sound analysis system to identify normal lung sounds in children, and set out to verify its validity. First, we downloaded standard lung sounds from a network database (website: http://www.easyauscultation.com/lung-sounds-reference-guide) and recorded 3 samples of abnormal lung sounds (rhonchi, wheeze, and crackles) from three patients of the Department of Pediatrics, the First Affiliated Hospital of Xiamen University. We regarded these lung sounds as "reference lung sounds". The "test lung sounds" were recorded from 29 children from the Kindergarten of Xiamen University. We recorded lung sounds with a portable electronic stethoscope, and valid lung sounds were selected by manual identification. We introduced Mel-frequency cepstral coefficients (MFCC) to extract lung sound features and dynamic time warping (DTW) for signal classification. We had 39 reference lung sounds and recorded 58 test lung sounds. The system was run on the 58 test lung sounds and identified 52 correctly and 6 incorrectly, an accuracy of 89.7%. Based on MFCC and DTW, our computer-based respiratory sound analysis system can effectively identify healthy lung sounds of children (accuracy 89.7%), demonstrating the reliability of the lung sound analysis system.
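
    The DTW matching step named in this abstract can be sketched in a few lines. This is the generic textbook DTW recurrence, not the authors' implementation, and the 1-D toy sequences below stand in for the MFCC frame series a real system would compare.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed alignment moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = [0.0, 1.0, 2.0, 1.0, 0.0]          # a "reference" sequence
stretched = [0.0, 1.0, 1.0, 2.0, 1.0, 0.0]    # same shape, warped in time
flat = [2.0, 2.0, 2.0, 2.0, 2.0]              # a different sequence

# DTW absorbs the time warp (distance 0) but not the shape difference.
print(dtw_distance(template, stretched), dtw_distance(template, flat))
```

    Classification then assigns each test recording the label of its nearest reference sound under this distance, which is how the abstract's reference/test split is used.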

  9. Shedding Synchrotron Light on a Puzzle of Glasses

    ScienceCinema

    Chumakov, Aleksandr [European Synchrotron Radiation Facility, Grenoble, France

    2017-12-09

    The vibrational dynamics of glasses remains a subject of controversy. In particular, the density of vibrational states (DOS) reveals an excess of states above the Debye model called the "boson peak." Although this feature, universal to all glasses, has been known for more than 35 years, the nature of the boson peak is still not understood. The application of nuclear inelastic scattering via synchrotron radiation perhaps provides a clearer, more consistent picture of the subject. The distinguishing features of nuclear inelastic scattering relative to, e.g., neutron inelastic scattering, are ideal momentum integration and exact scaling of the DOS in absolute units. This allows for reliable comparison to data from other techniques such as Brillouin light scattering. Another strong point is ideal isotope selectivity: the DOS is measured for a single isotope with a specific low-energy nuclear transition. This allows for special "design" of an experiment to study, for instance, the dynamics of only center-of-mass motions. Recently, we have investigated the transformation of the DOS as a function of several key parameters such as temperature, cooling rate, and density. In all cases the transformation of the DOS is sufficiently well described by a transformation of the continuous medium, in particular, by changes of the macroscopic density and the sound velocity. These results suggest a collective sound-like nature of vibrational dynamics in glasses and cast doubt on microscopic models of glass dynamics. Further insight can be obtained from combined studies of glasses with nuclear inelastic and inelastic neutron scattering. Applying the two techniques, we have measured the energy dependence of the characteristic correlation length of atomic motions. The data do not reveal localization of atomic vibrations at the energy of the boson peak. Once again, the results suggest that special features of glass dynamics are related to extended motions rather than to local modes.

  10. Acoustic localization at large scales: a promising method for grey wolf monitoring.

    PubMed

    Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle

    2018-01-01

    The grey wolf (Canis lupus) is naturally recolonizing its former habitats in Europe, where it was extirpated during the previous two centuries. The management of this protected species is often controversial, and its monitoring is a challenge for conservation purposes. Moreover, this elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult, and the methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with acoustic properties similar to those of howls. This sound was broadcast at several sites, and localization estimates and their accuracy were then calculated. Finally, linear mixed-effects models were used to identify the factors that influenced the localization accuracy. Of 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, demonstrating the potential of this tool. In addition, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (standard deviation) m. After setting a threshold on the temporal error associated with the estimated coordinates, unreliable values were excluded and the mean accuracy improved to 167 ± 308 m. 
    The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied significantly among nights in each study area. Our results confirm the potential of acoustic methods for localizing wolves with high accuracy in different natural environments and at large spatial scales. Passive acoustic methods are suitable for monitoring the dynamics of grey wolf recolonization and will thus contribute to enhanced conservation and management plans.
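
    The abstract does not spell out how positions were estimated from the recordings, but localization from at least four recorders is commonly done from time differences of arrival (TDOA). The sketch below is a generic illustration of that idea, with invented coordinates and a simple grid search in place of whatever solver the authors used.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air (approximate)

# Four recorder positions and a true source position, in metres (invented).
mics = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
source = np.array([120.0, 340.0])

# Arrival times, expressed relative to the first recorder (the TDOAs).
toa = np.linalg.norm(mics - source, axis=1) / SPEED_OF_SOUND
tdoa = toa - toa[0]

# Grid search: choose the point whose predicted TDOAs best match the data.
xs = np.linspace(0.0, 500.0, 251)   # 2 m grid spacing
best, best_err = None, np.inf
for x in xs:
    for y in xs:
        t = np.linalg.norm(mics - [x, y], axis=1) / SPEED_OF_SOUND
        err = np.sum((t - t[0] - tdoa) ** 2)
        if err < best_err:
            best, best_err = np.array([x, y]), err

print(best)   # the true source lies on the grid, so this recovers it
```

    In the field, errors in the measured arrival times (the "temporal error" the authors threshold on) propagate into position error, which is why accuracy improved after unreliable detections were excluded.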

  11. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)

    PubMed Central

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-01-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible through transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention has been given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets with respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. The largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology, including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in their habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574

  13. Wisconsinan and early Holocene glacial dynamics of Cumberland Peninsula, Baffin Island, Arctic Canada

    NASA Astrophysics Data System (ADS)

    Margreth, Annina; Gosse, John C.; Dyke, Arthur S.

    2017-07-01

    Three glacier systems (an ice sheet with a large marine-based ice stream, an ice cap, and an alpine glacier complex) coalesced on Cumberland Peninsula during the Late Wisconsinan. We combine high-resolution mapping of glacial deposits with new cosmogenic nuclide and radiocarbon age determinations to constrain the history and dynamics of each system. During the Middle Wisconsinan (Oxygen Isotope Stage 3, OIS-3) the Cumberland Sound Ice Stream of the Laurentide Ice Sheet retreated well back into Cumberland Sound and the alpine ice retreated at least to fiord-head positions, a more significant recession than previously documented. The advance to maximal OIS-2 ice positions beyond the mouth of Cumberland Sound and beyond most stretches of coastline remains undated. Partial preservation of an over-ridden OIS-3 glaciomarine delta in a fiord-side position suggests that even fiord ice was weakly erosive in places. Moraines formed during deglaciation represent stillstands and re-advances during three major cold events: H-1 (14.6 ka), Younger Dryas (12.9-11.7 ka), and Cockburn (9.5 ka). Distinctly different responses of the three glacial systems are evident, with the alpine system responding most sensitively to Bølling-Allerød warming, whereas the larger systems retreated mainly during Pre-Boreal warming. While the larger ice masses were mainly influenced by internal dynamics, the smaller alpine glacier system responded sensitively to local climate effects. Asymmetrical recession of the alpine glacier complex indicates topoclimatic control on deglaciation and perhaps migration of the accumulation area toward the moisture source.

  14. Hybrid local piezoelectric and conductive functions for high performance airborne sound absorption

    NASA Astrophysics Data System (ADS)

    Rahimabady, Mojtaba; Statharas, Eleftherios Christos; Yao, Kui; Sharifzadeh Mirshekarloo, Meysam; Chen, Shuting; Tay, Francis Eng Hock

    2017-12-01

    A concept of hybrid local piezoelectric and electrical conductive functions for improving airborne sound absorption is proposed and demonstrated in composite foam made of porous polar polyvinylidene fluoride (PVDF) mixed with conductive single-walled carbon nanotubes (SWCNT). According to our hybrid material function design, the local piezoelectric effect in the polar PVDF matrix converts sound energy to electrical energy, and the electrical resistive loss of the SWCNT then dissipates it as thermal energy, in addition to the other known sound absorption mechanisms in a porous material. It is found that the overall energy conversion, and hence the sound absorption performance, is maximized when the concentration of SWCNT is around the conductivity percolation threshold. For the optimal composition of PVDF/5 wt. % SWCNT, a sound reduction coefficient larger than 0.58 has been obtained, with a sound absorption coefficient above 50% at 600 Hz, demonstrating the value of these materials for passive noise mitigation even at low frequencies.

  15. An Alexandrium Spp. Cyst Record from Sequim Bay, Washington State, USA, and its Relation to Past Climate Variability.

    PubMed

    Feifel, Kirsten M; Moore, Stephanie K; Horner, Rita A

    2012-06-01

    Since the 1970s, Puget Sound, Washington State, USA, has experienced an increase in detections of paralytic shellfish toxins (PSTs) in shellfish due to blooms of the harmful dinoflagellate Alexandrium. Natural patterns of climate variability, such as the Pacific Decadal Oscillation (PDO), and changes in local environmental factors, such as sea surface temperature (SST) and air temperature, have been linked to the observed increase in PSTs. However, the lack of observations of PSTs in shellfish prior to the 1950s has inhibited statistical assessments of longer-term trends in climate and environmental conditions on Alexandrium blooms. After a bloom, Alexandrium cells can enter a dormant cyst stage, which settles on the seafloor and then becomes entrained into the sedimentary record. In this study, we created a record of Alexandrium spp. cysts from a sediment core obtained from Sequim Bay, Puget Sound. Cyst abundances ranged from 0 to 400 cysts·cm⁻³ and were detected down-core to a depth of 100 cm, indicating that Alexandrium has been present in Sequim Bay since at least the late 1800s. The cyst record allowed us to statistically examine relationships with available environmental parameters over the past century. Local air temperature and sea surface temperature were positively and significantly correlated with cyst abundances from the late 1800s to 2005; no significant relationship was found between the PDO and cyst abundances. This finding suggests that local environmental variations more strongly influence Alexandrium population dynamics in Puget Sound than large-scale changes.

  16. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl

    PubMed Central

    Baxter, Caitlin S.; Takahashi, Terry T.

    2013-01-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801

  17. Do top predators cue on sound production by mesopelagic prey?

    NASA Astrophysics Data System (ADS)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated whether any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions, such as: Which DSL organisms have evolved to create sound, how and why, and under what circumstances do they use it? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities, and abundances, and hence be used in stock assessment or predictive modeling?

  18. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex.

    PubMed

    Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C

    2006-03-20

    In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.

  19. Bubble dynamics in a standing sound field: the bubble habitat.

    PubMed

    Koch, P; Kurz, T; Parlitz, U; Lauterborn, W

    2011-11-01

    Bubble dynamics is investigated numerically with special emphasis on the static pressure and the positional stability of the bubble in a standing sound field. The bubble habitat, made up of non-dissolving, positionally and spherically stable bubbles, is calculated in the parameter space of the bubble radius at rest and the sound pressure amplitude for different sound field frequencies, static pressures, and gas concentrations of the liquid. The bubble habitat grows with static pressure and shrinks with sound field frequency. The range of diffusionally stable bubble oscillations, found at positive slopes of the habitat-diffusion border, can be increased substantially with static pressure.

  20. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  1. Spherical loudspeaker array for local active control of sound.

    PubMed

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around a listener's head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources capable of generating sound fields of high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.

  2. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
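
    One common way to compute such speaker gain factors is constant-power pairwise panning; the sketch below is a generic illustration of that scheme under invented speaker angles, not necessarily the algorithm developed in this work.

```python
import numpy as np

def pan_gains(azimuth_deg, left_deg, right_deg):
    """Constant-power gains for a source between two adjacent speakers."""
    frac = (azimuth_deg - left_deg) / (right_deg - left_deg)  # 0..1
    theta = frac * (np.pi / 2.0)
    return np.cos(theta), np.sin(theta)   # (left gain, right gain)

# A source 15 degrees right of center, between speakers at +/-30 degrees.
gL, gR = pan_gains(15.0, -30.0, 30.0)

# The squared gains always sum to 1, keeping perceived loudness steady
# as the phantom image moves between the two speakers.
print(round(gL * gL + gR * gR, 6))
```

    A full multichannel system repeats this between each adjacent speaker pair; handling a listener who is not equidistant from the speakers additionally requires per-channel level and delay compensation.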

  3. An integrated system for dynamic control of auditory perspective in a multichannel sound field

    NASA Astrophysics Data System (ADS)

    Corey, Jason Andrew

    An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.

  4. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds.

    PubMed

    Williamson, Ross S; Ahrens, Misha B; Linden, Jennifer F; Sahani, Maneesh

    2016-07-20

    Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds (a modulation of "input-specific gain" rather than "output gain") may be a widespread motif in sensory processing.

  5. Underwater auditory localization by a swimming harbor seal (Phoca vitulina).

    PubMed

    Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido

    2006-09-01

    The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half-circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound, the seal localized the sound sources with a mean deviation of 2.8 degrees, and in trials with the single sound, with a mean deviation of 4.5 degrees. In a second experiment, minimum audible angles of the stationary animal were found to be 9.8 degrees in front of and 9.7 degrees behind the seal's head.

  6. Boson peak and Ioffe-Regel criterion in amorphous siliconlike materials: The effect of bond directionality.

    PubMed

    Beltukov, Y M; Fusco, C; Parshin, D A; Tanguy, A

    2016-02-01

    The vibrational properties of model amorphous materials are studied by combining complete analysis of the vibration modes, the dynamical structure factor, and the energy diffusivity with exact diagonalization of the dynamical matrix and the kernel polynomial method, which allows a study of very large system sizes. Different materials are studied that differ only by the bending rigidity of the interactions in a Stillinger-Weber model used to describe amorphous silicon. The local bending rigidity can thus be used as a control parameter to tune the sound velocity together with local bond directionality. It is shown that for all the systems studied, the upper limit of the boson peak corresponds to the Ioffe-Regel criterion for transverse waves, as well as to a minimum of the diffusivity. The boson peak is followed by an increase in diffusivity supported by longitudinal phonons. The Ioffe-Regel criterion for transverse waves corresponds to a common characteristic mean free path of 5-7 Å (slightly larger for longitudinal phonons), while the fine structure of the vibrational density of states is shown to be sensitive to the local bending rigidity.

  7. Nonreciprocal acoustics and dynamics in the in-plane oscillations of a geometrically nonlinear lattice.

    PubMed

    Zhang, Zhen; Koroleva, I; Manevitch, L I; Bergman, L A; Vakakis, A F

    2016-09-01

    We study the dynamics and acoustics of a nonlinear lattice with fixed boundary conditions composed of a finite number of particles coupled by linear springs, undergoing in-plane oscillations. The source of the strong nonlinearity of this lattice is the geometric effect generated by the in-plane stretching of the coupling linear springs. It has been shown that in the limit of low energy the lattice gives rise to a strongly nonlinear acoustic vacuum, which is a medium with zero speed of sound as defined in classical acoustics. The acoustic vacuum possesses strongly nonlocal coupling effects and an orthogonal set of nonlinear standing waves [or nonlinear normal modes (NNMs)] with mode shapes identical to those of the corresponding linear lattice; in contrast to the linear case, however, all NNMs except the one with the highest wavelength are unstable. In addition, the lattice supports two types of waves, namely, nearly linear sound waves (termed "L waves") corresponding to predominantly axial oscillations of the particles and strongly nonlinear localized propagating pulses (termed "NL pulses") corresponding to predominantly transverse oscillating wave packets of the particles with localized envelopes. We show the existence of nonlinear nonreciprocity phenomena in the dynamics and acoustics of the lattice. Two opposite cases are examined in the limit of low energy. The first gives rise to nonreciprocal dynamics and corresponds to collective, spatially extended transverse loading of the lattice leading to the excitation of individual, predominantly transverse NNMs, whereas the second case gives rise to nonreciprocal acoustics by considering the response of the lattice to spatially localized, transverse impulse or displacement excitations. We demonstrate intense and recurring energy exchanges between a directly excited NNM and other NNMs with higher wave numbers, so that nonreciprocal energy exchanges from small-to-large wave numbers are established.
Moreover, we show the existence of nonreciprocal wave interaction phenomena in the form of irreversible targeted energy transfers from L waves to NL pulses during collisions of these two types of waves. Additional nonreciprocal acoustics are found in the form of complex "cascading" processes, as well as nonreciprocal interactions between L waves and stationary discrete breathers. The computational studies confirm the theoretically predicted transition of the lattice dynamics to a low-energy state of nonlinear acoustic vacuum with strong nonlocality.

  8. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem: an experiment was conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match was made 75% of the time during the experiment.
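The adaptive algorithm itself is not spelled out in the abstract. As a minimal sketch of the underlying ITD idea it builds on, the cross-correlation estimator below (the signal names, 48 kHz rate, and 10-sample delay are all hypothetical test values) recovers the inter-ear lag:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (s) as the lag that
    maximizes the cross-correlation of the two ear signals.
    Positive values mean the left-ear signal leads."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-(len(left) - 1), len(right))
    return lags[np.argmax(corr)] / fs

# Synthetic check: a noise burst reaching the right ear 10 samples late.
fs = 48_000
rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
delay = 10
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
itd = estimate_itd(left, right, fs)  # 10 / 48000 s ≈ 208 µs
```

An adaptive system like the one described would then learn the mapping from such ITD estimates to azimuth via self-motion and visual feedback, rather than assuming a fixed head geometry.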

  9. Characteristic sounds facilitate visual search

    PubMed Central

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2009-01-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  10. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants.

    PubMed

    Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H

    2016-08-01

    To assess the role of interaural time differences and interaural level differences in (a) sound-source localization and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.

  11. Modeling the utility of binaural cues for underwater sound localization.

    PubMed

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
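The compression of ITDs underwater comes from the roughly 4.4-fold faster speed of sound in water (about 1500 m/s vs. about 343 m/s in air). A simple plane-wave model makes the scaling concrete; the 0.2 m receiver spacing below is an illustrative value, not the paper's hydrophone geometry:

```python
import math

def itd_free_field(azimuth_deg, spacing_m, c_m_s):
    """ITD (s) between two point receivers spaced `spacing_m` apart,
    for a plane wave arriving from `azimuth_deg` off the midline."""
    return spacing_m * math.sin(math.radians(azimuth_deg)) / c_m_s

# The same geometry yields ~4.4x smaller ITDs underwater than in air.
itd_air = itd_free_field(90, 0.2, 343.0)     # ≈ 583 µs
itd_water = itd_free_field(90, 0.2, 1500.0)  # ≈ 133 µs
```

The shrunken time cues are one reason the model's angular resolution underwater was limited, and why mechanisms such as binaural mechanical coupling are invoked in the abstract.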

  12. Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal

    PubMed Central

    Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.

    2015-01-01

    Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037

  13. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    PubMed

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes on the order of the size of an atom and, based on their neuronal anatomy, to locate acoustic stimuli to within 13°. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.

  14. Time course of dynamic range adaptation in the auditory nerve

    PubMed Central

    Wang, Grace I.; Dean, Isabel; Delgutte, Bertrand

    2012-01-01

    Auditory adaptation to sound-level statistics occurs as early as in the auditory nerve (AN), the first stage of neural auditory processing. In addition to firing rate adaptation characterized by a rate decrement dependent on previous spike activity, AN fibers show dynamic range adaptation, which is characterized by a shift of the rate-level function or dynamic range toward the most frequently occurring levels in a dynamic stimulus, thereby improving the precision of coding of the most common sound levels (Wen B, Wang GI, Dean I, Delgutte B. J Neurosci 29: 13797–13808, 2009). We investigated the time course of dynamic range adaptation by recording from AN fibers with a stimulus in which the sound levels periodically switch from one nonuniform level distribution to another (Dean I, Robinson BL, Harper NS, McAlpine D. J Neurosci 28: 6430–6438, 2008). Dynamic range adaptation occurred rapidly, but its exact time course was difficult to determine directly from the data because of the concomitant firing rate adaptation. To characterize the time course of dynamic range adaptation without the confound of firing rate adaptation, we developed a phenomenological “dual adaptation” model that accounts for both forms of AN adaptation. When fitted to the data, the model predicts that dynamic range adaptation occurs as rapidly as firing rate adaptation, over 100–400 ms, and the time constants of the two forms of adaptation are correlated. These findings suggest that adaptive processing in the auditory periphery in response to changes in mean sound level occurs rapidly enough to have significant impact on the coding of natural sounds. PMID:22457465
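The published dual-adaptation model is fitted to spike data; as an illustrative stand-in for its dynamic range component only, a first-order relaxation of the rate-level midpoint toward the prevailing sound level (the time constant, step size, and level values below are hypothetical) shows the 100-400 ms timescale at work:

```python
import numpy as np

def adapt_midpoint(levels_db, tau_s, dt_s, x0_db):
    """Midpoint of the rate-level function relaxing exponentially toward
    the current sound level (a first-order caricature of dynamic range
    adaptation, not the paper's fitted dual-adaptation model)."""
    x, out = x0_db, []
    for level in levels_db:
        x += (dt_s / tau_s) * (level - x)
        out.append(x)
    return np.array(out)

# Levels switch from a 40 dB to a 70 dB regime after 1 s; with
# tau = 0.2 s most of the shift completes within a few hundred ms.
dt = 0.01
levels = np.concatenate([np.full(100, 40.0), np.full(100, 70.0)])
track = adapt_midpoint(levels, tau_s=0.2, dt_s=dt, x0_db=40.0)
```

In the paper's model a second, faster process for firing rate adaptation runs alongside this shift, which is why the two had to be disentangled by fitting rather than read directly off the data.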

  15. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).

  16. What is that mysterious booming sound?

    USGS Publications Warehouse

    Hill, David P.

    2011-01-01

    The residents of coastal North Carolina are occasionally treated to sequences of booming sounds of unknown origin. The sounds are often energetic enough to rattle windows and doors. A recent sequence occurred in early January 2011 during clear weather with no evidence of local thunderstorms. Queries by a local reporter (Colin Hackman of the NBC affiliate WETC in Wilmington, North Carolina, personal communication 2011) seemed to eliminate common anthropogenic sources such as sonic booms or quarry blasts. So the commonly asked question, “What's making these booming sounds?” remained (and remains) unanswered.

  17. Monaural Sound Localization Based on Structure-Induced Acoustic Resonance

    PubMed Central

    Kim, Keonwook; Kim, Youngwoong

    2015-01-01

    A physical structure such as a cylindrical pipe controls the propagated sound spectrum in a predictable way that can be used to localize the sound source. This paper designs a monaural sound localization system based on multiple pyramidal horns around a single microphone. The acoustic resonance within the horn provides a periodicity in the spectral domain known as the fundamental frequency which is inversely proportional to the radial horn length. Once the system accurately estimates the fundamental frequency, the horn length and corresponding angle can be derived by the relationship. The modified Cepstrum algorithm is employed to evaluate the fundamental frequency. In an anechoic chamber, localization experiments over azimuthal configuration show that up to 61% of the proper signal is recognized correctly with 30% misfire. With a speculated detection threshold, the system estimates direction 52% in positive-to-positive and 34% in negative-to-positive decision rate, on average. PMID:25668214
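The modified Cepstrum algorithm itself is not given in the abstract. The generic idea it refines, locating the spectral periodicity as a peak in the real cepstrum, can be sketched as follows (the 400 Hz toy signal, sampling rate, and search band are hypothetical):

```python
import numpy as np

def cepstral_f0(x, fs, fmin=250.0, fmax=2000.0):
    """Estimate the spectral periodicity (fundamental, Hz) of x from the
    peak of its real cepstrum, searched between fmin and fmax. A generic
    method; the paper's modified Cepstrum algorithm refines this idea."""
    log_spectrum = np.log(np.abs(np.fft.rfft(x)) + 1e-12)
    ceps = np.fft.irfft(log_spectrum)        # quefrency domain (samples)
    lo, hi = int(fs / fmax), int(fs / fmin)  # quefrency search window
    peak = lo + np.argmax(ceps[lo:hi])
    return fs / peak

# Toy signal whose spectrum is periodic at 400 Hz: an impulse train
# with a 40-sample period at fs = 16 kHz.
fs, f0 = 16_000, 400.0
n = np.arange(4000)
x = (n % int(fs / f0) == 0).astype(float)
est = cepstral_f0(x, fs)  # ≈ 400 Hz
```

In the system described, each pyramidal horn imposes a different fundamental on the received spectrum, so the estimated fundamental indexes the horn length and hence the arrival angle.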

  18. Sound localization in the alligator.

    PubMed

    Bierman, Hilary S; Carr, Catherine E

    2015-11-01

    In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustical distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  20. Seasonal and ontogenetic changes in movement patterns of sixgill sharks.

    PubMed

    Andrews, Kelly S; Williams, Greg D; Levin, Phillip S

    2010-09-08

    Understanding movement patterns is fundamental to population and conservation biology. The way an animal moves through its environment influences the dynamics of local populations and will determine how susceptible it is to natural or anthropogenic perturbations. It is of particular interest to understand the patterns of movement for species which are susceptible to human activities (e.g. fishing), or that exert a large influence on community structure, such as sharks. We monitored the patterns of movement of 34 sixgill sharks Hexanchus griseus using two large-scale acoustic arrays inside and outside Puget Sound, Washington, USA. Sixgill sharks were residents in Puget Sound for up to at least four years before making large movements out of the estuary. Within Puget Sound, sixgills inhabited sites for several weeks at a time and returned to the same sites annually. Across four years, sixgills had consistent seasonal movements in which they moved to the north from winter to spring and moved to the south from summer to fall. Just prior to leaving Puget Sound, sixgills altered their behavior and moved twice as fast among sites. Nineteen of the thirty-four sixgills were detected leaving Puget Sound for the outer coast. Three of these sharks returned to Puget Sound. For most large marine predators, we have a limited understanding of how they move through their environment, and this clouds our ability to successfully manage their populations and their communities. With detailed movement information, such as that being uncovered with acoustic monitoring, we can begin to quantify the spatial and temporal impacts of large predators within the framework of their ecosystems.

  1. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants

    PubMed Central

    Zheng, Yi; Godar, Shelly P.; Litovsky, Ruth Y.

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users. PMID:26288142

  2. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    PubMed

    Zheng, Yi; Godar, Shelly P; Litovsky, Ruth Y

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  3. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds they make. Developing a technique to localize sound sources amidst loud noise would therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for imperceptible sounds in loud noise environments. Two speakers simultaneously played generator noise and a voice 20 dB weaker than the generator noise (= 1/100 of its power) at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length consisting of 15 microphones. The direction and distance of the voice were computed, and the sound of the voice was extracted and played back as an audible sound by array signal processing.
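The paper's array processing is not detailed in the abstract; delay-and-sum beamforming over the 15-microphone, 1.05 m line array is the textbook form of the idea. In the sketch below, the 30° source bearing, sampling rate, and white-noise source are test values, not the experiment's:

```python
import numpy as np

C = 343.0  # speed of sound in air (m/s)

def delay_and_sum(signals, mic_x, fs, angle_deg):
    """Far-field delay-and-sum beamformer for a line array on the x-axis,
    using integer-sample delays (a minimal sketch of the technique)."""
    tau = mic_x * np.sin(np.radians(angle_deg)) / C       # per-mic delays (s)
    shifts = np.round((tau - tau.min()) * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    return sum(sig[s:s + n] for sig, s in zip(signals, shifts)) / len(signals)

# 15 microphones spanning 1.05 m, as in the paper; steer over candidate
# bearings and pick the one with maximum output power.
mic_x = np.linspace(0.0, 1.05, 15)
fs = 16_000
rng = np.random.default_rng(1)
src = rng.standard_normal(4000)
true_tau = mic_x * np.sin(np.radians(30.0)) / C
sigs = np.stack([np.roll(src, int(round(float(t * fs)))) for t in true_tau])
powers = {a: float(np.mean(delay_and_sum(sigs, mic_x, fs, a) ** 2))
          for a in range(-60, 61, 10)}
best = max(powers, key=powers.get)  # 30
```

Distance estimation, which the authors also report, additionally requires near-field (spherical-wavefront) steering rather than the far-field delays used here.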

  4. A Sound Therapy-Based Intervention to Expand the Auditory Dynamic Range for Loudness among Persons with Sensorineural Hearing Losses: Case Evidence Showcasing Treatment Efficacy

    PubMed Central

    Formby, Craig; Sherlock, LaGuinn P.; Hawley, Monica L.; Gold, Susan L.

    2017-01-01

    Case evidence is presented that highlights the clinical relevance and significance of a novel sound therapy-based treatment. This intervention has been shown to be efficacious in a randomized controlled trial for promoting expansion of the dynamic range for loudness and increased sound tolerance among persons with sensorineural hearing losses. Prior to treatment, these individuals were unable to use aided sound effectively because of their limited dynamic ranges. These promising treatment effects are shown in this article to be functionally significant, giving rise to improved speech understanding and enhanced hearing aid benefit and satisfaction, and, in turn, to enhanced quality of life posttreatment. These posttreatment sound therapy effects also are shown to be sustained, in whole or part, with aided environmental sound and to be dependent on specialized counseling to maximize treatment benefit. Importantly, the treatment appears to be efficacious for hearing-impaired persons with primary hyperacusis (i.e., abnormally reduced loudness discomfort levels [LDLs]) and for persons with loudness recruitment (i.e., LDLs within the typical range), which suggests the intervention should generalize across most individuals with reduced dynamic ranges owing to sensorineural hearing loss. An exception presented in this article is for a person describing the perceptual experience of pronounced loudness adaptation, which apparently rendered the sound therapy inaudible and ineffectual for this individual. Ultimately, these case examples showcase the enormous potential of a surprisingly simple sound therapy intervention, which has utility for virtually all audiologists to master and empower the adaptive plasticity of the auditory system to achieve remarkable treatment benefits for large numbers of individuals with sensorineural hearing losses. PMID:28286368

  5. Cumulative phase delay imaging for contrast-enhanced ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Demi, Libertario; van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo

    2015-11-01

    Standard dynamic contrast-enhanced ultrasound (DCE-US) imaging detects and estimates ultrasound-contrast-agent (UCA) concentration based on the amplitude of the nonlinear (harmonic) components generated during ultrasound (US) propagation through UCAs. However, harmonic component generation is not specific to UCAs, as it also occurs for US propagating through tissue. Moreover, nonlinear artifacts affect standard DCE-US imaging, causing contrast-to-tissue ratio reduction and resulting in possible misclassification of tissue and misinterpretation of UCA concentration. Furthermore, no contrast-specific modality exists for DCE-US tomography; in particular, speed-of-sound changes due to UCAs are well within those caused by different tissue types. Recently, a new marker for UCAs has been introduced: a cumulative phase delay (CPD) between the second harmonic and fundamental component is observable for US propagating through UCAs and is absent in tissue. In this paper, tomographic US images based on CPD are presented for the first time and compared to speed-of-sound US tomography. Results show the applicability of this marker for contrast-specific US imaging, with cumulative phase delay imaging (CPDI) showing superior capabilities in detecting and localizing UCAs compared to speed-of-sound US tomography. Cavities (filled with UCA) down to 1 mm in diameter were clearly detectable. Moreover, CPDI is free of the above-mentioned nonlinear artifacts. These results open important possibilities for DCE-US tomography, with potential applications to breast imaging for cancer localization.

  6. Cumulative phase delay imaging for contrast-enhanced ultrasound tomography.

    PubMed

    Demi, Libertario; van Sloun, Ruud J G; Wijkstra, Hessel; Mischi, Massimo

    2015-11-07

    Standard dynamic contrast-enhanced ultrasound (DCE-US) imaging detects and estimates ultrasound-contrast-agent (UCA) concentration based on the amplitude of the nonlinear (harmonic) components generated during ultrasound (US) propagation through UCAs. However, harmonic component generation is not specific to UCAs, as it also occurs for US propagating through tissue. Moreover, nonlinear artifacts affect standard DCE-US imaging, causing contrast-to-tissue ratio reduction and resulting in possible misclassification of tissue and misinterpretation of UCA concentration. Furthermore, no contrast-specific modality exists for DCE-US tomography; in particular, speed-of-sound changes due to UCAs are well within those caused by different tissue types. Recently, a new marker for UCAs has been introduced: a cumulative phase delay (CPD) between the second harmonic and fundamental component is observable for US propagating through UCAs and is absent in tissue. In this paper, tomographic US images based on CPD are presented for the first time and compared to speed-of-sound US tomography. Results show the applicability of this marker for contrast-specific US imaging, with cumulative phase delay imaging (CPDI) showing superior capabilities in detecting and localizing UCAs compared to speed-of-sound US tomography. Cavities (filled with UCA) down to 1 mm in diameter were clearly detectable. Moreover, CPDI is free of the above-mentioned nonlinear artifacts. These results open important possibilities for DCE-US tomography, with potential applications to breast imaging for cancer localization.
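The CPD marker compares the phase of the received second harmonic against twice that of the fundamental. A minimal sketch (a synthetic single-tone signal; the imposed 0.3 rad delay, frequencies, and amplitudes are illustrative, not a UCA propagation model) reads the two phases off DFT bins:

```python
import numpy as np

def harmonic_phases(x, fs, f0):
    """Phases (rad) of the components of x at f0 and 2*f0, read from
    the nearest DFT bins."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k1 = np.argmin(np.abs(freqs - f0))
    k2 = np.argmin(np.abs(freqs - 2.0 * f0))
    return np.angle(X[k1]), np.angle(X[k2])

# A 1 MHz fundamental plus a second harmonic carrying an extra 0.3 rad
# phase delay, mimicking the marker that appears after propagation
# through UCAs and is absent in tissue.
fs, f0, lag = 10_000_000, 1_000_000.0, 0.3
t = np.arange(10_000) / fs
x = np.cos(2 * np.pi * f0 * t) + 0.2 * np.cos(4 * np.pi * f0 * t - lag)
p1, p2 = harmonic_phases(x, fs, f0)
cpd = p2 - 2.0 * p1  # recovers -lag, i.e. about -0.3 rad
```

In the tomographic setting this phase difference accumulates along the propagation path, which is what lets the marker be mapped spatially.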

  7. Caviton dynamics in strong Langmuir turbulence

    NASA Astrophysics Data System (ADS)

    Dubois, Don; Rose, Harvey A.; Russell, David

    Recent studies based on long time computer simulations of Langmuir turbulence as described by Zakharov's model will be reviewed. These show that for strong to moderate ion sound damping the turbulent energy is dominantly in nonlinear caviton excitations which are localized in space and time. A local caviton model will be presented which accounts for the nucleation-collapse-burnout cycles of individual cavitons as well as their space-time correlations. This model is in detailed agreement with many features of the electron density fluctuation spectra in the ionosphere modified by powerful HF waves as measured by incoherent scatter radar. Recently such observations have verified a prediction of the theory that free Langmuir waves are emitted in the caviton collapse process. These observations and theoretical considerations also strongly imply that cavitons in the heated ionosphere, under certain conditions, evolve to states in which they are ordered in space and time. The sensitivity of the high frequency Langmuir field dynamics to the low frequency ion density fluctuations and the related caviton nucleation process will be discussed.

  8. Caviton dynamics in strong Langmuir turbulence

    NASA Astrophysics Data System (ADS)

    DuBois, Don; Rose, Harvey A.; Russell, David

    1990-01-01

    Recent studies based on long time computer simulations of Langmuir turbulence as described by Zakharov's model will be reviewed. These show that for strong to moderate ion sound damping the turbulent energy is dominantly in non-linear "caviton" excitations which are localized in space and time. A local caviton model will be presented which accounts for the nucleation-collapse-burnout cycles of individual cavitons as well as their space-time correlations. This model is in detailed agreement with many features of the electron density fluctuation spectra in the ionosphere modified by powerful HF waves as measured by incoherent scatter radar. Recently such observations have verified a prediction of the theory that "free" Langmuir waves are emitted in the caviton collapse process. These observations and theoretical considerations also strongly imply that cavitons in the heated ionosphere, under certain conditions, evolve to states in which they are ordered in space and time. The sensitivity of the high frequency Langmuir field dynamics to the low frequency ion density fluctuations and the related caviton nucleation process will be discussed.

  9. Glycinergic inhibition tunes coincidence detection in the auditory brainstem

    PubMed Central

    Myoga, Michael H.; Lehnert, Simon; Leibold, Christian; Felmy, Felix; Grothe, Benedikt

    2014-01-01

    Neurons in the medial superior olive (MSO) detect microsecond differences in the arrival time of sounds between the ears (interaural time differences or ITDs), a crucial binaural cue for sound localization. Synaptic inhibition has been implicated in tuning ITD sensitivity, but the cellular mechanisms underlying its influence on coincidence detection are debated. Here we determine the impact of inhibition on coincidence detection in adult Mongolian gerbil MSO brain slices by testing precise temporal integration of measured synaptic responses using conductance-clamp. We find that inhibition dynamically shifts the peak timing of excitation, depending on its relative arrival time, which in turn modulates the timing of best coincidence detection. Inhibitory control of coincidence detection timing is consistent with the diversity of ITD functions observed in vivo and is robust under physiologically relevant conditions. Our results provide strong evidence that temporal interactions between excitation and inhibition on microsecond timescales are critical for binaural processing. PMID:24804642

  10. Mechanical heterogeneity in ionic liquids

    NASA Astrophysics Data System (ADS)

    Veldhorst, Arno A.; Ribeiro, Mauro C. C.

    2018-05-01

    Molecular dynamics (MD) simulations of five ionic liquids based on 1-alkyl-3-methylimidazolium cations, [CnC1im]+, have been performed in order to calculate high-frequency elastic moduli and to evaluate heterogeneity of local elastic moduli. The MD simulations of [CnC1im][NO3], n = 2, 4, 6, and 8, assessed the effect of domain segregation when the alkyl chain length increases, and [C8C1im][PF6] assessed the effect of strength of anion-cation interaction. Dispersion curves of excitation energies of longitudinal and transverse acoustic, LA and TA, modes were obtained from time correlation functions of mass currents at different wavevectors. High-frequency sound velocity of LA modes depends on the alkyl chain length, but sound velocity for TA modes does not. High-frequency bulk and shear moduli, K∞ and G∞, depend on the alkyl chain length because of a density effect. Both K∞ and G∞ are strongly dependent on the anion. The calculation of local bulk and shear moduli was accomplished by performing bulk and shear deformations of the systems cooled to 0 K. The simulations showed a clear connection between structural and elastic modulus heterogeneities. The development of nano-heterogeneous structure with increasing length of the alkyl chain in [CnC1im][NO3] implies lower values for local bulk and shear moduli in the non-polar domains. The mean value and the standard deviations of distributions of local elastic moduli decrease when [NO3]- is replaced by the less coordinating [PF6]- anion.

  11. Structural analysis of stratocumulus convection

    NASA Technical Reports Server (NTRS)

    Siems, S. T.; Baker, M. B.; Bretherton, C. S.

    1990-01-01

    The 1 and 20 Hz data are examined from the Electra flights made on July 5, 1987. The flight legs consisted of seven horizontal turbulent legs at the inversion, midcloud, and below clouds, plus four soundings made within the same period. The Rosemont temperature sensor and the top and bottom dewpoint sensors were used to measure temperature and humidity at 1 Hz. Inversion structure and entrainment; local dynamics and large-scale forcing; convective elements; and decoupling of cloud and subcloud are discussed in relation to the results of the Electra flight.

  12. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

    The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 µs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 µs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.

  13. Matter bounce cosmology with a generalized single field: non-Gaussianity and an extended no-go theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yu-Bin; Cai, Yi-Fu; Quintin, Jerome

    We extend the matter bounce scenario to a more general theory in which the background dynamics and cosmological perturbations are generated by a k-essence scalar field with an arbitrary sound speed. When the sound speed is small, the curvature perturbation is enhanced, and the tensor-to-scalar ratio, which is excessively large in the original model, can be sufficiently suppressed to be consistent with observational bounds. Then, we study the primordial three-point correlation function generated during the matter-dominated contraction stage and find that it only depends on the sound speed parameter. Similar to the canonical case, the shape of the bispectrum is mainly dominated by a local form, though for some specific sound speed values a new shape emerges and the scaling behaviour changes. Meanwhile, a small sound speed also results in a large amplitude of non-Gaussianities, which is disfavored by current observations. As a result, it does not seem possible to suppress the tensor-to-scalar ratio without amplifying the production of non-Gaussianities beyond current observational constraints (and vice versa). This suggests an extension of the previously conjectured no-go theorem in single field nonsingular matter bounce cosmologies, which rules out a large class of models. However, the non-Gaussianity results remain as a distinguishable signature of matter bounce cosmology and have the potential to be detected by observations in the near future.

  14. A longitudinal study of the bilateral benefit in children with bilateral cochlear implants.

    PubMed

    Asp, Filip; Mäki-Torkko, Elina; Karltorp, Eva; Harder, Henrik; Hergils, Leif; Eskilsson, Gunnar; Stenfelt, Stefan

    2015-02-01

    To study the development of the bilateral benefit in children using bilateral cochlear implants by measurements of speech recognition and sound localization. Bilateral and unilateral speech recognition in quiet, in multi-source noise, and horizontal sound localization were measured on three occasions during a two-year period, without controlling for age or implant experience. Longitudinal and cross-sectional analyses were performed. Results were compared to cross-sectional data from children with normal hearing. Seventy-eight children aged 5.1-11.9 years participated, with a mean bilateral cochlear implant experience of 3.3 years and a mean age of 7.8 years at inclusion in the study. Thirty children with normal hearing aged 4.8-9.0 years provided normative data. For children with cochlear implants, bilateral and unilateral speech recognition in quiet were comparable, whereas a bilateral benefit for speech recognition in noise and sound localization was found at all three test occasions. Absolute performance was lower than in children with normal hearing. Early bilateral implantation facilitated sound localization. A bilateral benefit for speech recognition in noise and sound localization continues to exist over time for children with bilateral cochlear implants, but no relative improvement is found after three years of bilateral cochlear implant experience.

  15. Local inhibition of GABA affects precedence effect in the inferior colliculus

    PubMed Central

    Wang, Yanjun; Wang, Ningyu; Wang, Dan; Jia, Jun; Liu, Jinfeng; Xie, Yan; Wen, Xiaohui; Li, Xiaoting

    2014-01-01

    The precedence effect is a prerequisite for faithful sound localization in a complex auditory environment, and is a physiological phenomenon in which the auditory system selectively suppresses the directional information from echoes. Here we investigated how neurons in the inferior colliculus respond to the paired sounds that produce precedence-effect illusions, and whether their firing behavior can be modulated through inhibition with gamma-aminobutyric acid (GABA). We recorded extracellularly from 36 neurons in rat inferior colliculus under three conditions: no injection, injection with saline, and injection with GABA. The paired sounds that produced precedence effects were two identical 4-ms noise bursts, which were delivered contralaterally or ipsilaterally to the recording site. The normalized neural responses were measured as a function of inter-stimulus delay, and half-maximal inter-stimulus delays were acquired. Neuronal responses to the lagging sounds were weak when the inter-stimulus delay was short, but increased gradually as the delay was lengthened. Saline injection produced no changes in neural responses, but after local GABA application, responses to the lagging stimulus were suppressed. Application of GABA affected the normalized response to lagging sounds, independently of whether they or the paired sounds were contralateral or ipsilateral to the recording site. These observations suggest that local inhibition by GABA in the rat inferior colliculus shapes the neural responses to lagging sounds, and modulates the precedence effect. PMID:25206830
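    A half-maximal inter-stimulus delay like the one acquired above can be estimated from a measured recovery curve by linear interpolation. The sketch below is purely illustrative; the delay and response values are invented, not taken from the study.

```python
def half_maximal_delay(delays_ms, norm_resp):
    """Interpolate the inter-stimulus delay at which the normalized
    lagging response first crosses 0.5 (the half-maximal point)."""
    pts = list(zip(delays_ms, norm_resp))
    for (d0, r0), (d1, r1) in zip(pts, pts[1:]):
        if r0 < 0.5 <= r1:
            # linear interpolation between the bracketing samples
            return d0 + (0.5 - r0) * (d1 - d0) / (r1 - r0)
    return None  # curve never reaches half-maximum

# hypothetical recovery curve of the response to the lagging sound
delays = [1, 2, 4, 8, 16, 32]                # ms
resp = [0.05, 0.10, 0.30, 0.60, 0.90, 1.00]  # normalized response
print(half_maximal_delay(delays, resp))      # about 6.67 ms
```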

  16. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, recorded at close distance with lavalier microphones, with spatial correction through a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.
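    As a rough illustration of localization from inter-channel amplitude differences, the sketch below assumes idealized cardioid gain patterns for the directional microphones (an assumption, not a detail from the thesis) and grid-searches for the bearing whose predicted amplitude pattern best matches the observed one:

```python
import math

def cardioid_gain(src_deg, mic_deg):
    """Gain of an idealized cardioid microphone aimed at mic_deg
    for a source at src_deg (both in degrees)."""
    d = math.radians(src_deg - mic_deg)
    return 0.5 * (1.0 + math.cos(d))

def localize(amplitudes, mic_angles):
    """Estimate source bearing by finding the candidate angle whose
    predicted inter-channel amplitude pattern best matches the data."""
    best_angle, best_err = None, float("inf")
    for cand in range(360):
        gains = [cardioid_gain(cand, m) for m in mic_angles]
        # normalize so the source's absolute level cancels out
        scale = sum(amplitudes) / sum(gains)
        err = sum((a - g * scale) ** 2 for a, g in zip(amplitudes, gains))
        if err < best_err:
            best_angle, best_err = cand, err
    return best_angle

mics = [-45, 0, 45]        # aiming directions of three shotgun mics (assumed)
obs = [cardioid_gain(20, m) for m in mics]  # synthetic source at 20 degrees
print(localize(obs, mics))  # -> 20
```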

  17. Active localization of virtual sounds

    NASA Technical Reports Server (NTRS)

    Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.

    1991-01-01

    We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
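    The static direction and distance cues described above can be synthesized from simple models. A minimal sketch, assuming Woodworth's spherical-head approximation for the interaural time difference and the first-power (1/r) pressure law for distance; the head radius is a typical assumed value, not one from the paper:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at room temperature
HEAD_RADIUS = 0.0875     # m, a typical adult value (assumed)

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural
    time difference for a far-field source at the given azimuth."""
    th = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (th + math.sin(th))

def pressure_gain(distance_m, ref_m=1.0):
    """First-power distance law: sound pressure falls off as 1/r."""
    return ref_m / distance_m

# a source 30 degrees to the right, 2 m away
print(round(itd_seconds(30.0) * 1e6, 1), "us ITD")   # 261.1 us ITD
print(pressure_gain(2.0), "x reference pressure")    # 0.5 x reference pressure
```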

  18. Computational fluid dynamics simulation of sound propagation through a blade row.

    PubMed

    Zhao, Lei; Qiao, Weiyang; Ji, Liang

    2012-10-01

    The propagation of sound waves through a blade row is investigated numerically. A wave splitting method in a two-dimensional duct with arbitrary mean flow is presented, based on which the pressure amplitude of each wave mode can be extracted at an axial plane. The propagation of a sound wave through a flat-plate blade row has been simulated by solving the unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The transmission and reflection coefficients obtained by computational fluid dynamics (CFD) are compared with semi-analytical results. The comparison indicates that the low-order URANS scheme will cause large errors if the sound pressure level is lower than -100 dB (with the product of density, mean-flow velocity, and speed of sound as the reference pressure). The CFD code has sufficient precision when solving the interaction of a sound wave and a blade row, provided that the boundary reflections have no substantial influence. Finally, the effects of flow Mach number, blade thickness, and blade turning angle on sound propagation are studied.

  19. Numerical study of dynamic glottis and tidal breathing on respiratory sounds in a human upper airway model

    PubMed Central

    Wang, Zhaoxuan; Talaat, Khaled; Glide-Hurst, Carri; Dong, Haibo

    2018-01-01

    Background: Human snores are caused by vibrating anatomical structures in the upper airway. The glottis is a highly variable structure and a critical organ regulating inhaled flows. However, the effects of glottis motion on airflow and breathing sound are not well understood, and static glottises have been implemented in most previous in silico studies. The objective of this study is to develop a computational acoustic model of human airways with a dynamic glottis and quantify the effects of glottis motion and tidal breathing on airflow and sound generation. Methods: Large eddy simulation and Ffowcs Williams-Hawkings (FW-H) models were adopted to compute airflows and respiratory sounds in an image-based mouth-lung model. User-defined functions were developed that governed the glottis kinematics. Varying breathing scenarios (static vs. dynamic glottis; constant vs. sinusoidal inhalations) were simulated to understand the effects of glottis motion and inhalation pattern on sound generation. Pressure distributions were measured in airway casts with different glottal openings for model validation purposes. Results: Significant flow fluctuations were predicted in the upper airways at peak inhalation rates and during glottal constriction. The inhalation speed through the glottis was the predominant factor in sound generation, while transient effects were less important. For all frequencies considered (20–2500 Hz), the static glottis substantially underestimated the intensity of the generated sounds, most pronouncedly in the range of 100–500 Hz. Adopting an equivalent steady flow rather than tidal breathing further underestimated the sound intensity. An increase of 25 dB on average was observed for the life-like condition (sine-dynamic) compared to the idealized condition (constant-rigid) across the broadband frequencies, with the largest increase, approximately 40 dB, at frequencies around 250 Hz. Conclusion: A severely narrowing glottis during inhalation, as well as flow fluctuations in the downstream trachea, can generate audible sound levels. PMID:29101633

  20. Numerical study of dynamic glottis and tidal breathing on respiratory sounds in a human upper airway model.

    PubMed

    Xi, Jinxiang; Wang, Zhaoxuan; Talaat, Khaled; Glide-Hurst, Carri; Dong, Haibo

    2018-05-01

    Human snores are caused by vibrating anatomical structures in the upper airway. The glottis is a highly variable structure and a critical organ regulating inhaled flows. However, the effects of glottis motion on airflow and breathing sound are not well understood, and static glottises have been implemented in most previous in silico studies. The objective of this study is to develop a computational acoustic model of human airways with a dynamic glottis and quantify the effects of glottis motion and tidal breathing on airflow and sound generation. Large eddy simulation and Ffowcs Williams-Hawkings (FW-H) models were adopted to compute airflows and respiratory sounds in an image-based mouth-lung model. User-defined functions were developed that governed the glottis kinematics. Varying breathing scenarios (static vs. dynamic glottis; constant vs. sinusoidal inhalations) were simulated to understand the effects of glottis motion and inhalation pattern on sound generation. Pressure distributions were measured in airway casts with different glottal openings for model validation purposes. Significant flow fluctuations were predicted in the upper airways at peak inhalation rates and during glottal constriction. The inhalation speed through the glottis was the predominant factor in sound generation, while transient effects were less important. For all frequencies considered (20-2500 Hz), the static glottis substantially underestimated the intensity of the generated sounds, most pronouncedly in the range of 100-500 Hz. Adopting an equivalent steady flow rather than tidal breathing further underestimated the sound intensity. An increase of 25 dB on average was observed for the life-like condition (sine-dynamic) compared to the idealized condition (constant-rigid) across the broadband frequencies, with the largest increase, approximately 40 dB, at frequencies around 250 Hz. These results show that a severely narrowing glottis during inhalation, as well as flow fluctuations in the downstream trachea, can generate audible sound levels.

  1. Hearing in alpacas (Vicugna pacos): audiogram, localization acuity, and use of binaural locus cues.

    PubMed

    Heffner, Rickye S; Koay, Gimseong; Heffner, Henry E

    2014-02-01

    Behavioral audiograms and sound localization abilities were determined for three alpacas (Vicugna pacos). Their hearing at a level of 60 dB sound pressure level (SPL) (re 20 μPa) extended from 40 Hz to 32.8 kHz, a range of 9.7 octaves. They were most sensitive at 8 kHz, with an average threshold of -0.5 dB SPL. The minimum audible angle around the midline for 100-ms broadband noise was 23°, indicating relatively poor localization acuity and potentially supporting the finding that animals with broad areas of best vision have poorer sound localization acuity. The alpacas were able to localize low-frequency pure tones, indicating that they can use the binaural phase cue, but they were unable to localize pure tones above the frequency of phase ambiguity, thus indicating complete inability to use the binaural intensity-difference cue. In contrast, the alpacas relied on their high-frequency hearing for pinna cues; they could discriminate front-back sound sources using 3-kHz high-pass noise, but not 3-kHz low-pass noise. These results are compared to those of other hoofed mammals and to mammals more generally.
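    The "frequency of phase ambiguity" mentioned above is reached where half a wavelength becomes shorter than the maximum interaural travel-path difference, so the interaural phase no longer maps to a unique direction. A minimal sketch; the functional interaural distance used is an assumed illustrative value, not one measured in the study:

```python
SPEED_OF_SOUND = 343.0  # m/s

def ambiguity_frequency_hz(functional_ear_distance_m):
    """The binaural phase cue becomes ambiguous once half a wavelength
    is shorter than the maximum interaural travel-path difference."""
    return SPEED_OF_SOUND / (2.0 * functional_ear_distance_m)

# hypothetical functional interaural distance for a large-headed mammal
print(round(ambiguity_frequency_hz(0.25)))  # -> 686 (Hz)
```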

  2. The attenuation of sound by turbulence in internal flows.

    PubMed

    Weng, Chenyang; Boij, Susann; Hanifi, Ardeshir

    2013-06-01

    The attenuation of sound waves due to interaction with low Mach number turbulent boundary layers in internal flows (channel or pipe flow) is examined. Dynamic equations for the turbulent Reynolds stress on the sound wave are derived, and their analytical solution provides a frequency-dependent eddy viscosity model. This model is used to predict the attenuation of sound propagating in fully developed turbulent pipe flow. The predictions are shown to compare well with the experimental data. The proposed dynamic equation shows that the turbulence behaves like a viscoelastic fluid in the interaction process, and that the ratio of the turbulent relaxation time near the wall to the sound wave period is the parameter that controls the characteristics of the attenuation induced by the turbulent flow.

  3. Sound-induced Interfacial Dynamics in a Microfluidic Two-phase Flow

    NASA Astrophysics Data System (ADS)

    Mak, Sze Yi; Shum, Ho Cheung

    2014-11-01

    Retrieving sound waves by fluidic means is challenging due to the difficulty of visualizing the very minute sound-induced fluid motion. This work studies the interfacial response of multiphase systems to fluctuations in the flow. We demonstrate a direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interface shows a passive response to sound of different frequencies with sufficiently precise time resolution, enabling the recording of musical notes and even subsequent reconstruction with high fidelity. This suggests that sensing and transmitting vibrations as tiny as those induced by sound could be realized in low interfacial tension systems. The robust control of the interfacial dynamics could be adopted for droplet and complex-fiber generation.

  4. Statistical properties of Chinese phonemic networks

    NASA Astrophysics Data System (ADS)

    Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan

    2011-04-01

    The study of the properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled through the statistical study of phonemes in human languages and research on the interrelations between human articulatory gestures and the corresponding acoustic parameters. With all the phonemes of speech sound systems treated as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates some statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees obey a normal distribution and the weighted degrees obey a power-law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; and the phonemic networks have high robustness against targeted attacks and random errors. In addition, to investigate the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows that shorter words and syllables are more frequent and that the longer a word is, the shorter its constituent syllables are. From these structural and dynamic properties one can derive the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort, while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many aspects.
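    The degree and weighted-degree statistics described above can be sketched on a toy co-occurrence network. The word list and the edge definition (phonemes linked when they appear in the same word, weighted by how many words share them) are illustrative assumptions, not the paper's construction:

```python
from collections import Counter
from itertools import combinations

def phoneme_network(words):
    """Build a weighted co-occurrence network: phonemes are nodes, and
    an edge's weight counts the words in which both phonemes appear."""
    edges = Counter()
    for w in words:
        for a, b in combinations(sorted(set(w)), 2):
            edges[(a, b)] += 1
    return edges

def degrees(edges):
    """Plain degree (edge count) and weighted degree (summed weights)."""
    deg, wdeg = Counter(), Counter()
    for (a, b), w in edges.items():
        deg[a] += 1; deg[b] += 1
        wdeg[a] += w; wdeg[b] += w
    return deg, wdeg

# toy phoneme inventory; real studies use full texts and dictionaries
words = [("m", "a"), ("m", "i"), ("n", "a"), ("s", "a", "n")]
deg, wdeg = degrees(phoneme_network(words))
print(deg["a"], wdeg["a"])  # -> 3 4
```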

  5. Severe Weather Forecast Decision Aid

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Wheeler, Mark M.; Short, David A.

    2005-01-01

    This report presents a 15-year climatological study of severe weather events and related severe weather atmospheric parameters. Data sources included local forecast rules, archived sounding data, Cloud-to-Ground Lightning Surveillance System (CGLSS) data, surface and upper air maps, and two severe weather event databases covering east-central Florida. The local forecast rules were used to set threat assessment thresholds for stability parameters that were derived from the sounding data. The severe weather events databases were used to identify days with reported severe weather and the CGLSS data was used to differentiate between lightning and non-lightning days. These data sets provided the foundation for analyzing the stability parameters and synoptic patterns that were used to develop an objective tool to aid in forecasting severe weather events. The period of record for the analysis was May - September, 1989 - 2003. The results indicate that there are certain synoptic patterns more prevalent on days with severe weather and some of the stability parameters are better predictors of severe weather days based on locally tuned threat values. The results also revealed the stability parameters that did not display any skill related to severe weather days. An interactive web-based Severe Weather Decision Aid was developed to assist the duty forecaster by providing a level of objective guidance based on the analysis of the stability parameters, CGLSS data, and synoptic-scale dynamics. The tool will be tested and evaluated during the 2005 warm season.

  6. An Overview of the Major Phenomena of the Localization of Sound Sources by Normal-Hearing, Hearing-Impaired, and Aided Listeners

    PubMed Central

    2014-01-01

    Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094

  7. Mitigation of Sri Lanka Island Effects in Colombo Sounding Data during DYNAMO

    NASA Astrophysics Data System (ADS)

    Ciesielski, P. E.; Johnson, R. H.; Yoneyama, K.

    2013-12-01

    During the Dynamics of the MJO (DYNAMO) field campaign, upper-air soundings were launched at Colombo, Sri Lanka as part of the enhanced northern sounding array (NSA) of the experiment. The Colombo soundings were affected at low levels by diurnal heating of this large island and by flow blocking due to elevated terrain east of the Colombo site. Because of the large spacing between sounding sites, these small-scale effects are aliased onto the larger scale, impacting analyses and atmospheric budgets over the DYNAMO NSA. To mitigate these local island effects on the large-scale budgets, a procedure was designed that uses ECMWF-analyzed fields in the vicinity of Sri Lanka to estimate open-ocean conditions (i.e., as if the island were not present). These 'unperturbed' ECMWF fields at low levels are then merged with the observed Colombo soundings. This procedure effectively mutes the blocking effects and the large diurnal cycle observed in the low-level Colombo fields. In westerly flow regimes, the adjusted Colombo winds increase the low-level westerlies by 2-3 m/s, with a similar increase of the low-level easterlies in easterly flow regimes. In general, over the NSA the adjusted Colombo winds result in more low-level divergence (convergence), more mid-level subsidence (rising motion), and reduced (increased) rainfall during the westerly (easterly) wind regimes. In comparison to independent TRMM rainfall estimates, both the mean budget-derived rainfall and its temporal correlation are improved by using the adjusted Colombo soundings. In addition, use of the 'unperturbed' fields results in a more realistic moisture budget analysis, both in its diurnal cycle and during the build-up phase of the November MJO, when a gradual deepening of apparent drying was observed. Overall, use of the adjusted Colombo soundings appears to have a beneficial impact on the NSA analyses and budgets.

  8. Developmental Changes in Locating Voice and Sound in Space

    PubMed Central

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment, the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless, even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult, and it was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (present only in the 7-month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  9. An integrative time-varying frequency detection and channel sounding method for dynamic plasma sheath

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yao, Bo; Zhao, Lei; Liu, Xiaotong; Yang, Min; Liu, Yanming

    2018-01-01

    The plasma sheath surrounding a hypersonic vehicle is a dynamic and time-varying medium, and it is almost impossible to calculate its time-varying physical parameters directly. In-flight detection of the degree of time variation is important for understanding the dynamic nature of the physical parameters and their effect on re-entry communication. In this paper, a time-varying frequency detection and channel sounding method based on a constant envelope zero autocorrelation (CAZAC) sequence is proposed to detect the time-varying property of the plasma sheath electron density and the wireless channel characteristics. The proposed method utilizes the CAZAC sequence, which has excellent autocorrelation and spreading-gain characteristics, to realize dynamic time-variation detection and channel sounding at low signal-to-noise ratio in the plasma sheath environment. Theoretical simulation under a typical time-varying radio channel shows that the proposed method is capable of detecting time-variation frequencies up to 200 kHz and can trace the channel amplitude and phase in the time domain well at -10 dB. Experimental results obtained in an RF modulation discharge plasma device verified the time-variation detection ability in a practical dynamic plasma sheath. Meanwhile, nonlinear effects of the dynamic plasma sheath on the communication signal were observed through the channel sounding results.
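    Although the paper does not give its sequence construction, a standard CAZAC family is the Zadoff-Chu sequence. The sketch below generates one and verifies the two defining properties, constant envelope and zero cyclic autocorrelation at nonzero lags; the root and length are arbitrary illustrative choices:

```python
import cmath
import math

def zadoff_chu(u, N):
    """Length-N Zadoff-Chu sequence with root u (N odd, gcd(u, N) = 1):
    every sample has unit magnitude, and the cyclic autocorrelation
    vanishes at all nonzero lags."""
    return [cmath.exp(-1j * math.pi * u * n * (n + 1) / N) for n in range(N)]

def cyclic_autocorr(x, lag):
    """Normalized cyclic autocorrelation of x at the given lag."""
    N = len(x)
    return sum(x[n] * x[(n + lag) % N].conjugate() for n in range(N)) / N

seq = zadoff_chu(5, 63)
print(all(abs(abs(s) - 1.0) < 1e-9 for s in seq))                      # constant envelope
print(all(abs(cyclic_autocorr(seq, k)) < 1e-9 for k in range(1, 63)))  # zero off-peak
```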

  10. The articulatory characteristics of the tongue in anterior openbite: observation by use of dynamic palatography.

    PubMed

    Suzuki, N; Sakuma, T; Michi, K; Ueno, T

    1981-01-01

    The tongue movements during the production of Japanese speech sounds in five patients with anterior openbite associated with 1-5 mm of overjet were investigated using dynamic palatography and cinematography. The dynamic palatograph is an electric device capable of recording constantly changing palatolingual contact as a function of time by means of a thin plastic artificial palate equipped with 63 electrodes. The following articulatory characteristics were observed during the utterance of the Japanese sounds /s/, /ʃ/, /t/, /d/, /n/, /r/, /ts/, /tʃ/, /dz/, /dʒ/: (1) the area of maximal palatolingual contact was smaller than normal; (2) forward positioning of the tongue was confirmed in all cases; (3) interruption of the breath stream was made with the dorsal surface of the tongue and the maxillary anterior teeth; (4) the sounds /s/, /ʃ/, /dz/, /dʒ/ were perceived as distorted, resembling the English /θ/.

  11. Anomalous dynamics triggered by a non-convex equation of state in relativistic flows

    NASA Astrophysics Data System (ADS)

    Ibáñez, J. M.; Marquina, A.; Serna, S.; Aloy, M. A.

    2018-05-01

    The non-monotonicity of the local speed of sound in dense matter at baryon number densities much higher than the nuclear saturation density (n0 ≈ 0.16 fm-3) suggests the possible existence of a non-convex thermodynamics which will lead to a non-convex dynamics. Here, we explore the rich and complex dynamics that an equation of state (EoS) with non-convex regions in the pressure-density plane may develop as a result of genuinely relativistic effects, without a classical counterpart. To this end, we have introduced a phenomenological EoS, the parameters of which can be restricted owing to causality and thermodynamic stability constraints. This EoS can be regarded as a toy model with which we may mimic realistic (and far more complex) EoSs of practical use in the realm of relativistic hydrodynamics.

  12. The Temporal Signature of Memories: Identification of a General Mechanism for Dynamic Memory Replay in Humans

    PubMed Central

    Michelmann, Sebastian; Bowman, Howard; Hanslmayr, Simon

    2016-01-01

    Reinstatement of dynamic memories requires the replay of neural patterns that unfold over time in a similar manner as during perception. However, little is known about the mechanisms that guide such a temporally structured replay in humans, because previous studies used either unsuitable methods or paradigms to address this question. Here, we overcome these limitations by developing a new analysis method to detect the replay of temporal patterns in a paradigm that requires participants to mentally replay short sound or video clips. We show that memory reinstatement is accompanied by a decrease of low-frequency (8 Hz) power, which carries a temporal phase signature of the replayed stimulus. These replay effects were evident in the visual as well as in the auditory domain and were localized to sensory-specific regions. These results suggest low-frequency phase to be a domain-general mechanism that orchestrates dynamic memory replay in humans. PMID:27494601

  13. Approaches for evaluating the effects of bivalve filter feeding on nutrient dynamics in Puget Sound, Washington

    USGS Publications Warehouse

    Konrad, Christopher P.

    2014-01-01

    Marine bivalves such as clams, mussels, and oysters are important components of the food web that influence nutrient dynamics and water quality in many estuaries. The role of bivalves in nutrient dynamics and, particularly, the contribution of commercial shellfish activities, are not well understood in Puget Sound, Washington. Numerous approaches have been used in other estuaries to quantify the effects of bivalves on nutrient dynamics, ranging from simple nutrient budgeting to sophisticated numerical models that account for tidal circulation, bioenergetic fluxes through food webs, and biochemical transformations in the water column and sediment. For nutrient management in Puget Sound, it might be possible to integrate basic biophysical indicators (residence time, phytoplankton growth rates, and clearance rates of filter feeders) as a screening tool to identify places where nutrient dynamics and water quality are likely to be sensitive to shellfish density and, then, apply more sophisticated methods involving in-situ measurements and simulation models to quantify those dynamics.

  14. Revealing the mechanism of the viscous-to-elastic crossover in liquids

    DOE PAGES

    Bolmatov, Dima; Zhernenkov, Mikhail; Zav'yalov, Dmitry; ...

    2015-07-18

    In our work, we report on inelastic X-ray scattering experiments combined with molecular dynamics simulations on deeply supercritical Ar. Our results unveil the mechanism and regimes of sound propagation in liquid matter and provide compelling evidence for the adiabatic-to-isothermal longitudinal sound propagation transition. We introduce a Hamiltonian predicting low-frequency transverse sound propagation gaps, which is confirmed by experimental findings and molecular dynamics calculations. As a result, a universal link is established between the positive sound dispersion (PSD) phenomenon and the origin of transverse sound propagation, revealing the viscous-to-elastic crossover in liquids. The PSD and transverse phononic excitations evolve consistently with theoretical predictions. Both can be considered as a universal fingerprint of the dynamic response of a liquid, which is also observable in a subdomain of the supercritical phase. Furthermore, the simultaneous disappearance of both these effects at elevated temperatures is a manifestation of the Frenkel line. We expect that these findings will advance the current understanding of fluids under extreme thermodynamic conditions.

  15. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  16. How the owl tracks its prey – II

    PubMed Central

    Takahashi, Terry T.

    2010-01-01

    Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819

  17. A Spiking Neural Network Model of the Medial Superior Olive Using Spike Timing Dependent Plasticity for Sound Localization

    PubMed Central

    Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.

    2010-01-01

    Sound localization can be defined as the ability to identify the position of a sound source and is a powerful aspect of mammalian perception. For low-frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference (ITD) between the sound signals received by the left and right ears. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network is trained with the Spike Timing Dependent Plasticity learning rule on experimentally observed Head Related Transfer Function data from an adult domestic cat. The results demonstrate that the proposed SNN model performs sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, software-based simulations of the model incur significant computation times. The paper therefore also addresses a preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance. PMID:20802855
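
    The ITD cue extracted by the MSO model can be related to source azimuth with the standard far-field approximation ITD ≈ (d/c)·sin θ (a sketch only; the inter-ear distance and speed of sound below are assumed illustrative values, not parameters taken from the paper):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed)
EAR_DISTANCE = 0.05     # m, approximate inter-ear distance of a cat (assumed)

def itd_from_azimuth(azimuth_deg):
    """Far-field interaural time difference (seconds) for a given azimuth."""
    return EAR_DISTANCE / SPEED_OF_SOUND * math.sin(math.radians(azimuth_deg))

def azimuth_from_itd(itd_s):
    """Invert the far-field model to recover the azimuth in degrees."""
    return math.degrees(math.asin(itd_s * SPEED_OF_SOUND / EAR_DISTANCE))

# Round trip: an azimuth maps to an ITD of at most d/c ≈ 146 µs and back.
itd = itd_from_azimuth(30.0)
assert abs(azimuth_from_itd(itd) - 30.0) < 1e-9
```

    The small maximum ITD (on the order of 100 µs for a cat-sized head) is why the MSO needs sub-millisecond spike-timing precision, and why the paper restricts the cue to low frequencies where phase ambiguity is avoided.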

  18. Seasonal and Ontogenetic Changes in Movement Patterns of Sixgill Sharks

    PubMed Central

    Andrews, Kelly S.; Williams, Greg D.; Levin, Phillip S.

    2010-01-01

    Background Understanding movement patterns is fundamental to population and conservation biology. The way an animal moves through its environment influences the dynamics of local populations and will determine how susceptible it is to natural or anthropogenic perturbations. It is of particular interest to understand the patterns of movement for species which are susceptible to human activities (e.g. fishing), or that exert a large influence on community structure, such as sharks. Methodology/Principal Findings We monitored the patterns of movement of 34 sixgill sharks Hexanchus griseus using two large-scale acoustic arrays inside and outside Puget Sound, Washington, USA. Sixgill sharks were residents in Puget Sound for up to at least four years before making large movements out of the estuary. Within Puget Sound, sixgills inhabited sites for several weeks at a time and returned to the same sites annually. Across four years, sixgills had consistent seasonal movements in which they moved to the north from winter to spring and moved to the south from summer to fall. Just prior to leaving Puget Sound, sixgills altered their behavior and moved twice as fast among sites. Nineteen of the thirty-four sixgills were detected leaving Puget Sound for the outer coast. Three of these sharks returned to Puget Sound. Conclusions/Significance For most large marine predators, we have a limited understanding of how they move through their environment, and this clouds our ability to successfully manage their populations and their communities. With detailed movement information, such as that being uncovered with acoustic monitoring, we can begin to quantify the spatial and temporal impacts of large predators within the framework of their ecosystems. PMID:20838617

  19. PHYTOPLANKTON DYNAMICS IN A GULF OF MEXICO ESTUARY: TIME SERIES OF SIZE STRUCTURE, NUTRIENTS, VARIABLE FLUORESCENCE AND ALGAL PHOSPHATASE ACTIVITY

    EPA Science Inventory

    Relationships between phytoplankton dynamics and physiology, and environmental conditions were studied in Santa Rosa Sound, Florida, USA, at near-weekly intervals during 2001. Santa Rosa Sound is a component of the Pensacola Bay estuary in the northern Gulf of Mexico. Parameters ...

  20. Sound Links: Exploring the Social, Cultural and Educational Dynamics of Musical Communities in Australia

    ERIC Educational Resources Information Center

    Bartleet, Brydie-Leigh

    2009-01-01

    "Sound Links" examines the dynamics of community music in Australia, and the models it represents for informal music learning and teaching. This involves researching a selection of vibrant musical communities across the country, exploring their potential for complementarity and synergy with music in schools. This article focuses on the…

  1. Population diversity in Pacific herring of the Puget Sound, USA.

    PubMed

    Siple, Margaret C; Francis, Tessa B

    2016-01-01

    Demographic, functional, or habitat diversity can confer stability on populations via portfolio effects (PEs) that integrate across multiple ecological responses and buffer against environmental impacts. The prevalence of these PEs in aquatic organisms is as yet unknown, and can be difficult to quantify; however, understanding mechanisms that stabilize populations in the face of environmental change is a key concern in ecology. Here, we examine PEs in Pacific herring (Clupea pallasii) in Puget Sound (USA) using a 40-year time series of biomass data for 19 distinct spawning population units collected using two survey types. Multivariate auto-regressive state-space models show independent dynamics among spawning subpopulations, suggesting that variation in herring production is partially driven by local effects at spawning grounds or during the earliest life history stages. This independence at the subpopulation level confers a stabilizing effect on the overall Puget Sound spawning stock, with herring being as much as three times more stable in the face of environmental perturbation than a single population unit of the same size. Herring populations within Puget Sound are highly asynchronous but share a common negative growth rate and may be influenced by the Pacific Decadal Oscillation. The biocomplexity in the herring stock shown here demonstrates that preserving spatial and demographic diversity can increase the stability of this herring population and its availability as a resource for consumers.
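
    The stabilizing portfolio effect described here follows from variance averaging across asynchronous subpopulations: summing independent biomass series shrinks the coefficient of variation of the aggregate relative to a typical single unit. A minimal simulation sketch (all numbers are illustrative, not herring data):

```python
import random

random.seed(1)  # deterministic illustration

def cv(xs):
    """Coefficient of variation: standard deviation divided by the mean."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / m

n_units, n_years = 19, 40  # mirrors 19 spawning units over a 40-year series

# Independent (asynchronous) subpopulation biomass series.
units = [[random.lognormvariate(0.0, 0.5) for _ in range(n_years)]
         for _ in range(n_units)]

aggregate = [sum(u[t] for u in units) for t in range(n_years)]
mean_unit_cv = sum(cv(u) for u in units) / n_units

# The aggregate stock is markedly more stable than a typical single unit.
assert cv(aggregate) < mean_unit_cv
```

    With fully independent, equally sized units the aggregate CV falls roughly as 1/√N; real subpopulations share some environmental forcing, which is why the observed stabilization (about threefold) is smaller than the independent-unit ideal.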

  2. The CuSPED Mission: CubeSat for GNSS Sounding of the Ionosphere-Plasmasphere Electron Density

    NASA Technical Reports Server (NTRS)

    Gross, Jason N.; Keesee, Amy M.; Christian, John A.; Gu, Yu; Scime, Earl; Komjathy, Attila; Lightsey, E. Glenn; Pollock, Craig J.

    2016-01-01

    The CubeSat for GNSS Sounding of Ionosphere-Plasmasphere Electron Density (CuSPED) is a 3U CubeSat mission concept developed in response to the NASA Heliophysics program's decadal science goal of determining the dynamics and coupling of the Earth's magnetosphere, ionosphere, and atmosphere and their response to solar and terrestrial inputs. The mission was formulated through a collaboration between West Virginia University, Georgia Tech, NASA GSFC, and NASA JPL, and features a 3U CubeSat that hosts both a miniaturized space-capable Global Navigation Satellite System (GNSS) receiver for topside atmospheric sounding and a Thermal Electron Capped Hemispherical Spectrometer (TECHS) for in situ electron precipitation measurements. These two complementary measurement techniques will provide data for constraining ionosphere-magnetosphere coupling models and will also enable studies of the local plasma environment and spacecraft charging, a phenomenon known to lead to significant errors in the measurement of low-energy charged species by instruments aboard spacecraft traversing the ionosphere. This paper provides an overview of the concept, including its science motivation and implementation.

  3. Audio reproduction for personal ambient home assistance: concepts and evaluations for normal-hearing and hearing-impaired persons.

    PubMed

    Huber, Rainer; Meis, Markus; Klink, Karin; Bartsch, Christian; Bitzer, Joerg

    2014-01-01

    Within the Lower Saxony Research Network Design of Environments for Ageing (GAL), a personal activity and household assistant (PAHA), an ambient reminder system, has been developed. One of its central output modalities for interacting with the user is sound. The study presented here evaluated three different system technologies for sound reproduction using up to five loudspeakers, including the "phantom source" concept. Moreover, a technology for hearing-loss compensation for the mostly older users of the PAHA was implemented and evaluated. Evaluation experiments with 21 normal-hearing and hearing-impaired test subjects were carried out. The results show that in direct comparison of the sound presentation concepts, presentation by the single TV speaker was most preferred, whereas the phantom source concept received the highest acceptance ratings as far as the general concept is concerned. The localization accuracy of the phantom source concept was good as long as the exact listening position was known to the algorithm and speech stimuli were used. Most subjects preferred the original signals over the pre-processed, dynamically compressed signals, although the processed speech was often described as being clearer.

  4. Dynamics of unstable sound waves in a non-equilibrium medium at the nonlinear stage

    NASA Astrophysics Data System (ADS)

    Khrapov, Sergey; Khoperskov, Alexander

    2018-03-01

    A new dispersion equation is obtained for a non-equilibrium medium using an exponential relaxation model of a vibrationally excited gas. We have studied the dependence of the pump source and the heat removal on the thermodynamic parameters of the medium. The boundaries of the stability regions of sound waves in a non-equilibrium gas have been determined. The nonlinear stage of sound-wave instability development in a vibrationally excited gas has been investigated with CSPH-TVD and MUSCL numerical schemes using the parallel technologies OpenMP-CUDA. The numerical simulation results agree well with the linear perturbation dynamics at the initial stage of instability-driven sound-wave growth. At the nonlinear stage, the sound-wave amplitude reaches a maximum value, leading to the formation of a system of shock waves.

  5. How Do Honeybees Attract Nestmates Using Waggle Dances in Dark and Noisy Hives?

    PubMed Central

    Hasegawa, Yuji; Ikeno, Hidetoshi

    2011-01-01

    It is well known that honeybees share information related to food sources with nestmates using a dance language that is representative of symbolic communication among non-primates. Some honeybee species engage in visually apparent behavior, walking in a figure-eight pattern inside their dark hives. It has been suggested that sounds play an important role in this dance language, even though a variety of wing vibration sounds are produced by honeybee behaviors in hives. It has been shown that dances emit sounds primarily at about 250–300 Hz, which is in the same frequency range as honeybees' flight sounds. Thus the exact mechanism whereby honeybees attract nestmates using waggle dances in such a dark and noisy hive is as yet unclear. In this study, we used a flight simulator in which honeybees were attached to a torque meter in order to analyze the component of bees' orienting response caused only by sounds, and not by odor or by vibrations sensed by their legs. We showed using single sound localization that honeybees preferred sounds around 265 Hz. Furthermore, according to sound discrimination tests using sounds of the same frequency, honeybees preferred rhythmic sounds. Our results demonstrate that frequency and rhythmic components play a complementary role in localizing dance sounds. Dance sounds were presumably developed to share information in a dark and noisy environment. PMID:21603608

  6. Effect of Blast Injury on Auditory Localization in Military Service Members.

    PubMed

    Kubli, Lina R; Brungart, Douglas; Northern, Jerry

    Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. 
However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.

  7. New Perspectives from Satellite and Profile Observations on Tropospheric Ozone over Africa and the Adjacent Oceans: An Indian-Atlantic Ocean Link to the "Ozone Paradox"

    NASA Technical Reports Server (NTRS)

    Thompson, Anne M.; Witte, Jacquelyn C.; Diab, Roseanne D.; Thouret, Valerie; Sauvage, Bastien; Chatfield, B.; Guan, Hong

    2004-01-01

    In the past few years, tropospheric ozone observations over Africa and its adjacent oceans have been greatly enhanced by high-resolution (spatial and temporal) satellite measurements and profile data from aircraft (MOZAIC) and balloon-borne (SHADOZ) soundings. These views have demonstrated for the first time the complexity of chemical-dynamical interactions over the African continent and the Indian and Atlantic Oceans. The tropical Atlantic "ozone paradox" refers to the observation that during the season of maximum biomass burning in west Africa north of the Intertropical Convergence Zone (ITCZ), the highest tropospheric ozone total column occurs south of the ITCZ over the tropical Atlantic. The longitudinal view of tropospheric ozone in the southern tropics from SHADOZ (Southern Hemisphere Additional Ozonesondes) soundings shows the persistence of a "zonal wave-one" pattern that reinforces the "ozone paradox". These ozone features interact with dynamics over southern and northern Africa, where anthropogenic sources include the industrial regions of the South African Highveld and Mideastern-Mediterranean influences, respectively. Our newest studies with satellites and soundings show that up to half the ozone pollution over the Atlantic in the January-March "paradox" period may originate from south Asian pollution. Individual patches of pollution over the Indian Ocean are transported upward by convective mixing and are enriched by pyrogenic and biogenic sources and lightning as they cross Africa and descend over the Atlantic. In summary, local sources, intercontinental import and export, and unique regional transport patterns put Africa at a crossroads of tropospheric ozone influences.

  8. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  9. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space method (VAS) to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How does reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  10. Sound-direction identification with bilateral cochlear implants.

    PubMed

    Neuman, Arlene C; Haravon, Anita; Sislian, Nicole; Waltzman, Susan B

    2007-02-01

    The purpose of this study was to compare the accuracy of sound-direction identification in the horizontal plane by bilateral cochlear implant users when localization was measured with pink noise and with speech stimuli. Eight adults who were bilateral users of Nucleus 24 Contour devices participated in the study. All had received implants in both ears in a single surgery. Sound-direction identification was measured in a large classroom by using a nine-loudspeaker array. Localization was tested in three listening conditions (bilateral cochlear implants, left cochlear implant, and right cochlear implant), using two different stimuli (a speech stimulus and pink noise bursts) in a repeated-measures design. Sound-direction identification accuracy was significantly better when using two implants than when using a single implant. The mean root-mean-square error was 29 degrees for the bilateral condition, 54 degrees for the left cochlear implant, and 46.5 degrees for the right cochlear implant condition. Unilateral accuracy was similar for right cochlear implant and left cochlear implant performance. Sound-direction identification performance was similar for speech and pink noise stimuli. The data obtained in this study add to the growing body of evidence that sound-direction identification with bilateral cochlear implants is better than with a single implant. The similarity in localization performance obtained with the speech and pink noise supports the use of either stimulus for measuring sound-direction identification.
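
    The root-mean-square error metric reported here is simply the RMS difference between response and source angles; a minimal sketch with hypothetical trial data (illustrative angles, not the study's measurements):

```python
import math

def rms_error(responses_deg, targets_deg):
    """Root-mean-square error between response angles and source angles."""
    sq = [(r - t) ** 2 for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical trials on a nine-loudspeaker arc (angles in degrees).
targets   = [-80, -60, -40, -20, 0, 20, 40, 60, 80]
responses = [-60, -40, -40, -40, 0, 40, 40, 40, 60]

err = rms_error(responses, targets)  # one summary number per condition
```

    Averaging such per-trial errors within each listening condition yields condition-level scores like the 29-degree bilateral and 46.5-degree unilateral values reported above.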

  11. Gravitoinertial force magnitude and direction influence head-centric auditory localization

    NASA Technical Reports Server (NTRS)

    DiZio, P.; Held, R.; Lackner, J. R.; Shinn-Cunningham, B.; Durlach, N.

    2001-01-01

    We measured the influence of gravitoinertial force (GIF) magnitude and direction on head-centric auditory localization to determine whether a true audiogravic illusion exists. In experiment 1, supine subjects adjusted computer-generated dichotic stimuli until they heard a fused sound straight ahead in the midsagittal plane of the head under a variety of GIF conditions generated in a slow-rotation room. The dichotic stimuli were constructed by convolving broadband noise with head-related transfer function pairs that model the acoustic filtering at the listener's ears. These stimuli give rise to the perception of externally localized sounds. When the GIF was increased from 1 to 2 g and rotated 60 degrees rightward relative to the head and body, subjects on average set an acoustic stimulus 7.3 degrees right of their head's median plane to hear it as straight ahead. When the GIF was doubled and rotated 60 degrees leftward, subjects set the sound 6.8 degrees leftward of baseline values to hear it as centered. In experiment 2, increasing the GIF in the median plane of the supine body to 2 g did not influence auditory localization. In experiment 3, tilts up to 75 degrees of the supine body relative to the normal 1 g GIF led to small shifts, 1--2 degrees, of auditory setting toward the up ear to maintain a head-centered sound localization. These results show that head-centric auditory localization is affected by azimuthal rotation and increase in magnitude of the GIF and demonstrate that an audiogravic illusion exists. Sound localization is shifted in the direction opposite GIF rotation by an amount related to the magnitude of the GIF and its angular deviation relative to the median plane.

  12. 77 FR 23119 - Annual Marine Events in the Eighth Coast Guard District, Smoking the Sound; Biloxi Ship Channel...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-18

    ... Marine Events in the Eighth Coast Guard District, Smoking the Sound; Biloxi Ship Channel; Biloxi, MS... enforce Special Local Regulations for the Smoking the Sound boat races in the Biloxi Ship Channel, Biloxi... during the Smoking the Sound boat races. During the enforcement period, entry into, transiting or...

  13. Investigation of spherical loudspeaker arrays for local active control of sound.

    PubMed

    Peleg, Tomer; Rafaely, Boaz

    2011-10-01

    Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America

  14. Ray-based acoustic localization of cavitation in a highly reverberant environment.

    PubMed

    Chang, Natasha A; Dowling, David R

    2009-05-01

Acoustic detection and localization of cavitation have inherent advantages over optical techniques because cavitation bubbles are natural sound sources, and acoustic transduction of cavitation sounds does not require optical access to the region of cavitating flow. In particular, near cavitation inception, cavitation bubbles may be visually small and occur infrequently, but may still emit audible sound pulses. In this investigation, direct-path acoustic recordings of cavitation events are made with 16 hydrophones mounted on the periphery of a water tunnel test section containing a low-cavitation-event-rate vortical flow. These recordings are used to localize the events in three dimensions via cross correlations to obtain arrival time differences. Here, bubble localization is hindered by reverberation, background noise, and the fact that both the pulse emission time and waveform are unknown. These hindrances are partially mitigated by a signal-processing scheme that incorporates straight-ray acoustic propagation and Monte Carlo techniques for compensating ray-path, sound-speed, and hydrophone-location uncertainties. The acoustic localization results are compared to simultaneous optical localization results from dual-camera high-speed digital-video recordings. For 53 bubbles and a peak signal-to-noise-ratio frequency of 6.7 kHz, the root-mean-square spatial difference between optical and acoustic bubble locations was 1.94 cm. Parametric dependences of acoustic localization performance are also presented.
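The core of the arrival-time-difference approach (cross-correlate sensor pairs, then search for the location that best explains the measured delays) can be sketched in a few lines. The sketch below is a simplified 2-D stand-in, not the paper's 16-hydrophone Monte Carlo scheme; the sensor layout, sample rate, and pulse shape are illustrative assumptions.

```python
import numpy as np

c = 1500.0      # nominal sound speed in water, m/s
fs = 100_000.0  # sample rate, Hz

# Hypothetical 2-D sensor layout (the study used 16 hydrophones in 3-D).
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
source = np.array([0.3, 0.7])   # "cavitation event" to be recovered

# Synthesize one pulse per channel, delayed by its acoustic travel time.
pulse = np.hanning(32)
signals = np.zeros((len(sensors), 4096))
for i, s in enumerate(sensors):
    d = int(round(np.linalg.norm(source - s) / c * fs))
    signals[i, d:d + len(pulse)] = pulse

def tdoa(sig, ref):
    """Arrival-time difference (s) of sig relative to ref,
    taken from the cross-correlation peak."""
    lag = np.argmax(np.correlate(sig, ref, mode="full")) - (len(ref) - 1)
    return lag / fs

measured = np.array([tdoa(signals[i], signals[0])
                     for i in range(1, len(sensors))])

# Grid search for the position whose predicted TDOAs best match the data
# (the paper layers Monte Carlo uncertainty compensation on top of this idea).
best, best_err = None, np.inf
for x in np.linspace(0.0, 1.0, 101):
    for y in np.linspace(0.0, 1.0, 101):
        p = np.array([x, y])
        pred = (np.linalg.norm(sensors[1:] - p, axis=1)
                - np.linalg.norm(sensors[0] - p)) / c
        err = np.sum((pred - measured) ** 2)
        if err < best_err:
            best, best_err = p, err
# `best` lands close to the true source at (0.3, 0.7).
```

Replacing the grid search with a Gauss-Newton solve on the same misfit gives sub-grid accuracy, but the exhaustive search makes the geometry of the problem explicit.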

  15. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders.

    PubMed

    Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K

    2013-11-01

    Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.

  16. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    NASA Astrophysics Data System (ADS)

    Perelomova, Anna

    2006-08-01

The equation of energy balance is subdivided into two dynamic equations: one describing the evolution of the dominant sound mode, and a second, novel equation governing acoustic heating. The first is the well-known KZK equation; the novel heating equation accounts for both periodic and non-periodic sound. A quasi-plane flow geometry is assumed, and the subdivision is based on the specific relations that define each mode. Media with arbitrary thermal T(p,ρ) and caloric e(p,ρ) equations of state are considered. The individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound, both in ideal gases and in media other than ideal gases, are discussed.

  17. Effect of sound level on virtual and free-field localization of brief sounds in the anterior median plane.

    PubMed

    Marmel, Frederic; Marrufo-Pérez, Miriam I; Heeren, Jan; Ewert, Stephan; Lopez-Poveda, Enrique A

    2018-06-14

    The detection of high-frequency spectral notches has been shown to be worse at 70-80 dB sound pressure level (SPL) than at higher levels up to 100 dB SPL. The performance improvement at levels higher than 70-80 dB SPL has been related to an 'ideal observer' comparison of population auditory nerve spike trains to stimuli with and without high-frequency spectral notches. Insofar as vertical localization partly relies on information provided by pinna-based high-frequency spectral notches, we hypothesized that localization would be worse at 70-80 dB SPL than at higher levels. Results from a first experiment using a virtual localization set-up and non-individualized head-related transfer functions (HRTFs) were consistent with this hypothesis, but a second experiment using a free-field set-up showed that vertical localization deteriorates monotonically with increasing level up to 100 dB SPL. These results suggest that listeners use different cues when localizing sound sources in virtual and free-field conditions. In addition, they confirm that the worsening in vertical localization with increasing level continues beyond 70-80 dB SPL, the highest levels tested by previous studies. Further, they suggest that vertical localization, unlike high-frequency spectral notch detection, does not rely on an 'ideal observer' analysis of auditory nerve spike trains. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Non-linear solitary sound waves in lipid membranes and their possible role in biological signaling

    NASA Astrophysics Data System (ADS)

    Shrivastava, Shamit

Biological macromolecules self-assemble under entropic forces to form a dynamic 2D interfacial medium whose elastic properties arise from the curvature of the entropic potential of the interface. Elastic interfaces should be capable of propagating localized perturbations analogous to sound waves. However, (1) the existence and (2) the possible role of such waves in affecting biological functions remain unexplored. Both these aspects of "sound" as a signaling mechanism in biology are explored experimentally on mixed monolayers of lipids, fluorophores and proteins at the air/water interface as a model biological interface. This study shows, for the first time, that the nonlinear susceptibility near a thermodynamic transition in a lipid monolayer results in nonlinear solitary sound waves of an 'all or none' nature. The state dependence of the nonlinear propagation is characterized by studying the velocity-amplitude relationship, and results on distance dependence, the effect of geometry, and collisions of solitary waves are presented. Given that lipid bilayers and real biological membranes have such nonlinearities in their susceptibility diagrams, similar solitary phenomena should be expected in biological membranes. In fact, the observed characteristics of the solitary sound waves, such as their 'all or none' nature, a biphasic pulse shape with a long tail, and opto-mechano-electro-thermal coupling, are strikingly similar to the phenomenon of nerve pulse propagation observed in single nerve fibers. Finally, given the strong correlation between the activity of membrane-bound enzymes and the susceptibility, and the fact that the latter varies within a single solitary pulse, a new thermodynamic basis for biological signaling is proposed: the state of the interface controls both the nature of sound propagation and its impact on incorporated enzymes and proteins. 
The proof of concept is demonstrated for acetylcholinesterase embedded in a lipid monolayer, where the enzyme is spatiotemporally "knocked out" by a propagating sound wave.

  19. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
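The distinction the study rests on, that deterministic chaos carries short-time predictability which random noise lacks, can be illustrated with a toy nonlinear-prediction test. The sketch below compares a logistic-map series against a shuffled surrogate; the shuffle is a cruder control than the spectrally matched surrogates used in the study, and the series length and predictor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Deterministic chaotic series: the logistic map at r = 4.
x = np.empty(2000)
x[0] = 0.3
for t in range(1999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Stochastic control with the same marginal distribution: a random shuffle
# (it destroys all temporal structure, including short-time predictability).
y = rng.permutation(x)

def nn_prediction_error(s, n_test=200):
    """Mean one-step error of a nearest-neighbour predictor: find the
    closest past value and predict its observed successor."""
    errs = []
    for t in range(len(s) - n_test, len(s) - 1):
        past = s[:t]
        j = np.argmin(np.abs(past[:-1] - s[t]))
        errs.append(abs(past[j + 1] - s[t + 1]))
    return float(np.mean(errs))

e_chaos = nn_prediction_error(x)
e_surr = nn_prediction_error(y)
# The chaotic series is far more predictable one step ahead than its shuffle.
```

The same idea underlies formal surrogate-data tests: if the prediction error on the real signal is significantly below that of its surrogates, the temporal fine structure carries deterministic predictability.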

  20. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  1. Physiological models of the lateral superior olive

    PubMed Central

    2017-01-01

    In computational biology, modeling is a fundamental tool for formulating, analyzing and predicting complex phenomena. Most neuron models, however, are designed to reproduce certain small sets of empirical data. Hence their outcome is usually not compatible or comparable with other models or datasets, making it unclear how widely applicable such models are. In this study, we investigate these aspects of modeling, namely credibility and generalizability, with a specific focus on auditory neurons involved in the localization of sound sources. The primary cues for binaural sound localization are comprised of interaural time and level differences (ITD/ILD), which are the timing and intensity differences of the sound waves arriving at the two ears. The lateral superior olive (LSO) in the auditory brainstem is one of the locations where such acoustic information is first computed. An LSO neuron receives temporally structured excitatory and inhibitory synaptic inputs that are driven by ipsi- and contralateral sound stimuli, respectively, and changes its spike rate according to binaural acoustic differences. Here we examine seven contemporary models of LSO neurons with different levels of biophysical complexity, from predominantly functional ones (‘shot-noise’ models) to those with more detailed physiological components (variations of integrate-and-fire and Hodgkin-Huxley-type). These models, calibrated to reproduce known monaural and binaural characteristics of LSO, generate largely similar results to each other in simulating ITD and ILD coding. 
Our comparisons of physiological detail, computational efficiency, predictive performances, and further expandability of the models demonstrate (1) that the simplistic, functional LSO models are suitable for applications where low computational costs and mathematical transparency are needed, (2) that more complex models with detailed membrane potential dynamics are necessary for simulation studies where sub-neuronal nonlinear processes play important roles, and (3) that, for general purposes, intermediate models might be a reasonable compromise between simplicity and biological plausibility. PMID:29281618

  2. 40 CFR 205.54-2 - Sound data acquisition system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... meets the “fast” dynamic requirement of a precision sound level meter indicating meter system for the... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Sound data acquisition system. 205.54... data acquisition system. (a) Systems employing tape recorders and graphic level recorders may be...

  3. Breaking the Sound Barrier

    ERIC Educational Resources Information Center

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  4. Dynamic Substrate for the Physical Encoding of Sensory Information in Bat Biosonar

    NASA Astrophysics Data System (ADS)

    Müller, Rolf; Gupta, Anupam K.; Zhu, Hongxiao; Pannala, Mittu; Gillani, Uzair S.; Fu, Yanqing; Caspers, Philip; Buck, John R.

    2017-04-01

    Horseshoe bats have dynamic biosonar systems with interfaces for ultrasonic emission (reception) that change shape while diffracting the outgoing (incoming) sound waves. An information-theoretic analysis based on numerical and physical prototypes shows that these shape changes add sensory information (mutual information between distant shape conformations <20 %), increase the number of resolvable directions of sound incidence, and improve the accuracy of direction finding. These results demonstrate that horseshoe bats have a highly effective substrate for dynamic encoding of sensory information.

  5. Dynamic Substrate for the Physical Encoding of Sensory Information in Bat Biosonar.

    PubMed

    Müller, Rolf; Gupta, Anupam K; Zhu, Hongxiao; Pannala, Mittu; Gillani, Uzair S; Fu, Yanqing; Caspers, Philip; Buck, John R

    2017-04-14

    Horseshoe bats have dynamic biosonar systems with interfaces for ultrasonic emission (reception) that change shape while diffracting the outgoing (incoming) sound waves. An information-theoretic analysis based on numerical and physical prototypes shows that these shape changes add sensory information (mutual information between distant shape conformations <20%), increase the number of resolvable directions of sound incidence, and improve the accuracy of direction finding. These results demonstrate that horseshoe bats have a highly effective substrate for dynamic encoding of sensory information.

  6. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, in which a minimum power distortionless response beamformer localizes the sources and Tikhonov regularization extracts the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model required in the pressure-matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
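The amplitude-extraction step of the two-stage algorithm is, in essence, Tikhonov-regularized least squares. A minimal sketch, with a hypothetical random transfer matrix G standing in for the array's plane-wave/room-response model (sizes and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transfer matrix G: 24 microphones x 5 candidate sources.
m, s = 24, 5
G = rng.standard_normal((m, s)) + 1j * rng.standard_normal((m, s))
x_true = np.array([1.0, 0.0, 0.5, 0.0, 0.0], dtype=complex)

# Measured array pressures: forward model plus a little sensor noise.
p = G @ x_true + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

def tikhonov(G, p, lam):
    """x = argmin ||G x - p||^2 + lam ||x||^2, via the closed-form
    normal equations (G^H G + lam I) x = G^H p."""
    A = G.conj().T @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, G.conj().T @ p)

x_hat = tikhonov(G, p, lam=1e-2)
# x_hat recovers x_true up to noise; lam trades bias against noise sensitivity.
```

The regularization parameter lam is exactly the "choice of regularization parameters" the abstract flags as vital: too small and the inversion amplifies noise, too large and the recovered amplitudes are biased toward zero.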

  7. Development and application of an empirical probability distribution for the prediction error of re-entry body maximum dynamic pressure

    NASA Technical Reports Server (NTRS)

    Lanzi, R. James; Vincent, Brett T.

    1993-01-01

The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of these data, together with observed sounding rocket re-entry body damage characteristics, to assess the probabilities of sustaining various levels of heating damage. The results effectively bridge the gap in sounding rocket re-entry analysis between the known damage level/flight environment relationships and the predicted flight environment.

  8. First and second sound in a strongly interacting Fermi gas

    NASA Astrophysics Data System (ADS)

    Taylor, E.; Hu, H.; Liu, X.-J.; Pitaevskii, L. P.; Griffin, A.; Stringari, S.

    2009-11-01

    Using a variational approach, we solve the equations of two-fluid hydrodynamics for a uniform and trapped Fermi gas at unitarity. In the uniform case, we find that the first and second sound modes are remarkably similar to those in superfluid helium, a consequence of strong interactions. In the presence of harmonic trapping, first and second sound become degenerate at certain temperatures. At these points, second sound hybridizes with first sound and is strongly coupled with density fluctuations, giving a promising way of observing second sound. We also discuss the possibility of exciting second sound by generating local heat perturbations.

  9. NASA sounding rockets, 1958 - 1968: A historical summary

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1971-01-01

The development and use of sounding rockets are traced from the Wac Corporal through the present generation of rockets. The Goddard Space Flight Center Sounding Rocket Program is discussed, and the use of sounding rockets during the IGY and the 1960's is described. Advantages of sounding rockets are identified as vehicle and payload simplicity, low costs, payload recoverability, geographic flexibility, and temporal flexibility. The disadvantages are restricted time of observation, localized coverage, and payload limitations. Descriptions of major sounding rockets, trends in vehicle usage, and a compendium of NASA sounding rocket firings are also included.

  10. Contribution of self-motion perception to acoustic target localization.

    PubMed

    Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D

    2005-05-01

    The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamic of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of sound in a head-related frame, and on the sensory systems, namely the vestibular system, which perceive head and body movement. The aim of this study was to analyse the contribution of head motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue with a laser pointer an acoustic target which was horizontally rotated while the body was kept stationary or maintained stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of the pursuit in the two experimental conditions when the frequency was varied. During acoustic target rotation there was a reduction in the gain and an increase in the phase lag, while during whole-body rotations the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing the acoustic pursuit during asymmetric body rotation. In this particular condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.

  11. Ballistic range experiments on superbooms generated by refraction

    NASA Technical Reports Server (NTRS)

    Sanai, M.; Toong, T.-Y.; Pierce, A. D.

    1976-01-01

The enhanced sonic boom, or superboom, generated as a result of atmospheric refraction in threshold Mach number flights was recreated in a ballistic range by firing projectiles at low supersonic speeds into a stratified medium obtained by slowly injecting carbon dioxide into air. The range was equipped with a fast-response dynamic pressure transducer and schlieren photographic equipment, and the sound speed variation with height was controlled by regulating the flow rate of the CO2. The schlieren observations of the resulting flow field indicate that the generated shocks are reflected near the sonic cutoff altitude, where the local sound speed equals the body speed, provided such an altitude exists. Maximum shock strength occurs very nearly at the point where the incident and reflected shocks join, indicating that the presence of the reflected shock may have an appreciable effect on the magnitude of the focus factor. The largest focus factor detected was 1.7 and leads to an estimate that the constant in the Guiraud-Thery scaling law should have a value of 1.30.

  12. Coupled auralization and virtual video for immersive multimedia displays

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
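Early-reflection synthesis of the kind described is commonly done with an image-source model: each wall reflection is replaced by a mirrored copy of the source, whose distance to the listener gives the reflection's delay. The sketch below computes first-order reflection delays for a shoebox room; the room, source, and microphone positions are illustrative assumptions, not the installation described in the paper.

```python
import numpy as np

c = 343.0  # speed of sound in air, m/s

def first_order_images(src, room):
    """Image-source positions for the six first-order wall reflections of a
    shoebox room with one corner at the origin and the opposite at `room`."""
    images = []
    for axis in range(3):
        lo = src.copy(); lo[axis] = -src[axis]                  # wall at 0
        hi = src.copy(); hi[axis] = 2 * room[axis] - src[axis]  # far wall
        images += [lo, hi]
    return np.array(images)

room = np.array([5.0, 4.0, 3.0])   # room dimensions, m
src = np.array([1.0, 2.0, 1.5])    # (possibly moving) virtual source
mic = np.array([4.0, 1.0, 1.5])    # listener position

direct_delay = np.linalg.norm(src - mic) / c
reflection_delays = np.linalg.norm(first_order_images(src, room) - mic,
                                   axis=1) / c
# Every reflected path is longer than the direct one, so each reflection
# arrives after the direct sound and is rendered with its own delay line.
```

In a real-time auralizer these delays (updated as the tracked source moves) feed fractional delay lines and per-path attenuation, with a statistical reverberant tail appended after the early reflections.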

  13. The Sound Patterns of Camuno: Description and Explanation in Evolutionary Phonology

    ERIC Educational Resources Information Center

    Cresci, Michela

    2014-01-01

    This dissertation presents a linguistic study of the sound patterns of Camuno framed within Evolutionary Phonology (Blevins, 2004, 2006, to appear). Camuno is a variety of Eastern Lombard, a Romance language of northern Italy, spoken in Valcamonica. Camuno is not a local variety of Italian, but a sister of Italian, a local divergent development of…

  14. Light-induced vibration in the hearing organ

    PubMed Central

    Ren, Tianying; He, Wenxuan; Li, Yizeng; Grosh, Karl; Fridberger, Anders

    2014-01-01

The exceptional sensitivity of mammalian hearing organs is attributed to an active process, where force produced by sensory cells boosts sound-induced vibrations, making soft sounds audible. This process is thought to be local, with each section of the hearing organ capable of amplifying sound-evoked movement, and nearly instantaneous, since amplification can work for sounds at frequencies up to 100 kHz in some species. To test these fundamental precepts, we developed a method for focally stimulating the living hearing organ with light. Light pulses caused intense and highly damped mechanical responses followed by traveling waves that developed with considerable delay. The delayed response was identical to movements evoked by click-like sounds. This shows that the active process is neither local nor instantaneous, but requires mechanical waves traveling from the cochlear base toward its apex. A physiologically based mathematical model shows that such waves engage the active process, enhancing hearing sensitivity. PMID:25087606

  15. Sound source localization inspired by the ears of the Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Kuntzman, Michael L.; Hall, Neal A.

    2014-07-01

The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is remarkable because the fly's hearing mechanism spans only 1.5 mm, roughly 50× smaller than the wavelength of the sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. It has been discovered that evolution has empowered the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of this special fly. A silicon-micromachined prototype is presented that uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.

  16. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders

    PubMed Central

    Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.

    2013-01-01

    Background Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845

  17. Better protection from blasts without sacrificing situational awareness.

    PubMed

    Killion, Mead C; Monroe, Tim; Drambarean, Viorel

    2011-03-01

    A large number of soldiers returning from war report hearing loss and/or tinnitus. Many deployed soldiers decline to wear their hearing protection devices (HPDs) because they feel that earplugs interfere with their ability to detect and localize the enemy and their friends. The detection problem is easily handled in electronic devices with low-noise microphones. The localization problem is not as easy. In this paper, the factors that reduce situational awareness--hearing loss and restricted bandwidth in HPD devices--are discussed in light of available data, followed by a review of the cues to localization. Two electronic blast plug earplugs with 16-kHz bandwidth are described. Both provide subjectively transparent sound with regard to sound quality and localization, i.e., they sound almost as if nothing is in the ears, while protecting the ears from blasts. Finally, two formal experiments are described which investigated localization performance compared to popular existing military HPDs and the open ear. The tested earplugs performed well regarding maintaining situational awareness. Detection-distance and acceptance studies are underway.

  18. Structured Counseling for Auditory Dynamic Range Expansion.

    PubMed

    Gold, Susan L; Formby, Craig

    2017-02-01

    A structured counseling protocol is described that, when combined with low-level broadband sound therapy from bilateral sound generators, offers audiologists a new tool for facilitating the expansion of the auditory dynamic range (DR) for loudness. The protocol and its content are specifically designed to address and treat problems that impact hearing-impaired persons who, due to their reduced DRs, may be limited in the use and benefit of amplified sound from hearing aids. The reduced DRs may result from elevated audiometric thresholds and/or reduced sound tolerance as documented by lower-than-normal loudness discomfort levels (LDLs). Accordingly, the counseling protocol is appropriate for challenging and difficult-to-fit persons with sensorineural hearing losses who experience loudness recruitment or hyperacusis. Positive treatment outcomes for individuals with the former and latter conditions are highlighted in this issue by incremental shifts (improvements) in LDL and/or categorical loudness judgments, associated reduced complaints of sound intolerance, and functional improvements in daily communication, speech understanding, and quality of life leading to improved hearing aid benefit, satisfaction, and aided sound quality, posttreatment.

  19. Structured Counseling for Auditory Dynamic Range Expansion

    PubMed Central

    Gold, Susan L.; Formby, Craig

    2017-01-01

    A structured counseling protocol is described that, when combined with low-level broadband sound therapy from bilateral sound generators, offers audiologists a new tool for facilitating the expansion of the auditory dynamic range (DR) for loudness. The protocol and its content are specifically designed to address and treat problems that impact hearing-impaired persons who, due to their reduced DRs, may be limited in the use and benefit of amplified sound from hearing aids. The reduced DRs may result from elevated audiometric thresholds and/or reduced sound tolerance, as documented by lower-than-normal loudness discomfort levels (LDLs). Accordingly, the counseling protocol is appropriate for challenging and difficult-to-fit persons with sensorineural hearing losses who experience loudness recruitment or hyperacusis. Positive treatment outcomes for individuals with the former and latter conditions are highlighted in this issue: incremental improvements in LDLs and/or categorical loudness judgments, reduced complaints of sound intolerance, and functional gains in daily communication, speech understanding, and quality of life, leading post-treatment to improved hearing aid benefit, satisfaction, and aided sound quality. PMID:28286367

  20. Towards Dynamic Contrast Specific Ultrasound Tomography

    NASA Astrophysics Data System (ADS)

    Demi, Libertario; van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo

    2016-10-01

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast.
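
    The reported linear relation between CPD values and second-harmonic intensity (coefficient of determination 0.87) is an ordinary least-squares result; a minimal sketch of such a fit, using hypothetical data in place of the study's measurements:

```python
import numpy as np

def linear_fit_r2(x, y):
    """Ordinary least-squares line y = a*x + b and the coefficient of
    determination R^2 of the fit."""
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical CPD values vs. second-harmonic intensities (arbitrary units)
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
y = np.array([1.1, 2.3, 2.9, 4.2, 4.8, 6.3])
a, b, r2 = linear_fit_r2(x, y)
```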

  1. Towards Dynamic Contrast Specific Ultrasound Tomography.

    PubMed

    Demi, Libertario; Van Sloun, Ruud J G; Wijkstra, Hessel; Mischi, Massimo

    2016-10-05

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast.

  2. Towards Dynamic Contrast Specific Ultrasound Tomography

    PubMed Central

    Demi, Libertario; Van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo

    2016-01-01

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast. PMID:27703251

  3. Method for creating an aeronautic sound shield having gas distributors arranged on the engines, wings, and nose of an aircraft

    NASA Technical Reports Server (NTRS)

    Corda, Stephen (Inventor); Smith, Mark Stephen (Inventor); Myre, David Daniel (Inventor)

    2008-01-01

    The present invention blocks and/or attenuates the upstream travel of acoustic disturbances or sound waves from a flight vehicle or components of a flight vehicle traveling at subsonic speed using a local injection of a high molecular weight gas. Additional benefit may also be obtained by lowering the temperature of the gas. Preferably, the invention has a means of distributing the high molecular weight gas from the nose, wing, component, or other structure of the flight vehicle into the upstream or surrounding air flow. Two techniques for distribution are direct gas injection and sublimation of the high molecular weight solid material from the vehicle surface. The high molecular weight and low temperature of the gas significantly decreases the local speed of sound such that a localized region of supersonic flow and possibly shock waves are formed, preventing the upstream travel of sound waves from the flight vehicle.
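
    The mechanism rests on the ideal-gas speed of sound, c = sqrt(gamma*R*T/M), which falls as molar mass M rises (and as temperature T drops). A small illustration; the heavy-gas choice (SF6) and the conditions are assumptions for the example, not taken from the patent:

```python
import math

def speed_of_sound(gamma, molar_mass_kg_mol, temp_k, R=8.314):
    """Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * temp_k / molar_mass_kg_mol)

# Air (gamma ~ 1.40, M ~ 0.029 kg/mol) vs. the much heavier SF6
c_air = speed_of_sound(1.40, 0.029, 293.0)   # ~343 m/s
c_sf6 = speed_of_sound(1.10, 0.146, 293.0)   # ~135 m/s: heavy gas, slow sound
```

    A flow that is subsonic relative to the ambient air can thus be locally supersonic in the injected gas layer, blocking upstream-traveling sound.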

  4. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia.

    PubMed

    Castro-Camacho, Wendy; Peñaloza-López, Yolanda; Pérez-Ruiz, Santiago J; García-Pedroza, Felipe; Padilla-Ortiz, Ana L; Poblano, Adrián; Villarruel-Rivas, Concepción; Romero-Díaz, Alfredo; Careaga-Olvera, Aidé

    2015-04-01

    The aim was to compare whether sound localization and word discrimination in a reverberant environment differ between children with dyslexia and controls. We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles across the left and right auditory fields (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions, and correct answers were compared. Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance at left-side angles in children with dyslexia. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Children with dyslexia may have problems localizing sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.

  5. The Radio Plasma Imager Investigation on the IMAGE Spacecraft

    NASA Technical Reports Server (NTRS)

    Reinisch, Bodo W.; Haines, D. M.; Bibl, K.; Cheney, G.; Galkin, I. A.; Huang, X.; Myers, S. H.; Sales, G. S.; Benson, R. F.; Fung, S. F.

    1999-01-01

    Radio plasma imaging uses total reflection of electromagnetic waves from plasmas whose plasma frequencies equal the radio sounding frequency and whose electron density gradients are parallel to the wave normals. The Radio Plasma Imager (RPI) has two orthogonal 500-m long dipole antennas in the spin plane for near omni-directional transmission. The third antenna is a 20-m dipole. Echoes from the magnetopause, plasmasphere and cusp will be received with three orthogonal antennas, allowing the determination of their angle-of-arrival. Thus it will be possible to create image fragments of the reflecting density structures. The instrument can execute a large variety of programmable measuring programs operating at frequencies between 3 kHz and 3 MHz. Tuning of the transmit antennas provides optimum power transfer from the 10 W transmitter to the antennas. The instrument can operate in three active sounding modes: (1) remote sounding to probe magnetospheric boundaries, (2) local (relaxation) sounding to probe the local plasma, and (3) whistler stimulation sounding. In addition, there is a passive mode to record natural emissions, and to determine the local electron density and temperature by using a thermal noise spectroscopy technique.
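
    Radio sounding works because waves totally reflect where the local plasma frequency equals the sounding frequency. Using the standard approximation f_p ~ 8980*sqrt(n_e) Hz (n_e in cm^-3), the RPI band of 3 kHz to 3 MHz maps to an electron-density range, sketched here:

```python
def electron_density(f_hz):
    """Electron density (cm^-3) whose plasma frequency equals the sounding
    frequency, via the standard approximation f_p ~ 8980 * sqrt(n_e) Hz."""
    return (f_hz / 8980.0) ** 2

# RPI sounding band, 3 kHz to 3 MHz:
n_lo = electron_density(3e3)   # ~0.11 cm^-3 (tenuous magnetosphere)
n_hi = electron_density(3e6)   # ~1.1e5 cm^-3 (dense plasmasphere)
```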

  6. Radio Sounding Science at High Powers

    NASA Technical Reports Server (NTRS)

    Green, J. L.; Reinisch, B. W.; Song, P.; Fung, S. F.; Benson, R. F.; Taylor, W. W. L.; Cooper, J. F.; Garcia, L.; Markus, T.; Gallagher, D. L.

    2004-01-01

    Future space missions like the Jupiter Icy Moons Orbiter (JIMO) planned to orbit Callisto, Ganymede, and Europa can fully utilize a variable power radio sounder instrument. Radio sounding at 1 kHz to 10 MHz at medium power levels (10 W to kW) will provide long-range magnetospheric sounding (several Jovian radii) like those first pioneered by the radio plasma imager instrument on IMAGE at low power (less than l0 W) and much shorter distances (less than 5 R(sub E)). A radio sounder orbiting a Jovian icy moon would be able to globally measure time-variable electron densities in the moon ionosphere and the local magnetospheric environment. Near-spacecraft resonance and guided echoes respectively allow measurements of local field magnitude and local field line geometry, perturbed both by direct magnetospheric interactions and by induced components from subsurface oceans. JIMO would allow radio sounding transmissions at much higher powers (approx. 10 kW) making subsurface sounding of the Jovian icy moons possible at frequencies above the ionosphere peak plasma frequency. Subsurface variations in dielectric properties, can be probed for detection of dense and solid-liquid phase boundaries associated with oceans and related structures in overlying ice crusts.

  7. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.

  8. Degradation of Auditory Localization Performance Due to Helmet Ear Coverage: The Effects of Normal Acoustic Reverberation

    DTIC Science & Technology

    2009-07-01

    Therefore, it is safe to assume that most large errors are due to front-back confusions. Front-back confusions occur in part because the binaural (two-ear) cues that dominate sound localization do not distinguish the front and rear hemispheres. The two binaural cues relied on are interaural time and level differences.
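
    The interaural time difference, one of the two binaural cues, can be estimated from a two-channel recording via the cross-correlation peak. A minimal sketch (not from the report), checked with a synthetic delayed-noise example:

```python
import numpy as np

def itd_seconds(left, right, fs):
    """Estimate the interaural time difference from the cross-correlation peak.
    Positive result: the left-ear signal leads (source toward the left)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # samples by which `left` lags
    return -lag / fs

# Synthetic check: right ear delayed by 10 samples relative to the left
fs = 44100
rng = np.random.default_rng(0)
left = rng.standard_normal(1024)
delay = 10
right = np.concatenate([np.zeros(delay), left])[: len(left)]
itd = itd_seconds(left, right, fs)            # ~ +10/44100 s
```

    Note that an ITD alone is front-back ambiguous, which is exactly why helmets that disturb the monaural spectral cues increase front-back confusions.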

  9. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  10. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  11. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  12. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  13. Full Spatial Resolution Infrared Sounding Application in the Preconvection Environment

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, G.; Lin, T.

    2013-12-01

    Advanced infrared (IR) sounders such as the Atmospheric Infrared Sounder (AIRS) and Infrared Atmospheric Sounding Interferometer (IASI) provide atmospheric temperature and moisture profiles with high vertical resolution and high accuracy in preconvection environments. Derived atmospheric stability indices such as convective available potential energy (CAPE) and lifted index (LI) from advanced IR soundings can provide critical information 1-6 h before the development of severe convective storms. Three convective storms are selected to evaluate the application of AIRS full spatial resolution soundings and derived products in providing warning information in the preconvection environment. In the first case, the AIRS full spatial resolution soundings revealed locally extreme atmospheric instability 3 h ahead of convection on the leading edge of a frontal system, while the second case demonstrates that extremely high atmospheric instability was associated with the local development of a severe thunderstorm in the following hours. The third case is a local severe storm that occurred on 7-8 August 2010 in Zhou Qu, China, which caused more than 1400 deaths and left another 300 or more people missing. The AIRS full spatial resolution LI product shows the atmospheric instability 3.5 h before storm genesis. The CAPE and LI from AIRS full spatial resolution and operational AIRS/AMSU soundings, along with Geostationary Operational Environmental Satellite (GOES) Sounder derived product image (DPI) products, were analyzed and compared. Case studies show that full spatial resolution AIRS retrievals provide more useful warning information in the preconvection environment for determining favorable locations for convective initiation (CI) than do the coarser spatial resolution operational soundings and lower spectral resolution GOES Sounder retrievals. The retrieved soundings were also tested in a regional WRF 3D-Var data assimilation system to evaluate their potential to assist the NWP model.
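
    The lifted index cited above is simply the environmental temperature minus the lifted-parcel temperature at 500 hPa, with negative values indicating instability. A schematic sketch (real retrievals lift the parcel along a moist adiabat from the surface; the numbers below are hypothetical):

```python
def lifted_index(t_env_500_c, t_parcel_500_c):
    """Lifted index (K): environmental minus lifted-parcel temperature at
    500 hPa. Negative values indicate instability; below about -6, extreme."""
    return t_env_500_c - t_parcel_500_c

# Hypothetical sounding: environment -14 C, lifted parcel -9 C at 500 hPa
li = lifted_index(-14.0, -9.0)   # -5: very unstable
```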

  14. Transmission acoustic microscopy investigation

    NASA Astrophysics Data System (ADS)

    Maev, Roman; Kolosov, Oleg; Levin, Vadim; Lobkis, Oleg

    The nature of acoustic contrast, i.e., the connection of the amplitude and phase of the output signal of the acoustic microscope with the local values of the acoustic parameters of the sample (density, elasticity, viscosity), is a central problem of acoustic microscopy. A considerable number of studies have been devoted to the formation of the output signal of the reflection scanning acoustic microscope; for the transmission acoustic microscope (TAM) this problem has remained almost unstudied. Experimental investigation of the confocal system of the TAM was carried out on an independently manufactured laboratory mockup of the TAM with a working frequency of 420 MHz. Acoustic lenses with a radius of curvature of about 500 microns and an aperture angle of 45 deg were polished into the end faces of two cylindrical sound conductors made from Al2O3 single crystals with an axis parallel to the C axis of the crystal (sound conductor length 20 mm; diameter 6 mm). At the end faces of the sound conductors opposite the lenses, CdS transducers with a diameter of 2 mm were disposed. The electric channel of the TAM made it possible to register the amplitude of the microscope output signal over a dynamic range of 50 dB.

  15. Numerical Study of Sound Emission by 2D Regular and Chaotic Vortex Configurations

    NASA Astrophysics Data System (ADS)

    Knio, Omar M.; Collorec, Luc; Juvé, Daniel

    1995-02-01

    The far-field noise generated by a system of three Gaussian vortices lying over a flat boundary is numerically investigated using a two-dimensional vortex element method. The method is based on the discretization of the vorticity field into a finite number of smoothed vortex elements with overlapping spherical cores. The elements are convected in a Lagrangian reference frame along particle trajectories using the local velocity vector, given in terms of a desingularized Biot-Savart law. The initial structure of the vortex system is triangular; a one-dimensional family of initial configurations is constructed by keeping one side of the triangle fixed and vertical and varying the abscissa of the centroid of the remaining vortex. The inviscid dynamics of this vortex configuration are first investigated using non-deformable vortices. Depending on the aspect ratio of the initial system, regular or chaotic motion occurs. Due to wall-related symmetries, the far-field sound always exhibits a time-independent quadrupolar directivity with maxima parallel and perpendicular to the wall. When regular motion prevails, the noise spectrum is dominated by discrete frequencies which correspond to the fundamental system frequency and its superharmonics. For chaotic motion, a broadband spectrum is obtained; computed sound levels are substantially higher than in non-chaotic systems. A more sophisticated analysis is then performed which accounts for vortex core dynamics. Results show that the vortex cores are susceptible to an inviscid instability which leads to violent vorticity reorganization within the core. This phenomenon has little effect on the large-scale features of the motion of the system or on low-frequency sound emission. However, it leads to the generation of a high-frequency noise band in the acoustic pressure spectrum. The latter is observed in both regular and chaotic system simulations.
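
    A desingularized Biot-Savart law replaces the singular 1/r^2 kernel with a smoothed one, so that nearby elements induce finite velocities. A minimal 2D sketch (the smoothing form and core radius are illustrative assumptions; the wall would additionally be handled with image vortices):

```python
import numpy as np

def induced_velocity(targets, vortices, gammas, delta=0.05):
    """Velocity induced at `targets` by 2D point vortices via a desingularized
    Biot-Savart law: u = Gamma/(2*pi) * (-dy, dx) / (r^2 + delta^2)."""
    u = np.zeros_like(targets, dtype=float)
    for (xv, yv), g in zip(vortices, gammas):
        dx = targets[:, 0] - xv
        dy = targets[:, 1] - yv
        r2 = dx**2 + dy**2 + delta**2      # core smoothing removes the singularity
        u[:, 0] += -g * dy / (2 * np.pi * r2)
        u[:, 1] += g * dx / (2 * np.pi * r2)
    return u

# A unit-circulation vortex at the origin induces counterclockwise flow:
v = induced_velocity(np.array([[1.0, 0.0]]),
                     np.array([[0.0, 0.0]]), gammas=[1.0])
```

    Convecting the elements then amounts to evaluating this velocity at each element position and stepping the positions forward in time.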

  16. Ultrafast atomic-scale visualization of acoustic phonons generated by optically excited quantum dots

    PubMed Central

    Vanacore, Giovanni M.; Hu, Jianbo; Liang, Wenxi; Bietti, Sergio; Sanguinetti, Stefano; Carbone, Fabrizio; Zewail, Ahmed H.

    2017-01-01

    Understanding the dynamics of atomic vibrations confined in quasi-zero dimensional systems is crucial from both a fundamental point-of-view and a technological perspective. Using ultrafast electron diffraction, we monitored the lattice dynamics of GaAs quantum dots—grown by Droplet Epitaxy on AlGaAs—with sub-picosecond and sub-picometer resolutions. An ultrafast laser pulse nearly resonantly excites a confined exciton, which efficiently couples to high-energy acoustic phonons through the deformation potential mechanism. The transient behavior of the measured diffraction pattern reveals the nonequilibrium phonon dynamics both within the dots and in the region surrounding them. The experimental results are interpreted within the theoretical framework of a non-Markovian decoherence, according to which the optical excitation creates a localized polaron within the dot and a travelling phonon wavepacket that leaves the dot at the speed of sound. These findings indicate that integration of a phononic emitter in opto-electronic devices based on quantum dots for controlled communication processes can be fundamentally feasible. PMID:28852685

  17. Spatial and identity negative priming in audition: evidence of feature binding in auditory spatial memory.

    PubMed

    Mayr, Susanne; Buchner, Axel; Möller, Malte; Hauke, Robert

    2011-08-01

    Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: participants were impaired on trials with identity-location mismatches between the prime distractor and the probe target, that is, when either the sound was repeated but not the location, or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: responding was slowed when the prime distractor sound was repeated as the probe target but at another location; identity changes at the same location were not impaired. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or to locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.

  18. Linear multivariate evaluation models for spatial perception of soundscape.

    PubMed

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

    Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. Spatial perception is significant to soundscape, yet previous studies on the auditory spatial perception of the soundscape environment have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments for subjective spatial perception (SSP), an analysis among semantic parameters, the interaural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamic (D), and SSP is introduced to verify the independent effect of each parameter and to re-determine some of their possible relationships. The results show that the noisier the environment was perceived to be, the worse the listeners' spatial awareness; conversely, closer and more directional sound-source image variations, greater dynamics, and more sound sources in the soundscape yielded better spatial awareness. Thus, the sensations of roughness, sound intensity, and transient dynamics, and the values of Leq and IACC, have a suitable range for better spatial perception. Better spatial awareness also appears to slightly increase listener preference. Finally, setting SSPs as functions of the semantic parameters and Leq-D-IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
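
    The IACC used as a predictor is conventionally the peak of the normalized cross-correlation of the two ear signals within about +/- 1 ms of lag; a minimal sketch (the window length and lag range below are assumptions):

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: peak magnitude of the
    normalized cross-correlation of the ear signals within +/- max_lag_ms."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
    corr = np.correlate(left, right, mode="full") / norm
    center = len(right) - 1                   # index of zero lag
    return np.max(np.abs(corr[center - max_lag : center + max_lag + 1]))

fs = 48000
rng = np.random.default_rng(1)
diotic = rng.standard_normal(4800)            # identical at both ears
dichotic = rng.standard_normal(4800)          # independent at each ear
print(iacc(diotic, diotic, fs))               # 1.0: fully coherent
print(iacc(diotic, dichotic, fs))             # near 0: decorrelated
```

    Values near 1 indicate a compact, coherent sound image; lower values a diffuse one.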

  19. Numerical calculation of listener-specific head-related transfer functions and sound localization: Microphone model and mesh discretization

    PubMed Central

    Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang

    2015-01-01

    Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method on the geometry of a listener’s head and pinnae. The calculation results are defined by geometrical, numerical, and acoustical parameters like the microphone used in acoustic measurements. The scope of this study was to estimate requirements on the size and position of the microphone model and on the discretization of the boundary geometry as triangular polygon mesh for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, the localization performance in sagittal planes, however, degraded for larger AELs with the geometrical error as dominant factor. A microphone position at an arbitrary position at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded a localization performance similar to or better than observed with acoustically measured HRTFs. PMID:26233020
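
    The study's time-of-arrival model is not detailed here; the classic Woodworth spherical-head formula, ITD = (r/c)(theta + sin theta), illustrates how a measured arrival-time difference maps to an equivalent head radius (a sketch under that assumption):

```python
import math

def woodworth_itd(radius_m, azimuth_rad, c=343.0):
    """Woodworth spherical-head model: ITD = (r/c) * (theta + sin(theta))."""
    return radius_m / c * (azimuth_rad + math.sin(azimuth_rad))

def equivalent_head_radius(itd_s, azimuth_rad, c=343.0):
    """Invert the model: head radius implied by an ITD measured at azimuth."""
    return itd_s * c / (azimuth_rad + math.sin(azimuth_rad))

itd_90 = woodworth_itd(0.0875, math.pi / 2)   # ~0.66 ms for an 8.75-cm head
r_eq = equivalent_head_radius(itd_90, math.pi / 2)
```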

  20. Statistical mechanics of self-driven Carnot cycles.

    PubMed

    Smith, E

    1999-10-01

    The spontaneous generation and finite-amplitude saturation of sound, in a traveling-wave thermoacoustic engine, are derived as properties of a second-order phase transition. It has previously been argued that this dynamical phase transition, called "onset," has an equivalent equilibrium representation, but the saturation mechanism and scaling were not computed. In this work, the sound modes implementing the engine cycle are coarse-grained and statistically averaged, in a partition function derived from microscopic dynamics on criteria of scale invariance. Self-amplification performed by the engine cycle is introduced through higher-order modal interactions. Stationary points and fluctuations of the resulting phenomenological Lagrangian are analyzed and related to background dynamical currents. The scaling of the stable sound amplitude near the critical point is derived and shown to arise universally from the interaction of finite-temperature disorder, with the order induced by self-amplification.

  1. Subtidal sea level variability in a shallow Mississippi River deltaic estuary, Louisiana

    USGS Publications Warehouse

    Snedden, G.A.; Cable, J.E.; Wiseman, W.J.

    2007-01-01

    The relative roles of river, atmospheric, and tidal forcings on estuarine sea level variability are examined in Breton Sound, a shallow (0.7 m) deltaic estuary situated in an interdistributary basin on the Mississippi River deltaic plain. The deltaic landscape contains vegetated marshes, tidal flats, circuitous channels, and other features that frictionally dissipate waves propagating through the system. Direct forcing by local wind stress over the surface of the estuary is minimal, owing to the lack of significant fetch due to landscape features of the estuary. Atmospheric forcing occurs almost entirely through remote forcing, where alongshore winds facilitate estuary-shelf exchange through coastal Ekman convergence. The highly frictional nature of the deltaic landscape causes the estuary to act as a low-pass filter to remote atmospheric forcing, where high-frequency, coastally-induced fluctuations are significantly damped, and the damping increases with distance from the estuary mouth. During spring, when substantial quantities of controlled Mississippi River inputs (mean discharge q̄ = 62 m³ s⁻¹) are discharged into the estuary, upper estuary subtidal sea levels are forced by a combination of river and remote atmospheric forcings, while river effects are less clear downestuary. During autumn (q̄ = 7 m³ s⁻¹) sea level variability throughout the estuary is governed entirely by coastal variations at the marine boundary. A frequency-dependent analytical model, previously used to describe sea level dynamics forced by local wind stress and coastal forcing in deeper, less frictional systems, is applied in the shallow Breton Sound estuary. In contrast to deeper systems where coastally-induced fluctuations exhibit little or no frictional attenuation inside the estuary, these fluctuations in the shallow Breton Sound estuary show strong frequency-dependent amplitude reductions that extend well into the subtidal frequency spectrum. © 2007 Estuarine Research Federation.
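
    The low-pass behavior described above can be pictured with a first-order filter response, |H(f)| = 1/sqrt(1 + (f/f_c)^2); the cutoff below is hypothetical, chosen only to illustrate how slow weather-band fluctuations pass while higher-frequency coastal fluctuations are damped:

```python
import math

def attenuation(f_cpd, cutoff_cpd):
    """Amplitude gain of a first-order low-pass response,
    |H(f)| = 1 / sqrt(1 + (f / f_c)^2), frequencies in cycles per day."""
    return 1.0 / math.sqrt(1.0 + (f_cpd / cutoff_cpd) ** 2)

# Hypothetical 0.3-cpd cutoff: weather-band forcing passes, diurnal is damped
g_weather = attenuation(0.2, 0.3)   # ~0.83
g_diurnal = attenuation(1.0, 0.3)   # ~0.29
```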

  2. Noise-induced hearing loss alters the temporal dynamics of auditory-nerve responses

    PubMed Central

    Scheidt, Ryan E.; Kale, Sushrut; Heinz, Michael G.

    2010-01-01

    Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids. PMID:20696230
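
    The post-stimulus time histogram underlying these metrics is a spike-rate-versus-time histogram pooled over stimulus repetitions; a minimal sketch (the bin width, trial count, and spike times below are hypothetical, not the study's values):

```python
import numpy as np

def psth(spike_times_s, n_trials, bin_ms=0.5, duration_ms=50.0):
    """Post-stimulus time histogram: firing rate (spikes/s) in each time bin,
    from spike times pooled across stimulus repetitions."""
    edges = np.arange(0.0, duration_ms + bin_ms, bin_ms) / 1000.0
    counts, _ = np.histogram(spike_times_s, bins=edges)
    return counts / (n_trials * bin_ms / 1000.0)

# Hypothetical spike times (s) pooled over 100 presentations of a 50-ms tone
spikes = np.array([0.002, 0.0021, 0.0025, 0.010, 0.031, 0.0455])
rates = psth(spikes, n_trials=100)
# response latency, peak onset rate, and adaptation are then read off `rates`
```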

  3. Identifying local characteristic lengths governing sound wave properties in solid foams

    NASA Astrophysics Data System (ADS)

    Tan Hoang, Minh; Perrot, Camille

    2013-02-01

    Identifying microscopic geometric properties and fluid flow through open-cell and partially closed-cell solid structures is a challenge for material science, in particular for the design of porous media used as sound absorbers in the building and transportation industries. We revisit recent literature data to identify the local characteristic lengths dominating the transport properties and sound-absorbing behavior of polyurethane foam samples by performing numerical homogenization simulations. Determining the characteristic sizes of the model requires porosity and permeability measurements in conjunction with ligament-length estimates from available scanning electron microscope images. We demonstrate that this description of the porous material, consistent with the critical-path picture following from percolation arguments, is widely applicable. This is an important step towards tuning the sound-proofing properties of complex materials.

  4. Auditory Space Perception in Left- and Right-Handers

    ERIC Educational Resources Information Center

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  5. Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.

    PubMed

    Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin

    2018-04-25

    Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, by using acoustic vector and infrared (IR) scanning sensors. Our system is comprised of three steps to estimate a ball motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using a 2D microelectromechanical system (MEMS) microphones and delay-and-sum beamforming is presented to estimate the firing position. The time and position of a ball in 3D space is determined from a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
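
    The delay-and-sum step named in this record can be sketched as follows. This is a generic far-field frequency-domain formulation, not the authors' implementation; the array geometry, sample rate, and steering-grid spacing in the usage example are illustrative assumptions:

```python
import numpy as np

def delay_and_sum(signals, mic_xy, angles_deg, fs, c=343.0):
    """Steer a planar microphone array over candidate azimuths and return
    the beamformer output power at each angle; the peak indicates the
    source bearing.

    signals : (n_mics, n_samples) recorded waveforms
    mic_xy  : (n_mics, 2) microphone positions in metres
    """
    n_mics, n_samples = signals.shape
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
    powers = []
    for az in np.deg2rad(np.asarray(angles_deg, dtype=float)):
        direction = np.array([np.cos(az), np.sin(az)])
        # Far-field plane-wave delay of each mic for this look direction.
        delays = mic_xy @ direction / c
        # Phase-compensate so signals from the look direction add coherently.
        steer = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        beam = (spectra * steer).sum(axis=0)
        powers.append(np.sum(np.abs(beam) ** 2))
    return np.array(powers)
```

    Scanning a grid of azimuths and taking the argmax of the returned power gives the estimated firing direction; range then follows from additional sensors, as in the record.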

  6. Radial wave crystals: radially periodic structures from anisotropic metamaterials for engineering acoustic or electromagnetic waves.

    PubMed

    Torrent, Daniel; Sánchez-Dehesa, José

    2009-08-07

    We demonstrate that metamaterials with anisotropic properties can be used to develop a new class of periodic structures that has been named radial wave crystals. They can be sonic or photonic, and wave propagation along the radial directions is obtained through Bloch states, as in usual sonic or photonic crystals. The band structure of the proposed structures can be tailored over a wide range to obtain novel wave phenomena. For example, it is shown that acoustical cavities based on radial sonic crystals can be employed as passive devices for beam forming or as dynamically orientated antennas for sound localization.

  7. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  8. Diversity of acoustic tracheal system and its role for directional hearing in crickets

    PubMed Central

    2013-01-01

    Background Sound localization in small insects can be a challenging task due to physical constraints in deriving sufficiently large interaural intensity differences (IIDs) between both ears. In crickets, sound source localization is achieved by a complex type of pressure difference receiver consisting of four potential sound inputs. Sound acts on the external side of two tympana but additionally reaches the internal tympanal surface via two external sound entrances. Conduction of internal sound is realized by the anatomical arrangement of the connecting tracheae. A key structure is a trachea coupling both ears, which is characterized by an enlarged part at its midline (i.e., the acoustic vesicle) accompanied by a thin membrane (septum). This facilitates directional sensitivity despite an unfavorable relationship between the wavelength of sound and body size. Here we studied the morphological differences of the acoustic tracheal system in 40 cricket species (Gryllidae, Mogoplistidae) and species of outgroup taxa (Gryllotalpidae, Rhaphidophoridae, Gryllacrididae) of the suborder Ensifera, comprising hearing and non-hearing species. Results We found a surprisingly high variation of acoustic tracheal systems, and almost all investigated species using intraspecific acoustic communication were characterized by an acoustic vesicle associated with a medial septum. The relative size of the acoustic vesicle - a structure most crucial for deriving high IIDs - implies an important role for sound localization. Most remarkable in this respect was the size difference of the acoustic vesicle between species; those with a more unfavorable ratio of body size to sound wavelength tend to exhibit a larger acoustic vesicle. On the other hand, secondary loss of acoustic signaling was nearly exclusively associated with the absence of both acoustic vesicle and septum. 
Conclusion The high diversity of acoustic tracheal morphology observed between species might reflect different steps in the evolution of the pressure difference receiver; with a precursor structure already present in ancestral non-hearing species. In addition, morphological transitions of the acoustic vesicle suggest a possible adaptive role for the generation of binaural directional cues. PMID:24131512

  9. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... any time it is deemed necessary to ensure the safety of life or property. (i) For all power boat races...

  10. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... it is deemed necessary to ensure the safety of life or property. (i) For all power boat races listed...

  11. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... any time it is deemed necessary to ensure the safety of life or property. (i) For all power boat races...

  12. Olivocochlear Efferent Control in Sound Localization and Experience-Dependent Learning

    PubMed Central

    Irving, Samuel; Moore, David R.; Liberman, M. Charles; Sumner, Christian J.

    2012-01-01

    Efferent auditory pathways have been implicated in sound localization and its plasticity. We examined the role of the olivocochlear system (OC) in horizontal sound localization by the ferret and in localization learning following unilateral earplugging. Under anesthesia, adult ferrets underwent olivocochlear bundle section at the floor of the fourth ventricle, either at the midline or laterally (left). Lesioned and control animals were trained to localize 1 s and 40 ms amplitude-roved broadband noise stimuli from one of 12 loudspeakers. Neither type of lesion affected normal localization accuracy. All ferrets then received a left earplug and were tested and trained over 10 d. The plug profoundly disrupted localization. Ferrets in the control and lateral lesion groups improved significantly during subsequent training on the 1 s stimulus. No improvement (learning) occurred in the midline lesion group. Markedly poorer performance and failure to learn were observed with the 40 ms stimulus in all groups. Plug removal resulted in a rapid resumption of normal localization in all animals. Insertion of a subsequent plug in the right ear produced similar results to left earplugging. Learning in the lateral lesion group was independent of the side of the lesion relative to the earplug. Lesions in all reported cases were verified histologically. The results suggest that the OC system is not needed for accurate localization, but that it is involved in relearning localization during unilateral conductive hearing loss. PMID:21325517

  13. On the relevance of source effects in geomagnetic pulsations for induction soundings

    NASA Astrophysics Data System (ADS)

    Neska, Anne; Tadeusz Reda, Jan; Leszek Neska, Mariusz; Petrovich Sumaruk, Yuri

    2018-03-01

    This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption, a precondition for the proper performance of these methods, is partly violated by the local nature of field line resonances, which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that, in spite of this, remote reference stations at quasi-global distances can be applied for the suppression of local correlated-noise effects in induction arrows in the geomagnetic pulsation range, and the reasons why are explained. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.

  14. Reverberation enhances onset dominance in sound localization.

    PubMed

    Stecker, G Christopher; Moore, Travis M

    2018-02-01

    Temporal variation in sensitivity to sound-localization cues was measured in anechoic conditions and in simulated reverberation using the temporal weighting function (TWF) paradigm [Stecker and Hafter (2002). J. Acoust. Soc. Am. 112, 1046-1057]. Listeners judged the locations of Gabor click trains (4 kHz center frequency, 5-ms interclick interval) presented from an array of loudspeakers spanning 360° azimuth. Targets ranged ±56.25° across trials. Individual clicks within each train varied by an additional ±11.25° to allow TWF calculation by multiple regression. In separate conditions, sounds were presented directly or in the presence of simulated reverberation: 13 orders of lateral reflection were computed for a 10 m × 10 m room (RT60 ≈ 300 ms) and mapped to the appropriate locations in the loudspeaker array. Results reveal a marked increase in perceptual weight applied to the initial click in reverberation, along with a reduction in the impact of late-arriving sound. In a second experiment, target stimuli were preceded by trains of "conditioner" sounds with or without reverberation. Effects were modest and limited to the first few clicks in a train, suggesting that impacts of reverberant pre-exposure on localization may be limited to the processing of information from early reflections.
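
    The TWF calculation by multiple regression described in this record can be sketched as follows: the listener's judged azimuth on each trial is regressed on the per-click azimuths, and the fitted coefficients (normalized) are the perceptual weights. The trial counts and weight values in the usage example are simulated illustrations, not data from the study:

```python
import numpy as np

def temporal_weighting_function(click_azimuths, responses):
    """Estimate per-click perceptual weights by multiple linear regression.

    click_azimuths : (n_trials, n_clicks) azimuth of each click in the train
    responses      : (n_trials,) judged azimuth per trial
    Returns weights normalized to sum to 1.
    """
    # Design matrix with an intercept column plus one column per click.
    X = np.column_stack([np.ones(len(responses)), click_azimuths])
    coefs, *_ = np.linalg.lstsq(X, responses, rcond=None)
    w = coefs[1:]                # drop the intercept
    return w / w.sum()
```

    The per-click jitter (±11.25° in the record) is what makes the individual coefficients identifiable despite all clicks sharing the trial's target azimuth.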

  15. [The underwater and airborne horizontal localization of sound by the northern fur seal].

    PubMed

    Babushina, E S; Poliakov, M A

    2004-01-01

    The accuracy of the underwater and airborne horizontal localization of different acoustic signals by the northern fur seal was investigated by the method of instrumental conditioned reflexes with food reinforcement. For pure-tone pulsed signals in the frequency range of 0.5-25 kHz, the minimum angles of sound localization at 75% correct responses corresponded to a sound-transducer azimuth of 6.5-7.5 degrees +/- 0.1-0.4 degrees underwater (at impulse durations of 3-90 ms) and of 3.5-5.5 degrees +/- 0.05-0.5 degrees in air (at impulse durations of 3-160 ms). The source of pulsed noise signals (of 3-ms duration) was localized with an accuracy of 3.0 degrees +/- 0.2 degrees underwater. The source of continuous (1-s duration) narrow-band (10% of center frequency) noise signals was localized in air with an accuracy of 2-5 degrees +/- 0.02-0.4 degrees, and of continuous broad-band (1-20 kHz) noise with an accuracy of 4.5 degrees +/- 0.2 degrees.

  16. CLIVAR Mode Water Dynamics Experiment (CLIMODE), Fall 2006 R/V Oceanus Voyage 434, November 16, 2006-December 3, 2006

    DTIC Science & Technology

    2007-12-01

    except for the dive zero time which needed to be programmed during the cruise when the deployment schedule dates were confirmed. _ ACM - Aanderaa ACM...guards bolted on to complete the frame prior to deployment. Sound Source - Sound sources were scheduled to be redeployed. Sound sources were originally...battery voltages and a vacuum. A +27 second time drift was noted and the time was reset. The sound source was scheduled to go to full power on November

  17. 77 FR 6954 - Special Local Regulations; Safety and Security Zones; Recurring Events in Captain of the Port...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10

    ... Events in Captain of the Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Final rule... Sector Long Island Sound Captain of the Port (COTP) Zone. These limited access areas include special... Sector Long Island Sound, telephone 203-468- 4544, email [email protected] . If you have questions...

  18. 33 CFR 100.121 - Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain's Cove Seaport...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY REGATTAS AND MARINE PARADES... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Swim Across the Sound, Long... the Federal Register, separate marine broadcasts and local notice to mariners. [USCG-2009-0395, 75 FR...

  19. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    PubMed Central

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  20. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    PubMed

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  1. Method and apparatus for ultrasonic doppler velocimetry using speed of sound and reflection mode pulsed wideband doppler

    DOEpatents

    Shekarriz, Alireza; Sheen, David M.

    2000-01-01

    According to the present invention, a method and apparatus rely upon tomographic measurement of the speed of sound and fluid velocity in a pipe. The invention provides a more accurate profile of velocity within flow fields where the speed of sound varies within the cross-section of the pipe. This profile is obtained by reconstruction of the velocity profile from the local speed of sound measurement simultaneously with the flow velocity. The method of the present invention is real-time tomographic ultrasonic Doppler velocimetry utilizing a plurality of ultrasonic transmission and reflection measurements along two orthogonal sets of parallel acoustic lines-of-sight. The fluid velocity profile and the acoustic velocity profile are determined by iteration between determining a fluid velocity profile and measuring local acoustic velocity until convergence is reached.
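
    The iterate-until-convergence scheme described in this record has the shape of an alternating fixed-point loop. The sketch below shows only that control flow; the two update callables stand in for the patent's actual reconstruction steps, which are not specified here:

```python
def solve_profiles(update_velocity, update_sound_speed, v0, c0,
                   tol=1e-8, max_iter=100):
    """Alternate between updating the flow-velocity estimate (given the
    current sound-speed estimate) and the sound-speed estimate (given the
    current flow) until both changes fall below tol.

    update_velocity(c) -> v and update_sound_speed(v) -> c are
    user-supplied reconstruction steps (placeholders here).
    """
    v, c = v0, c0
    for _ in range(max_iter):
        v_new = update_velocity(c)
        c_new = update_sound_speed(v_new)
        if abs(v_new - v) < tol and abs(c_new - c) < tol:
            return v_new, c_new
        v, c = v_new, c_new
    raise RuntimeError("iteration did not converge")
```

    In practice the two updates operate on whole profiles rather than scalars, but the convergence logic is the same.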

  2. Spin stability of sounding rocket secondary payloads following high velocity ejections

    NASA Astrophysics Data System (ADS)

    Nelson, Weston M.

    The Auroral Spatial Structures Probe (ASSP) mission is a sounding rocket mission studying solar energy input to space weather. ASSP requires the high velocity ejection (up to 50 m/s) of 6 secondary payloads, spin stabilized perpendicular to the ejection velocity. The proposed scientific instrumentation depends on a high degree of spin stability, requiring a maximum coning angle of less than 5°. It also requires that the spin axis be aligned within 25° of the local magnetic field lines. The maximum velocities of current ejection methods are typically less than 10 m/s, and often produce coning angles in excess of 20°. Because of this they do not meet the ASSP mission requirements. To meet these requirements a new ejection method is being developed by NASA Wallops Flight Facility. Success of the technique in meeting coning angle and B-field alignment requirements is evaluated herein by modeling secondary payload dynamic behavior using a 6-DOF dynamic simulation employing state space integration written in MATLAB. Simulation results showed that secondary payload mass balancing is the most important factor in meeting stability requirements. Secondary payload mass properties will be measured using an inverted torsion pendulum. If moment of inertia measurement errors can be reduced to 0.5%, it is possible to achieve mean coning and B-field alignment angles of 2.16° and 2.71°, respectively.

  3. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans.

    PubMed

    Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang

    2017-01-01

    The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationship between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. In later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.

  4. Elastic and thermal properties of the layered thermoelectrics BiOCuSe and LaOCuSe

    NASA Astrophysics Data System (ADS)

    Saha, S. K.; Dutta, G.

    2016-09-01

    We determine the elastic properties of the layered thermoelectrics BiOCuSe and LaOCuSe using first-principles density functional theory calculations. To assess their stability, we calculate six distinct elastic constants, all of which are positive, indicating mechanically stable tetragonal crystals. As elastic properties relate to the nature and the strength of the chemical bond, the latter is analyzed by means of real-space descriptors, such as the electron localization function (ELF) and Bader charge. From the elastic constants, a set of related properties, namely, bulk modulus, shear modulus, Young's modulus, sound velocity, Debye temperature, Grüneisen parameter, and thermal conductivity, are evaluated. Both materials are found to be ductile in nature rather than brittle. We find BiOCuSe to have a smaller sound velocity and, hence, within the accuracy of the used Slack's model, a smaller thermal conductivity than LaOCuSe. Our calculations also reveal that the elastic properties and the related lattice thermal transport of both materials exhibit a much larger anisotropy than their electronic band properties, which are known to be moderately anisotropic because of a moderate effective-electron-mass anisotropy. Finally, we determine the lattice dynamical properties, such as phonon dispersion, atomic displacements, and mode Grüneisen parameters, in order to correlate the elastic response, chemical bonding, and lattice dynamics.
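
    The chain from elastic moduli to sound velocity and Debye temperature mentioned in this record follows the standard polycrystalline-average relations (textbook formulas, not quoted from the paper):

```latex
v_l = \sqrt{\frac{B + \tfrac{4}{3}G}{\rho}}, \qquad
v_t = \sqrt{\frac{G}{\rho}}, \qquad
v_m = \left[\frac{1}{3}\left(\frac{2}{v_t^{3}} + \frac{1}{v_l^{3}}\right)\right]^{-1/3},
\qquad
\Theta_D = \frac{\hbar}{k_B}\,\bigl(6\pi^{2} n\bigr)^{1/3}\, v_m ,
```

    where $B$ and $G$ are the bulk and shear moduli, $\rho$ the mass density, $v_l$ and $v_t$ the longitudinal and transverse sound velocities, $v_m$ their average, and $n$ the atomic number density.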

  5. Lattice dynamics and the nature of structural transitions in organolead halide perovskites

    DOE PAGES

    Comin, Riccardo; Crawford, Michael K.; Said, Ayman H.; ...

    2016-09-09

    Organolead halide perovskites are a family of hybrid organic-inorganic compounds whose remarkable optoelectronic properties have been under intensive scrutiny in recent years. Here we use inelastic X-ray scattering to study low-energy lattice excitations in single crystals of methylammonium lead iodide and bromide perovskites. Our findings confirm the displacive nature of the cubic-to-tetragonal phase transition, which is further shown, using neutron and X-ray diffraction, to be close to a tricritical point. The experimental sound speed, around 100-200 m/s, suggests that electron-phonon scattering is likely a limiting factor for further improvements in carrier mobility. Lastly, we detect quasistatic symmetry-breaking nanodomains persisting well into the high-temperature cubic phase, possibly stabilized by local defects. These findings reveal key structural properties of these materials, but also bear important implications for carrier dynamics across an extended temperature range relevant for photovoltaic applications.

  6. Dynamic calibration of fast-response probes in low-pressure shock tubes

    NASA Astrophysics Data System (ADS)

    Persico, G.; Gaetani, P.; Guardone, A.

    2005-09-01

    Shock tube flows resulting from the incomplete burst of the diaphragm are investigated in connection with the dynamic calibration of fast-response pressure probes. As a result of the partial opening of the diaphragm, pressure disturbances are observed past the shock wave and the measured total pressure profile deviates from the envisaged step signal required by the calibration process. Pressure oscillations are generated as the initially normal shock wave diffracts from the diaphragm's orifice and reflects on the shock tube walls, with the lowest local frequency roughly equal to the ratio of the sound speed in the perturbed region to the shock tube diameter. The energy integral of the perturbations decreases with increasing distance from the diaphragm, as the diffracted leading shock and downwind reflections coalesce into a single normal shock. A procedure is proposed to calibrate fast-response pressure probes downwind of a partially opened shock tube diaphragm.

  7. Ultrafast switching of valence and generation of coherent acoustic phonons in semiconducting rare-earth monosulfides

    NASA Astrophysics Data System (ADS)

    Punpongjareorn, Napat; He, Xing; Tang, Zhongjia; Guloy, Arnold M.; Yang, Ding-Shyue

    2017-08-01

    We report on the ultrafast carrier dynamics and generation of coherent acoustic phonons in YbS, a semiconducting rare-earth monochalcogenide, using two-color pump-probe reflectivity. Compared to the carrier relaxation processes and lifetimes of conventional semiconductors, recombination of photoexcited electrons with holes in localized f orbitals is found to take place rapidly with a density-independent time constant of <500 fs in YbS. Such carrier annihilation signifies the unique and ultrafast nature of valence restoration of ytterbium ions after femtosecond photoexcitation switching. Following transfer of the absorbed energy to the lattice, coherent acoustic phonons emerge on the picosecond timescale as a result of the thermal strain in the photoexcited region. By analyzing the electronic and structural dynamics, we obtain the physical properties of YbS including its two-photon absorption and thermooptic coefficients, the period and decay time of the coherent oscillation, and the sound velocity.

  8. Lattice dynamics and the nature of structural transitions in organolead halide perovskites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comin, Riccardo; Crawford, Michael K.; Said, Ayman H.

    Organolead halide perovskites are a family of hybrid organic-inorganic compounds whose remarkable optoelectronic properties have been under intensive scrutiny in recent years. Here we use inelastic X-ray scattering to study low-energy lattice excitations in single crystals of methylammonium lead iodide and bromide perovskites. Our findings confirm the displacive nature of the cubic-to-tetragonal phase transition, which is further shown, using neutron and X-ray diffraction, to be close to a tricritical point. The experimental sound speed, around 100-200 m/s, suggests that electron-phonon scattering is likely a limiting factor for further improvements in carrier mobility. Lastly, we detect quasistatic symmetry-breaking nanodomains persisting well into the high-temperature cubic phase, possibly stabilized by local defects. These findings reveal key structural properties of these materials, but also bear important implications for carrier dynamics across an extended temperature range relevant for photovoltaic applications.

  9. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    ERIC Educational Resources Information Center

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  10. The Impact of Masker Fringe and Masker Spatial Uncertainty on Sound Localization

    DTIC Science & Technology

    2010-09-01

    spatial uncertainty on sound localization and to examine how such effects might be related to binaural detection and informational masking. 2 Methods...results from the binaural detection literature and suggest that a longer duration fringe provides a more robust context against which to judge the...results from the binaural detection literature, which suggest that forward masker fringe provides a greater benefit than backward masker fringe [2]. The

  11. Dragon Ears airborne acoustic array: CSP analysis applied to cross array to compute real-time 2D acoustic sound field

    NASA Astrophysics Data System (ADS)

    Cerwin, Steve; Barnes, Julie; Kell, Scott; Walters, Mark

    2003-09-01

    This paper describes the development and application of a novel method for real-time solid-angle acoustic direction finding using two 8-element orthogonal microphone arrays. The prototype system was intended for localization and signature recognition of ground-based sounds from a small UAV. Recent advances in computing speed have enabled the implementation of microphone arrays in many audio applications. Still, the real-time presentation of a two-dimensional sound field for the purpose of audio target localization is computationally challenging. To overcome this challenge, a cross-power spectrum phase (CSP) technique was applied to each 8-element arm of a 16-element cross array to provide audio target localization. In this paper, we describe the technique and compare it with two other commonly used techniques: the Cross-Spectral Matrix and MUSIC. The results show that the CSP technique applied to two 8-element orthogonal arrays provides a computationally efficient solution with reasonable accuracy and tolerable artifacts, sufficient for real-time applications. Additional topics include the development of a synchronized 16-channel transmitter and receiver to relay the airborne data to the ground-based processor, and test data demonstrating both ground-mounted operation and airborne localization of ground-based gunshots and loud engine sounds.
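    The CSP technique described above is what the array-processing literature also calls GCC-PHAT: the cross-power spectrum of two microphone signals is whitened so that only phase, and hence pure delay, information survives. A minimal sketch of the idea (not the Dragon Ears implementation; the signal length and the synthetic 25-sample delay are illustrative values):

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Delay of `sig` relative to `ref`, in seconds (positive if `sig` lags),
    estimated from the cross-power spectrum phase (PHAT weighting)."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                # PHAT: discard magnitude, keep phase
    cc = np.fft.irfft(R, n=n)             # whitened cross-correlation
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Synthetic check: the same noise burst reaches the second microphone
# 25 samples later than the first.
fs, delay = 16000, 25
burst = np.random.default_rng(0).standard_normal(1024)
mic1 = np.concatenate((burst, np.zeros(delay)))
mic2 = np.concatenate((np.zeros(delay), burst))
tau = gcc_phat(mic2, mic1, fs)            # recovers delay / fs
```

    Pairwise delays of this kind along each 8-element arm give a bearing estimate for that arm; combining the two orthogonal arms yields the solid-angle direction.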

  12. Substance Abuse and HIV/AIDS in the Caribbean.

    PubMed

    Angulo-Arreola, Iliana Alexandra; Bastos, Francisco I; Strathdee, Steffanie A

    The Caribbean and Central America represent a formidable challenge for researchers and policy makers in the HIV field, due to their pronounced heterogeneity in terms of social, economic, and cultural contexts and the different courses the HIV epidemic has followed in the region. Such contrasting contexts and epidemics can be exemplified by 2 countries that share the island of Hispaniola: French Creole-speaking Haiti and the Spanish-speaking Dominican Republic. Haiti has experienced one of the worst HIV epidemics outside of sub-Saharan Africa. Following a protracted economic and social crisis, recently aggravated by a devastating earthquake, the local HIV epidemic could experience resurgence. The region, strategically located on the way between coca-producing countries and the profitable North American markets, has been a transshipment area for years. Notwithstanding, the impact of such routes on local drug scenes has been very heterogeneous and dynamic, depending on a combination of local mores, drug enforcement activities, and the broad social and political context. Injecting drug use remains rare in the region, but local drug scenes are dynamic under the influence of increasing mobility of people and goods to and from North and South America, growing tourism and commerce, and prostitution. The multiple impacts of the recent economic and social crisis, as well as the influence of drug-trafficking routes across the Caribbean and other Latin American countries, require a sustained effort to track changes in the HIV risk environment to inform sound drug policies and initiatives to minimize drug-related harms in the region.

  13. Resonant modal group theory of membrane-type acoustical metamaterials for low-frequency sound attenuation

    NASA Astrophysics Data System (ADS)

    Ma, Fuyin; Wu, Jiu Hui; Huang, Meng

    2015-09-01

    To overcome the influence of structural resonance on continuous structures and obtain a lightweight thin-layer structure that can effectively isolate low-frequency noise, an elastic membrane structure is proposed. In the low-frequency range below 500 Hz, the sound transmission loss (STL) of this membrane-type structure is much higher than that of EVA (ethylene-vinyl acetate copolymer), the sound-insulation material currently used in vehicles, so the membrane-type metamaterial structure could replace EVA in practical engineering. Based on the band structure, the modal shapes, and sound transmission simulations, the sound insulation mechanism of the designed membrane-type acoustic metamaterial is analyzed from a new perspective and validated experimentally. It is suggested that, in the frequency range above 200 Hz, the sound insulation effect of this membrane-mass structure is due principally not to the low-order locally resonant mode of the mass block, but to the continuous vertical resonant modes of the localized membrane. On the basis of this physical property, a resonant modal group theory is proposed in this paper. In addition, the sound insulation mechanisms of the membrane-type structure and of a thin plate structure are unified by a membrane/plate resonance theory.
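    For context, membrane metamaterials such as the one above are usually judged against the normal-incidence mass law for a limp panel, under which STL rises only about 6 dB per doubling of frequency or of mass. A minimal numerical sketch (the 2 kg/m² surface density and the air constants are assumed illustration values, not parameters from the paper):

```python
import numpy as np

def mass_law_tl(f, m_surf, rho0=1.21, c0=343.0):
    """Normal-incidence mass-law transmission loss (dB) of a limp panel
    with surface density m_surf (kg/m^2) in air (density rho0, speed c0)."""
    z = np.pi * f * m_surf / (rho0 * c0)
    return 10.0 * np.log10(1.0 + z**2)

for f in (125.0, 250.0, 500.0):           # the low-frequency range of interest
    print(f"{f:5.0f} Hz: {mass_law_tl(f, 2.0):5.1f} dB")
```

    A locally resonant membrane can exceed this baseline near its resonances without adding mass, which is the design motivation above.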

  14. Vibroacoustic study of a point-constrained plate mounted in a duct

    NASA Astrophysics Data System (ADS)

    Sapkale, Swapnil L.; Sucheendran, Mahesh M.; Gupta, Shakti S.; Kanade, Shantanu V.

    2018-04-01

    The vibroacoustic study of the interaction of sound with a point-constrained, simply-supported square plate is considered in this paper. The plate is mounted flush on one of the walls of an infinite duct of rectangular cross section and is backed by a cavity. The plate response and the acoustic field are predicted by solving the coupled governing equations using modal expansion with the relevant eigenmodes of the plate dynamics and acoustic fields in the duct and cavity. By varying the location of the point constraint, the frequency characteristics of the transmission loss in the duct can be tuned. The point constraint can also alter the amplitude and spectral characteristics of the plate's response. Interestingly, some new peaks are observed in the response because of the excitation of unsymmetric modes which are otherwise dormant. The mode-localization phenomenon, in which vibration is confined to specific regions of the plate, is observed for selected constraint points.

  15. Electrophysiological models of neural processing.

    PubMed

    Nelson, Mark E

    2011-01-01

    The brain is an amazing information processing system that allows organisms to adaptively monitor and control complex dynamic interactions with their environment across multiple spatial and temporal scales. Mathematical modeling and computer simulation techniques have become essential tools in understanding diverse aspects of neural processing, ranging from sub-millisecond temporal coding in the sound localization circuitry of barn owls to long-term memory storage and retrieval in humans that can span decades. The processing capabilities of individual neurons lie at the core of these models, with the emphasis shifting upward and downward across different levels of biological organization depending on the nature of the questions being addressed. This review provides an introduction to the techniques for constructing biophysically based models of individual neurons and local networks. Topics include Hodgkin-Huxley-type models of macroscopic membrane currents, Markov models of individual ion-channel currents, compartmental models of neuronal morphology, and network models involving synaptic interactions among multiple neurons.
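    The Hodgkin-Huxley-type models mentioned above can be made concrete in a few lines: the membrane potential evolves under voltage-gated sodium, potassium, and leak currents, each governed by first-order gating kinetics. A minimal forward-Euler integration with the textbook squid-axon parameters (the 10 µA/cm² step stimulus and 50 ms window are arbitrary illustration choices):

```python
import numpy as np

# Classic squid-axon parameters; units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def a_m(V): return 1.0 if abs(V + 40) < 1e-7 else 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.1 if abs(V + 55) < 1e-7 else 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T, I_ext = 0.01, 50.0, 10.0            # ms, ms, uA/cm^2 (suprathreshold step)
V, m, h, n = -65.0, 0.053, 0.596, 0.317    # resting state
spikes, above = 0, False
for _ in range(int(T / dt)):
    I_ion = g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K) + g_L * (V - E_L)
    V += dt * (I_ext - I_ion) / C_m        # membrane equation, forward Euler
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V > 0 and not above:                # count upward zero crossings as spikes
        spikes += 1
    above = V > 0
print("spikes in 50 ms:", spikes)
```

    A suprathreshold current step drives repetitive firing; sweeping `I_ext` reproduces the model's characteristic frequency-current curve.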

  16. Design of laser monitoring and sound localization system

    NASA Astrophysics Data System (ADS)

    Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang

    2013-08-01

    In this paper, a novel design for a laser monitoring and sound localization system is proposed. It uses a laser to monitor indoor conversation and locate its position. At present, most laser monitors in China, whether used in laboratories or in instruments, employ a photodiode or phototransistor as the detector. At the receivers of those systems, the light beam is adjusted so that only part of the detector window of the photodiode or phototransistor receives it; vibration of the monitored window deflects the reflection from its original path, shifting the imaging spot on the detector. Such a method is limited, however, both because it admits considerable stray light into the receiver and because only a single photocurrent output can be obtained. A new method based on a quadrant detector is therefore proposed. It uses the relation of the optical intensity integrals among the four quadrants to locate the imaging spot, which eliminates background disturbance and yields two-dimensional spot-vibration data. The principle of the whole system is as follows. A collimated laser beam is reflected from a window vibrating under the influence of the sound source, so the reflected beam is modulated by that source. The optical signals are collected by quadrant detectors and processed by photoelectric converters and the corresponding circuits, and the speech signals are eventually reconstructed. In addition, sound source localization is implemented by detecting three different reflected beams simultaneously. Indoor mathematical models based on the principle of Time Difference Of Arrival (TDOA) are established to calculate the two-dimensional coordinates of the sound source.
Experiments showed that this system is able to monitor the indoor sound source beyond 15 meters with a high quality of speech reconstruction and to locate the sound source position accurately.
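    The TDOA localization step described above reduces to this: given arrival-time differences of the sound at three known sensor positions, find the point whose predicted differences best match the measurements. The brute-force grid search below is a sketch of the principle only (the sensor layout, source position, and noise-free delays are all assumed, not the paper's indoor model):

```python
import numpy as np

c = 343.0                                   # speed of sound in air, m/s
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
source = np.array([0.3, 0.7])               # ground truth, to be recovered

# Synthesize noise-free TDOAs relative to sensor 0
d = np.linalg.norm(sensors - source, axis=1)
tdoa = (d[1:] - d[0]) / c

# Grid search: minimize least-squares misfit between predicted and measured TDOAs
xs, ys = np.meshgrid(np.linspace(-0.5, 1.5, 201), np.linspace(-0.5, 1.5, 201))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)             # candidate positions
dd = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
misfit = np.sum(((dd[:, 1:] - dd[:, :1]) / c - tdoa) ** 2, axis=1)
best = pts[np.argmin(misfit)]
print("estimated source:", best)
```

    In practice the TDOAs come from cross-correlating the reconstructed signals of the three reflected beams, and measurement noise broadens the misfit minimum.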

  17. Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences.

    PubMed

    Nilsson, Mats E; Schenkman, Bo N

    2016-02-01

    Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system's ability to suppress distracting sound reflections, whereas echolocation would benefit from "unsuppressing" these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted-age-matched (mean age = 54 y), and 42 sighted-young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group's superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also increased ability to unsuppress lagging clicks. This may be related to the blind person's experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people's experience may offset age-related decline in ITD sensitivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
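    The two binaural cues compared above are straightforward to extract from a stereo signal: ILD as the broadband level ratio in dB, ITD as the lag of the interaural cross-correlation peak. A synthetic sketch (the 6 dB level difference and ~0.45 ms lag are illustrative stimulus values, not thresholds from the study):

```python
import numpy as np

fs, itd_samples, ild_db = 44100, 20, 6.0    # assumed stimulus parameters

rng = np.random.default_rng(1)
click = rng.standard_normal(256) * np.hanning(256)   # windowed noise "click"
left = np.concatenate((click, np.zeros(itd_samples)))
right = np.concatenate((np.zeros(itd_samples), click)) * 10 ** (-ild_db / 20)

# ILD: interaural level difference in dB
rms = lambda x: np.sqrt(np.mean(x ** 2))
ild = 20 * np.log10(rms(left) / rms(right))

# ITD: lag of the interaural cross-correlation peak
cc = np.correlate(left, right, mode="full")
lag = np.argmax(cc) - (len(right) - 1)
itd = -lag / fs               # positive: right ear lags, source toward the left
print(f"ILD = {ild:.1f} dB, ITD = {itd * 1e6:.0f} us")
```

    Applying the same extraction to the lagging click of a click pair illustrates why lag suppression makes those cues harder to use.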

  18. Snap Your Fingers! An ERP/sLORETA Study Investigating Implicit Processing of Self- vs. Other-Related Movement Sounds Using the Passive Oddball Paradigm

    PubMed Central

    Justen, Christoph; Herbert, Cornelia

    2016-01-01

    So far, neurophysiological studies have investigated implicit and explicit self-related processing particularly for self-related stimuli such as one's own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatio-temporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as additional control. For self- vs. other-related finger snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/MMN and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas such as the supplementary motor area (SMA) in the N2a/mismatch negativity (MMN) as well as the P3 time window during processing of self- and other-related finger snapping sounds. In contrast, brain regions associated with self-related processing [e.g., right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation while listening passively to low (500 Hz) and high (1000 Hz) pure tones. 
Taken together, the current results indicate (1) a specific role of motor regions such as SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3) their differential temporal activation during deviance (N2a/MMN – ACC/PCC) and target detection (P3 – IPL) of self- vs. other-related movement sounds. PMID:27777557

  19. Sound field measurement in a double layer cavitation cluster by rugged miniature needle hydrophones.

    PubMed

    Koch, Christian

    2016-03-01

    During multi-bubble cavitation the bubbles tend to organize themselves into clusters, and thus understanding the properties and dynamics of clustering is essential for controlling technical applications of cavitation. Sound field measurements are a potential technique for providing valuable experimental information about the status of cavitation clouds. Using purpose-made, rugged, wide-band, and small-sized needle hydrophones, sound field measurements in bubble clusters were performed, and time-dependent sound pressure waveforms were acquired and analyzed in the frequency domain up to 20 MHz. The cavitation clusters were synchronously observed by an electron multiplying charge-coupled device (EMCCD) camera, and the relation between the sound field measurements and cluster behaviour was investigated. Depending on the driving power, three ranges could be identified and characteristic properties assigned. At low power settings, no transient cavitation and little or no stable cavitation activity is observed. The medium range is characterized by strong pressure peaks and various bubble cluster forms. At high power a stable double layer was observed, which grew with further increasing power and became quite dynamic. The sound field was irregular and the fundamental at the driving frequency decreased. Between the bubble clouds, completely different sound field properties were found in comparison to those within the clouds, where the cavitation activity is high: in between, the sound field pressure amplitude was quite small and no collapses were detected. Copyright © 2015. Published by Elsevier B.V.

  20. Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.

    2008-01-01

    In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. The NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. 
One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
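    The CDL-to-NetCDF round trip described above looks like the following. The skeleton is purely illustrative: the dimension and variable names are hypothetical stand-ins, not the actual AWIPS sounding template.

```
netcdf composite_sounding {        // illustrative CDL skeleton only
dimensions:
    recNum = UNLIMITED ;           // one record per sounding
    manLevel = 22 ;                // mandatory pressure levels
    staNameLen = 4 ;               // four-character site identifier
variables:
    char staName(recNum, staNameLen) ;
    float prMan(recNum, manLevel) ;
        prMan:units = "hectopascals" ;
    float tpMan(recNum, manLevel) ;
        tpMan:units = "kelvin" ;
}
```

    `ncgen -o composite_sounding.nc composite_sounding.cdl` builds the binary NetCDF file, and `ncdump composite_sounding.nc` inverts the conversion.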

  1. Optical microphone

    DOEpatents

    Veligdan, James T.

    2000-01-11

    An optical microphone includes a laser and beam splitter cooperating therewith for splitting a laser beam into a reference beam and a signal beam. A reflecting sensor receives the signal beam and reflects it in a plurality of reflections through sound pressure waves. A photodetector receives both the reference beam and reflected signal beam for heterodyning thereof to produce an acoustic signal for the sound waves. The sound waves vary the local refractive index in the path of the signal beam which experiences a Doppler frequency shift directly analogous with the sound waves.

  2. Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object.

    PubMed

    Yamasaki, Daiki; Miyoshi, Kiyofumi; Altmann, Christian F; Ashida, Hiroshi

    2018-07-01

    Despite accumulating evidence for the spatial rule governing cross-modal interaction, whereby interaction depends on the spatial consistency of stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sounds) presented from the front and rear space of the body impact the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size-adjustment task (Experiment 3) on visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the size of the spatially consistent looming visual stimulus, but not of the spatially inconsistent or the receding visual stimulus. The receding sound had no significant effect on vision. Our results revealed that looming sound alters dynamic visual size perception depending on the consistency of the approaching quality and the front-rear spatial location of the audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.

  3. Coastal Vulnerability to Sea Level Rise and Erosion in Northwest Alaska (Invited)

    NASA Astrophysics Data System (ADS)

    Gorokhovich, Y.; Leiserowitz, A.

    2009-12-01

    Northwest Alaska is experiencing significant climate change and human impacts. The study area includes the coastal zone of Kotzebue Sound and the Chukchi Sea and provides the local population (predominantly Inupiaq Eskimo) with critical subsistence resources of meat, fish, berries, herbs, and wood. The geomorphology of the coast includes barrier islands, inlets, estuaries, deltas, cliffs, bluffs, and beaches that host modern settlements and infrastructure. Coastal dynamics and sea-level rise are contributing to erosion, intermittent erosion/accretion patterns, landslides, slumps and coastal retreat. These factors are causing the sedimentation of deltas and lagoons, and changing local bathymetry, morphological parameters of beaches and underwater slopes, rates of coastal dynamics, and turbidity and nutrient cycling in coastal waters. This study is constructing vulnerability maps to help local people and federal officials understand the potential consequences of sea-level rise and coastal erosion on local infrastructure, subsistence resources, and culturally important sites. A lack of complete and uniform data (in terms of methods of collection, geographic scale and spatial resolution) creates an additional level of uncertainty that complicates geographic analysis. These difficulties were overcome by spatial modeling with selected spatial resolution using extrapolation methods. Data include subsistence resource maps obtained using Participatory GIS with local hunters and elders, geological and geographic data on coastal dynamics from satellite imagery, aerial photos, bathymetry and topographic maps, and digital elevation models. These data were classified and ranked according to the level of coastal vulnerability (Figure 1). The resulting qualitative multicriteria model helps to identify the coastal areas with the greatest vulnerability to coastal erosion and the potential loss of subsistence resources. Acknowledgements: Dr. Ron Abileah (private consultant, jOmegak) helped in the preliminary analysis of Landsat imagery; Mr. Alex Whiting provided valuable information on subsistence resources in the Kotzebue region; and hunters and elders of the villages of Kivalina, Kotzebue, Selawik and Deering provided input to the GIS database on subsistence resources.

  4. Performance on Tests of Central Auditory Processing by Individuals Exposed to High-Intensity Blasts

    DTIC Science & Technology

    2012-07-01

    percent (gap detected on at least four of the six presentations), with all longer durations receiving a score greater than 50 percent. Binaural ...Processing and Sound Localization Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. The...Masking Level Difference (MLD) test evaluates the integrity of the earliest sites of binaural comparison and sensitivity to interaural phase in the

  5. Speech Understanding and Sound Source Localization by Cochlear Implant Listeners Using a Pinna-Effect Imitating Microphone and an Adaptive Beamformer.

    PubMed

    Dorman, Michael F; Natale, Sarah; Loiselle, Louise

    2018-03-01

    Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. 
In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better, than scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet. American Academy of Audiology
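    The dependent measure in Experiment 2, root-mean-square localization error over the 13-loudspeaker, 180° arc, reduces to a one-line computation; the response pattern below is invented purely for illustration:

```python
import numpy as np

speakers = np.arange(-90, 91, 15)            # 13 loudspeakers across a 180 deg arc

# Hypothetical trial errors: correct on 9 trials, one speaker (15 deg) off on 4
errors_deg = np.array([0.0] * 9 + [15.0] * 4)
rms_error = np.sqrt(np.mean(errors_deg ** 2))
print(f"RMS localization error: {rms_error:.2f} deg")
```

    Chance performance on such an array yields a much larger RMS error than near-perfect responding, so the measure spreads listeners and microphone conditions apart cleanly.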

  6. A satellite-based multichannel infrared radiometer to sound the atmosphere

    NASA Technical Reports Server (NTRS)

    Esplin, Roy W.; Batty, J. Clair; Jensen, Mark; McLain, Dave; Jensen, Scott; Stauder, John; Stump, Charles W.; Roettker, William A.; Vanek, Michael D.

    1995-01-01

    This paper describes a 12-channel infrared radiometer with the acronym SABER (Sounding of the Atmosphere using Broadband Emission Radiometry) that has been selected by NASA to fly on the TIMED (Thermosphere-Ionosphere-Mesosphere Energetics and Dynamics) mission.

  7. The auditory and non-auditory brain areas involved in tinnitus. An emergent property of multiple parallel overlapping subnetworks

    PubMed Central

    Vanneste, Sven; De Ridder, Dirk

    2012-01-01

    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, the lateralization, the tinnitus type (pure tone, noise-like) and associated emotional components, such as distress and mood changes. Source localization of quantitative electroencephalography (qEEG) data demonstrate the involvement of auditory brain areas as well as several non-auditory brain areas such as the anterior cingulate cortex (dorsal and subgenual), auditory cortex (primary and secondary), dorsal lateral prefrontal cortex, insula, supplementary motor area, orbitofrontal cortex (including the inferior frontal gyrus), parahippocampus, posterior cingulate cortex and the precuneus, in different aspects of tinnitus. Explaining these non-auditory brain areas as constituents of separable subnetworks, each reflecting a specific aspect of the tinnitus percept increases the explanatory power of the non-auditory brain areas involvement in tinnitus. Thus, the unified percept of tinnitus can be considered an emergent property of multiple parallel dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. PMID:22586375

  8. Quantum heat waves in a one-dimensional condensate

    NASA Astrophysics Data System (ADS)

    Agarwal, Kartiek; Dalla Torre, Emanuele G.; Schmiedmayer, Jörg; Demler, Eugene

    2017-05-01

    We study the dynamics of phase relaxation between a pair of one-dimensional condensates created by a bi-directional, supersonic `unzipping' of a finite single condensate. We find that the system fractures into different extensive chunks of space-time, within which correlations appear thermal but correspond to different effective temperatures. Coherences between different eigen-modes are crucial for understanding the development of such thermal correlations; at no point in time can our system be described by a generalized Gibbs' ensemble despite nearly always appearing locally thermal. We rationalize a picture of propagating fronts of hot and cold sound waves, populated at effective, relativistically red- and blue-shifted temperatures to intuitively explain our findings. The disparity between these hot and cold temperatures vanishes for the case of instantaneous splitting but diverges in the limit where the splitting velocity approaches the speed of sound; in this limit, a sonic boom occurs wherein the system is excited only along an infinitely narrow, and infinitely hot beam. We expect our findings to apply generally to the study of superluminal perturbations in systems with emergent Lorentz symmetry.

  9. Near-infrared hyperspectral imaging of water evaporation dynamics for early detection of incipient caries.

    PubMed

    Usenik, Peter; Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2014-10-01

    Incipient caries is characterized by demineralization of the tooth enamel, reflected in an increased porosity of the enamel structure. As a result, the demineralized enamel may contain an increased amount of water and exhibit different water evaporation dynamics than the sound enamel. The objective of this paper is to assess the applicability of the water evaporation dynamics of sound and demineralized enamel for detection and quantification of incipient caries using near-infrared hyperspectral imaging. The time lapse of water evaporation from enamel samples with artificial and natural caries lesions of different stages was imaged by a near-infrared hyperspectral imaging system. Partial least squares regression was used to predict the water content from the acquired spectra. The water evaporation dynamics was characterized by a first-order logarithmic drying model, and the calculated time constants of the drying model were used as the discriminative feature. The conducted measurements showed that demineralized enamel contains more water and exhibits significantly faster water evaporation than the sound enamel. By appropriate modelling of the water evaporation process from the enamel surface, the contrast between the sound and demineralized enamel observed in the individual near-infrared spectral images can be substantially enhanced. The presented results indicate that near-infrared based prediction of water content combined with an appropriate drying model presents a strong foundation for the development of novel diagnostic tools for incipient caries detection. The results of the study enhance the understanding of the water evaporation process from sound and demineralized enamel and have significant implications for the detection of incipient caries by near-infrared hyperspectral imaging. Copyright © 2014 Elsevier Ltd. All rights reserved.
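    The drying-model step can be illustrated with a toy fit. Assuming the first-order model takes the usual exponential form W(t) = W_inf + (W_0 - W_inf) exp(-t/tau), the discriminative time constant tau follows from the slope of the logarithm of the drying curve. All numbers below are synthetic, not values from the study:

```python
import numpy as np

# Synthetic first-order drying curve (water content vs. time)
t = np.linspace(0.0, 120.0, 60)              # seconds (assumed time scale)
tau_true, w0, w_inf = 30.0, 1.0, 0.2
w = w_inf + (w0 - w_inf) * np.exp(-t / tau_true)

# The model is linear in log space: log(W - W_inf) = log(W0 - W_inf) - t/tau,
# so the time constant follows from a straight-line fit.
slope, _ = np.polyfit(t, np.log(w - w_inf), 1)
tau_est = -1.0 / slope                       # recovers tau_true
print(f"estimated time constant: {tau_est:.1f} s")
```

    Fitting this per pixel of the hyperspectral water-content map would yield a time-constant image in which faster-drying (smaller tau) demineralized regions stand out.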

  10. Flexible, rapid and automatic neocortical word form acquisition mechanism in children as revealed by neuromagnetic brain response dynamics.

    PubMed

    Partanen, Eino; Leminen, Alina; de Paoli, Stine; Bundgaard, Anette; Kingo, Osman Skjold; Krøjgaard, Peter; Shtyrov, Yury

    2017-07-15

    Children learn new words and word forms with ease, often acquiring a new word after very few repetitions. Recent neurophysiological research on word form acquisition in adults indicates that novel words can be acquired within minutes of repetitive exposure to them, regardless of the individual's focused attention on the speech input. Although it is well-known that children surpass adults in language acquisition, the developmental aspects of such rapid and automatic neural acquisition mechanisms remain unexplored. To address this open question, we used magnetoencephalography (MEG) to scrutinise brain dynamics elicited by spoken words and word-like sounds in healthy monolingual (Danish) children throughout a 20-min repetitive passive exposure session. We found rapid neural dynamics manifested as an enhancement of early (~100 ms) brain activity over the short exposure session, with distinct spatiotemporal patterns for different novel sounds. For novel Danish word forms, signs of such enhancement were seen in the left temporal regions only, suggesting reliance on pre-existing language circuits for acquisition of novel word forms with native phonology. In contrast, exposure both to novel word forms with non-native phonology and to novel non-speech sounds led to activity enhancement in both left and right hemispheres, suggesting that more widespread cortical networks contribute to the build-up of memory traces for non-native and non-speech sounds. Similar studies in adults have previously reported more sluggish (~15-25 min, as opposed to 4 min in the present study) or non-existent neural dynamics for non-native sound acquisition, which might be indicative of a higher degree of plasticity in the children's brain. Overall, the results indicate a rapid and highly plastic mechanism for a dynamic build-up of memory traces for novel acoustic information in the children's brain that operates automatically and recruits bilateral temporal cortical circuits. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Soliton-sound interactions in quasi-one-dimensional Bose-Einstein condensates.

    PubMed

    Parker, N G; Proukakis, N P; Leadbeater, M; Adams, C S

    2003-06-06

    Longitudinal confinement of dark solitons in quasi-one-dimensional Bose-Einstein condensates leads to sound emission and reabsorption. We perform quantitative studies of the dynamics of a soliton oscillating in a tight dimple trap, embedded in a weaker harmonic trap. The dimple depth provides a sensitive handle to control the soliton-sound interaction. In the limit of no reabsorption, the power radiated is found to be proportional to the soliton acceleration squared. An experiment is proposed to detect sound emission as a change in amplitude and frequency of soliton oscillations.

  12. Energy localization and frequency analysis in the locust ear.

    PubMed

    Malkin, Robert; McDonagh, Thomas R; Mhatre, Natasha; Scott, Thomas S; Robert, Daniel

    2014-01-06

    Animal ears are exquisitely adapted to capture sound energy and perform signal analysis. Studying the ear of the locust, we show how frequency signal analysis can be performed solely by using the structural features of the tympanum. Incident sound waves generate mechanical vibrational waves that travel across the tympanum. These waves shoal in a tsunami-like fashion, resulting in energy localization that focuses vibrations onto the mechanosensory neurons in a frequency-dependent manner. Using finite element analysis, we demonstrate that two mechanical properties of the locust tympanum, distributed thickness and tension, are necessary and sufficient to generate frequency-dependent energy localization.

  13. Joint inversion for transponder localization and sound-speed profile temporal variation in high-precision acoustic surveys.

    PubMed

    Li, Zhao; Dosso, Stan E; Sun, Dajun

    2016-07-01

    This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.
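    The joint estimation described above can be sketched as an augmented least-squares problem: the unknown vector holds the transponder coordinates plus one depth-independent sound-speed offset per time segment, solved here by Gauss-Newton. All geometry, segment layout, and noise-free data below are hypothetical; the paper's Bayesian formulation also delivers parameter uncertainties, which this sketch omits.

```python
import numpy as np

# Hypothetical survey: ship track on the surface, one transponder on the
# seabed at known depth, travel times split into two time segments.
ship = np.column_stack([np.linspace(-500.0, 500.0, 20), np.full(20, 200.0)])
depth, c0 = 1000.0, 1500.0
seg = np.repeat([0, 1], 10)                      # segment index of each ping
true_xy, true_dc = np.array([50.0, -80.0]), np.array([0.0, 4.0])

def slant_range(xy):
    return np.sqrt(np.sum((ship - xy) ** 2, axis=1) + depth ** 2)

tobs = slant_range(true_xy) / (c0 + true_dc[seg])    # synthetic travel times

# Unknowns: transponder (x, y) plus one sound-speed offset per segment,
# estimated jointly rather than in two separate steps.
p = np.zeros(4)
for _ in range(30):                              # Gauss-Newton iterations
    xy, dc = p[:2], p[2:]
    r, c = slant_range(xy), c0 + dc[seg]
    res = tobs - r / c                           # travel-time residuals
    J = np.zeros((len(tobs), 4))
    J[:, 0] = (xy[0] - ship[:, 0]) / (r * c)     # d t / d x
    J[:, 1] = (xy[1] - ship[:, 1]) / (r * c)     # d t / d y
    for k in (0, 1):
        J[seg == k, 2 + k] = -r[seg == k] / c[seg == k] ** 2   # d t / d dc_k
    p += np.linalg.lstsq(J, res, rcond=None)[0]

# p now holds (x, y, dc_segment1, dc_segment2), jointly estimated.
```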

  14. Synthesis of Systemic Functional Theory & Dynamical Systems Theory for Socio-Cultural Modeling

    DTIC Science & Technology

    2011-01-26

    is, language and other resources (e.g. images and sound resources) are conceptualised as inter-locking systems of meaning which realise four...hierarchical ranks and strata (e.g. sounds, word groups, clauses, and complex discourse structures in language, and elements, figures and episodes in images ...integrating platform for describing how language and other resources (e.g. images and sound) work together to fulfil particular objectives. While

  15. Ion-ion dynamic structure factor of warm dense mixtures

    DOE PAGES

    Gill, N. M.; Heinonen, R. A.; Starrett, C. E.; ...

    2015-06-25

    In this study, the ion-ion dynamic structure factor of warm dense matter is determined using the recently developed pseudoatom molecular dynamics method [Starrett et al., Phys. Rev. E 91, 013104 (2015)]. The method uses density functional theory to determine ion-ion pair interaction potentials that have no free parameters. These potentials are used in classical molecular dynamics simulations. This constitutes a computationally efficient and realistic model of dense plasmas. Comparison with recently published simulations of the ion-ion dynamic structure factor and sound speed of warm dense aluminum finds good to reasonable agreement. Using this method, we make predictions of the ion-ion dynamical structure factor and sound speed of a warm dense mixture: equimolar carbon-hydrogen. This material is commonly used as an ablator in inertial confinement fusion capsules, and our results are amenable to direct experimental measurement.

  16. Instability of the Superfluid Flow as Black-Hole Lasing Effect.

    PubMed

    Finazzi, S; Piazza, F; Abad, M; Smerzi, A; Recati, A

    2015-06-19

    We show that the critical velocity of a superfluid flow through a penetrable barrier coincides with the onset of the analog black-hole lasing effect. This dynamical instability is triggered by modes resonating in an effective cavity formed by two horizons enclosing the barrier. The location of the horizons is set by v(x)=c(x), with v(x),c(x) being the local fluid velocity and sound speed, respectively. We compute the critical velocity analytically and show that it is univocally determined by the configuration of the horizons. In the limit of broad barriers, the continuous spectrum at the origin of the Hawking-like radiation and of the Landau energetic instability is recovered.

  17. Kinetic limit of heterogeneous melting in metals.

    PubMed

    Ivanov, Dmitriy S; Zhigilei, Leonid V

    2007-05-11

    The velocity and nanoscale shape of the melting front are investigated in a model that combines the molecular dynamics method with a continuum description of the electron heat conduction and electron-phonon coupling. The velocity of the melting front is strongly affected by the local drop of the lattice temperature, defined by the kinetic balance between the transfer of thermal energy to the latent heat of melting, the electron heat conduction from the overheated solid, and the electron-phonon coupling. The maximum velocity of the melting front is found to be below 3% of the room temperature speed of sound in the crystal, suggesting a limited contribution of heterogeneous melting under conditions of fast heating.

  18. Elastic properties of aspirin in its crystalline and glassy phases studied by micro-Brillouin scattering

    NASA Astrophysics Data System (ADS)

    Ko, Jae-Hyeon; Lee, Kwang-Sei; Ike, Yuji; Kojima, Seiji

    2008-11-01

    The acoustic waves propagating along the direction perpendicular to the (1 0 0) cleavage plane of an aspirin crystal were investigated using micro-Brillouin spectroscopy, from which C11, C55 and C66 were obtained. The temperature dependence of the longitudinal acoustic waves could be explained by normal anharmonic lattice models, while the transverse acoustic waves showed an abnormal increase in the hypersonic attenuation at low temperatures, indicating their coupling to local remnant dynamics. The sound velocity as well as the attenuation of the longitudinal acoustic waves of glassy aspirin showed a substantial change at ~235 K, confirming a transition from the glassy to the supercooled liquid state in vitreous aspirin.
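    For reference, the sound velocity quoted in Brillouin studies of this kind follows from the measured Brillouin frequency shift via the standard scattering relation (generic symbols, not taken from the paper): for laser wavelength λ₀, refractive index n, and scattering angle θ,

```latex
f_B = \frac{2 n v}{\lambda_0}\,\sin\frac{\theta}{2}
\qquad\Longrightarrow\qquad
v = \frac{f_B\,\lambda_0}{2 n \sin(\theta/2)}
```

    so the hypersonic velocity v and, through the linewidth, the attenuation are read directly off the Brillouin doublet.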

  19. Modular and Adaptive Control of Sound Processing

    NASA Astrophysics Data System (ADS)

    van Nort, Douglas

    This dissertation presents research into the creation of systems for the control of sound synthesis and processing. The focus differs from much of the work related to digital musical instrument design, which has rightly concentrated on the physicality of the instrument and interface: sensor design, choice of controller, feedback to performer and so on. Oftentimes a particular choice of sound processing is made, and the resultant parameters from the physical interface are conditioned and mapped to the available sound parameters in an exploratory fashion. The main goal of the work presented here is to demonstrate the importance of the space that lies between physical interface design and the choice of sound manipulation algorithm, and to present a new framework for instrument design that strongly considers this essential part of the design process. In particular, this research takes the viewpoint that instrument designs should be considered in a musical control context, and that both control and sound dynamics must be considered in tandem. In order to achieve this holistic approach, the work presented in this dissertation assumes complementary points of view. Instrument design is first seen as a function of musical context, focusing on electroacoustic music and leading to a view on gesture that relates perceived musical intent to the dynamics of an instrumental system. The important design concept of mapping is then discussed from a theoretical and conceptual point of view, relating perceptual, systems and mathematically-oriented ways of examining the subject. This theoretical framework gives rise to a mapping design space, functional analysis of pertinent existing literature, implementations of mapping tools, instrumental control designs and several perceptual studies that explore the influence of mapping structure. Each of these reflects a high-level approach in which control structures are imposed on top of a high-dimensional space of control and sound synthesis parameters. In this view, desired gestural dynamics and sonic response are achieved through modular construction of mapping layers that are themselves subject to parametric control. Complementing this view of the design process, the work concludes with an approach in which the creation of gestural control/sound dynamics is considered at the low level of the underlying sound model. The result is an adaptive system that is specialized to noise-based transformations that are particularly relevant in an electroacoustic music context. Taken together, these different approaches to design and evaluation result in a unified framework for creation of an instrumental system. The key point is that this framework addresses the influence that mapping structure and control dynamics have on the perceived feel of the instrument. Each of the results illustrates this using either top-down or bottom-up approaches that consider musical control context, thereby pointing to the greater potential for refined sonic articulation that can be had by combining them in the design process.

  20. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    PubMed

    Hausmann, Laura; von Campenhausen, Mark; Endler, Frank; Singheiser, Martin; Wagner, Hermann

    2009-11-05

    When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. 
    These data provide new insights into the function of external hearing structures and open up the possibility of applying the results to autonomous agents, the creation of virtual auditory environments for humans, or hearing aids.
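    Virtual acoustic stimuli of the kind used in this study are produced by convolving a source signal with left- and right-ear head-related impulse responses (HRIRs, the time-domain counterpart of HRTFs). The sketch below is a deliberately crude stand-in: it replaces measured owl HRIRs with pure delays, so only the ITD cue survives, but it shows the convolution pipeline and that the imposed interaural delay is recoverable from the virtual stimulus.

```python
import numpy as np

fs = 48_000
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs // 10)           # 100 ms broadband noise burst

# Toy "HRIRs": pure delays standing in for measured head-related impulse
# responses (real HRIRs also impose direction-dependent spectral shaping).
itd_samples = 12                                # 250 microseconds at 48 kHz
hrir_left = np.zeros(64)
hrir_left[itd_samples] = 1.0                    # far ear: delayed
hrir_right = np.zeros(64)
hrir_right[0] = 1.0                             # near ear

left = np.convolve(noise, hrir_left)            # virtual left-ear signal
right = np.convolve(noise, hrir_right)          # virtual right-ear signal

# Cross-correlating the two channels recovers the imposed ITD.
xc = np.correlate(left, right, mode="full")
lag = int(np.argmax(xc)) - (len(right) - 1)     # = itd_samples
```

    Filtering the same noise with the measured ruff-removed HRTFs instead of these toy delays is exactly how the "virtual ruff removal" stimuli were generated.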

  1. Atmospheric effects on microphone array analysis of aircraft vortex sound

    DOT National Transportation Integrated Search

    2006-05-08

    This paper provides the basis of a comprehensive analysis of vortex sound propagation : through the atmosphere in order to assess real atmospheric effects on acoustic array : processing. Such effects may impact vortex localization accuracy and detect...

  2. Oceanographic Measurements Program Review.

    DTIC Science & Technology

    1982-03-01

    prototype Advanced Microstructure Profiler (AMP) was completed and the unit was operationally tested in local waters (Lake Washington and Puget Sound ...Expendables ....... ............. ..21 A.W. Green The Developent of an Air-Launched ................ 25 Expendable Sound Velocimeter (AXSV); R. Bixby...8217., ,? , .’,*, ;; .,’...; "’ . :" .* " . .. ". ;’ - ~ ~ ~ ~ ’ V’ 7T W, V a .. -- THE DEVELOPMENT OF AN AIR-LAUNCHED EXPENDABLE SOUND VELOCIMETER (AXSV) Richard Bixby

  3. Atmospheric Propagation

    NASA Technical Reports Server (NTRS)

    Embleton, Tony F. W.; Daigle, Gilles A.

    1991-01-01

    Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
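    The first two mechanisms reduce to one line of arithmetic. A minimal sketch, using an illustrative (not measured) absorption coefficient:

```python
import math

def received_level(source_db, r, r0=1.0, alpha_db_per_m=0.0):
    """Level at range r from a point source: spherical spreading loses
    20*log10(r/r0) dB (about 6 dB per doubling of distance, with no
    frequency dependence), while molecular absorption removes alpha dB
    per metre (alpha grows strongly with frequency)."""
    return source_db - 20.0 * math.log10(r / r0) - alpha_db_per_m * (r - r0)

# Spreading alone: each doubling of distance costs about 6 dB.
l2 = received_level(94.0, 2.0)     # ~87.98 dB
l4 = received_level(94.0, 4.0)     # ~81.96 dB

# With an illustrative high-frequency absorption of 0.05 dB/m, a 100 m path
# loses a further ~5 dB on top of the spreading loss.
l100 = received_level(94.0, 100.0, alpha_db_per_m=0.05)
```

    Turbulent scattering, the third mechanism, is stochastic and does not fit this deterministic level budget; it instead adds fluctuations about these mean levels.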

  4. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, needs for invisible sound sources and very specific acoustical environment make the use of open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.

  5. Sound waves and resonances in electron-hole plasma

    NASA Astrophysics Data System (ADS)

    Lucas, Andrew

    2016-06-01

    Inspired by the recent experimental signatures of relativistic hydrodynamics in graphene, we investigate theoretically the behavior of hydrodynamic sound modes in such quasirelativistic fluids near charge neutrality, within linear response. Locally driving an electron fluid at a frequency resonant with such a sound mode can lead to large increases in the electrical response at the edges of the sample, a signature that cannot be explained using diffusive models of transport. We discuss the robustness of this signal to various effects, including electron-acoustic phonon coupling, disorder, and long-range Coulomb interactions. These long-range interactions convert the sound mode into a collective plasmonic mode at low frequencies unless the fluid is charge neutral. At the smallest frequencies, the response in a disordered fluid is quantitatively what is predicted by a "momentum relaxation time" approximation. However, this approximation fails at higher frequencies (which can be parametrically small), where the classical localization of sound waves cannot be neglected. Experimental observation of such resonances is a clear signature of relativistic hydrodynamics, and provides an upper bound on the viscosity of the electron-hole plasma.

  6. Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution

    PubMed Central

    Park, Yeonseok; Choi, Anthony

    2017-01-01

    The asymmetric structure around the receiver provides a particular time delay for each specific incoming propagation path. This paper designs a monaural sound localization system based on the reflective structure around the microphone. The reflective plates are placed to present a direction-wise time delay, which is naturally processed by convolution with the sound source. The received signal is analyzed to estimate the dominant time delay by homomorphic deconvolution, which applies the real cepstrum and inverse cepstrum sequentially to derive the propagation response's autocorrelation. Once the localization system accurately estimates this information, the time delay model computes the corresponding reflection for localization. Because of structural limitations, the localization process performs the estimation in two stages: range and angle. A software toolchain spanning propagation physics and algorithm simulation realizes the optimal 3D-printed structure. The acoustic experiments in the anechoic chamber show that 79.0% of the study range data from the isotropic signal is properly detected by the response value, and 87.5% of the specific direction data from the study range signal is properly estimated by the response time. The product of both rates gives an overall hit rate of 69.1%. PMID:28946625
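    The delay-estimation core of homomorphic deconvolution can be sketched as follows: the real cepstrum of a signal containing one reflection shows a peak at the reflection's lag (quefrency), because the echo adds a periodic ripple to the log magnitude spectrum. The signal, lag, and gain below are synthetic, not the paper's measured data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay, gain = 4096, 60, 0.5          # one reflection: 60-sample lag, gain 0.5

s = rng.standard_normal(n)              # direct-path sound
x = s.copy()
x[delay:] += gain * s[:-delay]          # received = direct + delayed reflection

# Real cepstrum: inverse FFT of the log magnitude spectrum. The reflection
# contributes log|1 + gain*exp(-i*w*delay)|, a spectral ripple whose cepstral
# peak sits at quefrency == delay (with weaker repeats at 2*delay, 3*delay, ...).
cepstrum = np.fft.irfft(np.log(np.abs(np.fft.rfft(x)) + 1e-12))

est_delay = int(np.argmax(cepstrum[1 : n // 2])) + 1    # skip quefrency 0
```

    In the paper's two-stage scheme, a delay estimated this way is then mapped through the known plate geometry to a range and an angle.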

  7. Comparison between bilateral cochlear implants and Neurelec Digisonic(®) SP Binaural cochlear implant: speech perception, sound localization and patient self-assessment.

    PubMed

    Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René

    2013-01-01

    An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.

  8. On the Locality of Transient Electromagnetic Soundings with a Single-Loop Configuration

    NASA Astrophysics Data System (ADS)

    Barsukov, P. O.; Fainberg, E. B.

    2018-03-01

    The possibilities of reconstructing two-dimensional (2D) cross sections based on the data of the profile soundings by the transient electromagnetic method (TEM) with a single ungrounded loop are illustrated on three-dimensional (3D) models. The process of reconstruction includes three main steps: transformation of the responses in the depth dependence of resistivity ρ(h) measured along the profile, with their subsequent stitching into the 2D pseudo section; point-by-point one-dimensional (1D) inversion of the responses with the starting model constructed based on the transformations; and correction of the 2D cross section with the use of 2.5-dimensional (2.5D) block inversion. It is shown that single-loop TEM soundings allow studying the geological media within a local domain the lateral dimensions of which are commensurate with the depth of the investigation. The structure of the medium beyond this domain insignificantly affects the sounding results. This locality enables the TEM to reconstruct the geoelectrical structure of the medium from the 2D cross sections with the minimal distortions caused by the lack of information beyond the profile of the transient response measurements.

  9. Nonlinear Dynamic of Curved Railway Tracks in Three-Dimensional Space

    NASA Astrophysics Data System (ADS)

    Liu, X.; Ngamkhanong, C.; Kaewunruen, S.

    2017-12-01

    On curved tracks, high-pitch noise pollution can often be a considerable concern for rail asset owners, commuters, and people living or working along the rail corridor. Inevitably, the wheel/rail interface acts as a traveling source of sound and vibration, which spreads over long distances of the rail network. The sound and vibration can take various forms and spectra. The undesirable sound and vibration on curves, often called ‘noise’, includes flanging and squealing. This paper focuses on the squeal noise phenomenon on curved tracks located in urban environments and highlights the effect of curve radii on lateral track dynamics. It is important to note that rail freight curve noise, especially curve squeal, can be observed almost everywhere and on every type of track structure. The most pressing noise appears on sharper curved tracks, where excessive lateral wheel/rail dynamics resonate with falling friction states, generating a tonal noise problem, the so-called ‘squeal’. Many researchers have carried out measurements and simulations to understand the actual root causes of the squeal noise. Most researchers believe that wheel resonance over falling friction is the main cause, whilst a few others think that dynamic mode coupling of the wheel and rail may also cause the squeal. This paper is therefore devoted to a systems-thinking approach and dynamic assessment for resolving railway curve noise problems. Simulations of railway tracks with different curve radii are carried out to develop a state-of-the-art understanding of lateral track dynamics, including rail dynamics, cant dynamics, gauge dynamics and overall track responses.

  10. Assessment of auditory and psychosocial handicap associated with unilateral hearing loss among Indian patients.

    PubMed

    Augustine, Ann Mary; Chrysolyte, Shipra B; Thenmozhi, K; Rupa, V

    2013-04-01

    In order to assess psychosocial and auditory handicap in Indian patients with unilateral sensorineural hearing loss (USNHL), a prospective study was conducted on 50 adults with USNHL in the ENT Outpatient clinic of a tertiary care centre. The hearing handicap inventory for adults (HHIA) as well as speech in noise and sound localization tests were administered to patients with USNHL. An equal number of age-matched, normal controls also underwent the speech and sound localization tests. The results showed that HHIA scores ranged from 0 to 60 (mean 20.7). Most patients (84.8 %) had either mild to moderate or no handicap. Emotional subscale scores were higher than social subscale scores (p = 0.01). When the effect of sociodemographic factors on HHIA scores was analysed, educated individuals were found to have higher social subscale scores (p = 0.04). Age, sex, side and duration of hearing loss, occupation and income did not affect HHIA scores. Speech in noise and sound localization were significantly poorer in cases compared to controls (p < 0.001). About 75 % of patients refused a rehabilitative device. We conclude that USNHL in Indian adults does not usually produce severe handicap. When present, the handicap is more emotional than social. USNHL significantly affects sound localization and speech in noise. Yet, affected patients seldom seek a rehabilitative device.

  11. Influence of aging on human sound localization

    PubMed Central

    Dobreva, Marina S.; O'Neill, William E.

    2011-01-01

    Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004

  12. A real-time biomimetic acoustic localizing system using time-shared architecture

    NASA Astrophysics Data System (ADS)

    Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn

    2008-04-01

    In this paper a real-time sound source localizing system is proposed, based on previously developed mammalian auditory models. Traditionally, following these models, which use interaural time delay (ITD) estimates, the amount of parallel computation needed by a system to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. Therefore a new approach using a time-shared architecture is introduced. The proposed architecture is a purely sample-driven digital system, and it closely follows the continuous-time approach described in the models. Rather than having dedicated hardware on a per-frequency-channel basis, a specialized core channel, shared across all frequency bands, is used. Because its optimized execution time is much shorter than the system's sample period, the proposed time-shared solution allows the same number of virtual channels to be processed as the dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important in the efficient hardware implementation of a real-time biomimetic sound source localization system.
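    As a reminder of the quantity the ITD channels estimate, a far-field spherical-head approximation maps a measured ITD to a source azimuth. The head dimensions below are generic textbook values, not parameters of the proposed hardware.

```python
import math

def azimuth_from_itd(itd_s, ear_distance_m=0.175, c=343.0):
    """Far-field approximation: ITD = (d / c) * sin(azimuth), so
    azimuth = asin(ITD * c / d). The argument is clamped so that
    measurement noise cannot push it outside asin's domain."""
    x = max(-1.0, min(1.0, itd_s * c / ear_distance_m))
    return math.degrees(math.asin(x))

az = azimuth_from_itd(255e-6)    # ~30 degrees off the midline
```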

  13. Recovery of Neonatal Head Turning to Decreased Sound Pressure Level.

    ERIC Educational Resources Information Center

    Tarquinio, Nancy; And Others

    1990-01-01

    Investigated newborns' responses to decreased sound pressure level (SPL) by means of a localized head turning habituation procedure. Findings, which demonstrated recovery of neonatal head turning to decreased SPL, were inconsistent with the selective receptor adaptation model. (RH)

  14. Psychophysical evidence for auditory motion parallax.

    PubMed

    Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz

    2018-04-17

    Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.

  15. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. Auditory BCIs can use many characteristics of a stimulus as cues, such as tone, pitch, and voice. Spatial information about auditory stimuli also provides useful information for a BCI. In a portable system, however, virtual auditory stimuli have to be presented spatially through earphones or headphones instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables virtual auditory stimuli to be presented to users from any direction through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended to the direction of a presented stimulus. The mean accuracy across subjects was 70.0% for single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performance. These results suggest that out-of-head sound localization enables a high-performance, loudspeaker-less portable BCI system. PMID:23437338
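
    The classification step can be sketched in a dependency-free way. The "EEG" features below are synthetic stand-ins with a hypothetical P300-like effect on a few feature dimensions, and the hinge-loss training loop is a generic linear SVM, not the paper's classifier or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for epoched EEG features (hypothetical): "attended"
# trials (y = +1) carry a small P300-like deflection on features 10..15.
n_trials, n_features = 300, 32
X = rng.normal(size=(n_trials, n_features))
y = np.where(rng.random(n_trials) < 0.5, 1, -1)
X[y == 1, 10:16] += 1.0

def train_linear_svm(X, y, lam=1e-3, eta=0.01, epochs=100):
    """Linear SVM trained by stochastic sub-gradient descent on the hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:       # hinge margin violated
                w += eta * (y[i] * X[i] - lam * w)
                b += eta * y[i]
            else:                               # only weight decay
                w -= eta * lam * w
    return w, b

w, b = train_linear_svm(X[:200], y[:200])
acc = np.mean(np.sign(X[200:] @ w + b) == y[200:])
print("held-out accuracy:", acc)
```

    Trial averaging, as in the study, would simply replace single-trial rows of X with averages over repeated presentations before classification.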

  16. Relevance of Spectral Cues for Auditory Spatial Processing in the Occipital Cortex of the Blind

    PubMed Central

    Voss, Patrice; Lepore, Franco; Gougoux, Frédéric; Zatorre, Robert J.

    2011-01-01

    We have previously shown that some blind individuals can localize sounds more accurately than their sighted counterparts when one ear is obstructed, and that this ability is strongly associated with occipital cortex activity. Given that spectral cues are important for monaurally localizing sounds when one ear is obstructed, and that blind individuals are more sensitive to small spectral differences, we hypothesized that enhanced use of spectral cues via occipital cortex mechanisms could explain the better performance of blind individuals in monaural localization. Using positron-emission tomography (PET), we scanned blind and sighted persons as they discriminated between sounds originating from a single spatial position, but with different spectral profiles that simulated different spatial positions based on head-related transfer functions. We show here that a sub-group of early blind individuals with superior monaural sound localization abilities performed significantly better than any other group on this spectral discrimination task. For all groups, performance was best for stimuli simulating peripheral positions, consistent with the notion that spectral cues are more helpful for discriminating peripheral sources. PET results showed cerebral blood flow increases in the occipital cortex for all blind groups, but this was also the case in the sighted group. A voxel-wise covariation analysis showed that greater occipital recruitment was associated with better performance across all blind subjects but not the sighted. An inter-regional covariation analysis showed that the occipital activity in the blind covaried with that of several frontal and parietal regions known for their role in auditory spatial processing. Overall, these results support the notion that the superior ability of a sub-group of early-blind individuals to localize sounds is mediated by their superior ability to use spectral cues, and that this ability is subserved by cortical processing in the occipital cortex. PMID:21716600

  17. Sparse representation of Gravitational Sound

    NASA Astrophysics Data System (ADS)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values that is much smaller in number than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to popularize Gravitational Sound in the form of a ring tone.
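
    The underlying idea, approximating a signal with few atoms drawn from a redundant dictionary, can be illustrated with a generic greedy pursuit (orthogonal matching pursuit). The random dictionary and 3-sparse signal below are synthetic stand-ins, not the authors' procedure or LIGO data.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: approximate x with k atoms from dictionary D."""
    residual, support = x.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        Ds = D[:, support]
        # Re-fit coefficients over the whole support (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(Ds, x, rcond=None)
        residual = x - Ds @ coef
    return support, coef, residual

rng = np.random.default_rng(1)
n, m = 128, 512                        # signal length, dictionary size
D = rng.normal(size=(n, m))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
x = D[:, [5, 100, 300]] @ np.array([2.0, -1.5, 1.0])   # exactly 3-sparse signal

support, coef, residual = omp(D, x, k=3)
print(sorted(support), np.linalg.norm(residual))
```

    Here 3 of 512 atoms reproduce the signal essentially exactly, the same economy the abstract reports relative to a full orthogonal basis expansion.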

  18. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which do not merely transmit information but truly integrate the sound stimulus at each level, analyzing its three fundamental attributes: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically according to the characteristic frequency of the sound signal they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited around the cell tuned to the stimulus frequency). Spatial localization of the sound source is made possible by binaural hearing, by commissural pathways at each level of the auditory system, and by integration of the phase shift and intensity difference between the signals reaching the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea, adjusting the peripheral hearing organ to external sound conditions, thereby protecting the auditory system or increasing sensitivity through attention to the signal.

  19. Sound Generation by Aircraft Wake Vortices

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.; Wang, Frank Y.

    2003-01-01

    This report provides an extensive analysis of potential wake vortex noise sources that might be utilized to aid in their tracking. Several possible mechanisms of aircraft vortex sound generation are examined on the basis of discrete vortex dynamic models and characteristic acoustic signatures calculated by application of vortex sound theory. It is shown that the most robust mechanisms result in very low frequency infrasound. An instability of the vortex core structure is discussed and shown to be a possible mechanism for generating higher frequency sound bordering the audible frequency range. However, the frequencies produced are still low and cannot explain the reasonably high-pitched sound that has occasionally been observed experimentally. Since the robust mechanisms appear to generate only very low frequency sound, infrasonic tracking of the vortices may be warranted.

  20. Seeing Circles and Drawing Ellipses: When Sound Biases Reproduction of Visual Motion

    PubMed Central

    Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard

    2016-01-01

    The perception and production of biological movements are characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced as distorted when their kinematics deviate from this biological law. Whereas most studies dealing with this perceptual-motor relation focused on visual or kinaesthetic modalities in a unimodal context, in this paper we show that auditory dynamics strikingly biases visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. Auditory motions were synthesized friction sounds mimicking those produced by the friction of the pen on a paper when someone is drawing. Sounds were presented diotically and the auditory motion velocity was evoked through the friction sound timbre variations without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics without seeing their hand, they drew elliptical shapes. Moreover, distortion induced by inconsistent elliptical kinematics in both visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in the visuo-motor coupling in a multisensory context. PMID:27119411
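
    The law in question relates tangential speed to path curvature as v = K·κ^(−1/3) (equivalently, angular speed grows as curvature to the 2/3 power). A toy computation on an ellipse, with an arbitrary gain K, shows the predicted slowing in the sharply curved ends of the major axis:

```python
import numpy as np

def power_law_speed(kappa, K=1.0):
    """Tangential speed prescribed by the one-third power law: v = K * kappa**(-1/3)."""
    return K * kappa ** (-1.0 / 3.0)

# Curvature of the ellipse x = a*cos(u), y = b*sin(u)
a, b = 2.0, 1.0
u = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
kappa = (a * b) / (a**2 * np.sin(u)**2 + b**2 * np.cos(u)**2) ** 1.5

v = power_law_speed(kappa)
# Biological motion slows where curvature is high: speed at the end of the
# major axis (u = 0, sharpest bend) is below speed at the minor axis (u = pi/2).
print(v[0], v[250])
```

    A friction-sound timbre that follows such a velocity profile is what, per the study, suffices to evoke elliptical kinematics without any spatial cue.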

  1. Behavioural sensitivity to binaural spatial cues in ferrets: evidence for plasticity in the duplex theory of sound localization

    PubMed Central

    Keating, Peter; Nodal, Fernando R; King, Andrew J

    2014-01-01

    For over a century, the duplex theory has guided our understanding of human sound localization in the horizontal plane. According to this theory, the auditory system uses interaural time differences (ITDs) and interaural level differences (ILDs) to localize low-frequency and high-frequency sounds, respectively. Whilst this theory successfully accounts for the localization of tones by humans, some species show very different behaviour. Ferrets are widely used for studying both clinical and fundamental aspects of spatial hearing, but it is not known whether the duplex theory applies to this species or, if so, to what extent the frequency range over which each binaural cue is used depends on acoustical or neurophysiological factors. To address these issues, we trained ferrets to lateralize tones presented over earphones and found that the frequency dependence of ITD and ILD sensitivity broadly paralleled that observed in humans. Compared with humans, however, the transition between ITD and ILD sensitivity was shifted toward higher frequencies. We found that the frequency dependence of ITD sensitivity in ferrets can partially be accounted for by acoustical factors, although neurophysiological mechanisms are also likely to be involved. Moreover, we show that binaural cue sensitivity can be shaped by experience, as training ferrets on a 1-kHz ILD task resulted in significant improvements in thresholds that were specific to the trained cue and frequency. Our results provide new insights into the factors limiting the use of different sound localization cues and highlight the importance of sensory experience in shaping the underlying neural mechanisms. PMID:24256073

  2. Local Application of Sodium Salicylate Enhances Auditory Responses in the Rat’s Dorsal Cortex of the Inferior Colliculus

    PubMed Central

    Patel, Chirag R.; Zhang, Huiming

    2014-01-01

    Sodium salicylate (SS) is a widely used medication with side effects on hearing. In order to understand these side effects, we recorded sound-driven local-field potentials in a neural structure, the dorsal cortex of the inferior colliculus (ICd). Using a microiontophoretic technique, we applied SS at the sites of recording and studied how auditory responses were affected by the drug. Furthermore, we studied how the responses were affected by combined local application of SS and an agonist or antagonist of the type-A or type-B γ-aminobutyric acid receptor (GABAA or GABAB receptor). Results revealed that SS applied alone enhanced auditory responses in the ICd, indicating that the drug had local targets in the structure. Simultaneous application of the drug and a GABAergic receptor antagonist synergistically enhanced amplitudes of responses. The synergistic interaction between SS and a GABAA receptor antagonist had a relatively early start in reference to the onset of acoustic stimulation and the duration of this interaction was independent of sound intensity. The interaction between SS and a GABAB receptor antagonist had a relatively late start, and the duration of this interaction was dependent on sound intensity. Simultaneous application of the drug and a GABAergic receptor agonist produced an effect different from the sum of effects produced by the two drugs released individually. These differences between simultaneous and individual drug applications suggest that SS modified GABAergic inhibition in the ICd. Our results indicate that SS can affect sound-driven activity in the ICd by modulating local GABAergic inhibition. PMID:25452744

  3. Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing

    PubMed Central

    Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.

    2013-01-01

    Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How and whether activity in frontal areas influences sensory processing in A1, and what detailed changes occur in A1 at the level of single neurons and of neuronal populations, remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723

  4. Modeling the biomechanical influence of epilaryngeal stricture on the vocal folds: a low-dimensional model of vocal-ventricular fold coupling.

    PubMed

    Moisik, Scott R; Esling, John H

    2014-04-01

    PURPOSE: Physiological and phonetic studies suggest that, at moderate levels of epilaryngeal stricture, the ventricular folds impinge upon the vocal folds and influence their dynamical behavior, which is thought to be responsible for constricted laryngeal sounds. In this work, the authors examine this hypothesis through biomechanical modeling. METHOD: The dynamical response of a low-dimensional, lumped-element model of the vocal folds under the influence of vocal-ventricular fold coupling was evaluated. The model was assessed for F0 and cover-mass phase difference. Case studies of simulations of different constricted phonation types and of glottal stop illustrate various additional aspects of model performance. RESULTS: Simulated vocal-ventricular fold coupling lowers F0 and perturbs the mucosal wave. It also appears to reinforce irregular patterns of oscillation, and it can enhance laryngeal closure in glottal stop production. CONCLUSION: The effects of simulated vocal-ventricular fold coupling are consistent with sounds, such as creaky voice, harsh voice, and glottal stop, that have been observed to involve epilaryngeal stricture and apparent contact between the vocal folds and ventricular folds. This supports the view that vocal-ventricular fold coupling is important in the vibratory dynamics of such sounds and, furthermore, suggests that these sounds may intrinsically require epilaryngeal stricture.

  5. Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.

    PubMed

    Firtha, Gergely; Fiala, Péter

    2017-08-01

    The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions, 2.5D Wave Field Synthesis driving functions are derived for arbitrarily shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.
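
    For orientation, the basic structure of a WFS driving function, a per-loudspeaker delay plus a cos(angle)/sqrt(distance) weight, can be sketched for the much simpler case of a static point source behind a linear array. The sqrt(jk) spectral pre-filter, the referencing amplitude correction, and the paper's moving-source and arbitrary-contour treatment are all omitted, and the geometry below is an illustrative assumption.

```python
import numpy as np

def wfs_drive(src, spk_x, sig, fs, c=343.0):
    """Delay-and-weight sketch of WFS driving signals for a linear array.

    Loudspeakers lie on the line y = 0; the virtual point source 'src' sits
    behind the array (y < 0) and listeners are at y > 0.
    """
    out = np.zeros((len(spk_x), len(sig)))
    for n, x in enumerate(spk_x):
        dx, dy = x - src[0], 0.0 - src[1]
        r = np.hypot(dx, dy)                  # source-to-loudspeaker distance
        weight = (dy / r) / np.sqrt(r)        # cos(incidence angle) / sqrt(r)
        delay = int(round(r / c * fs))        # propagation delay in samples
        out[n, delay:] = weight * sig[: len(sig) - delay]
    return out

fs = 8000
sig = np.zeros(256)
sig[0] = 1.0                                  # unit impulse as source signal
spk_x = np.linspace(-2.0, 2.0, 9)             # 9 loudspeakers, 0.5 m spacing
drive = wfs_drive(src=(0.0, -1.0), spk_x=spk_x, sig=sig, fs=fs)
print(np.argmax(drive[4]), np.argmax(drive[0]))
```

    The delays grow toward the edges of the array, recreating the curved wavefront of the virtual source; the article's contribution is deriving the corresponding functions when the source position itself is a function of time.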

  6. Dynamics of Clouds and Mesoscale Circulations over the Maritime Continent

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Wang, S.; Xian, P.; Reid, J. S.; Nachamkin, J.

    2010-12-01

    In recent decades Southeast Asia (SEA) has seen rapid economic growth as well as increased biomass burning, resulting in high air pollution levels and reduced air quality. At the same time, clouds often prevent accurate air-quality monitoring and analysis using satellite observations. The Seven SouthEast Asian Studies (7SEAS) field campaign currently underway over SEA provides an unprecedented opportunity to study the complex interplay between aerosol and clouds. 7SEAS is a comprehensive interdisciplinary atmospheric sciences program run through an international partnership of NASA, NRL, ONR and seven local institutions, including those from Indonesia, Malaysia, the Philippines, Singapore, Taiwan, Thailand, and Vietnam. While the original goal of 7SEAS is to isolate the impacts of aerosol particles on weather and the environment, it is recognized that better understanding of SEA meteorological conditions, especially those associated with cloud formation and evolution, is critical to the success of the campaign. In this study we attempt to gain more insight into the dynamic and physical processes associated with low-level clouds and atmospheric circulation at the regional scale over SEA, using the Navy's Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS®), a regional forecast model in operation at FNMOC since 1998. This effort comprises two main components. First, multiple years of COAMPS operational forecasts over SEA are analyzed for the basic climatology of atmospheric features. Second, mesoscale circulation and cloud properties are simulated at relatively higher resolution (15 km) for selected periods in the Gulf of Tonkin and adjacent coastal areas. Simulation results are compared to MODIS cloud observations and local soundings obtained during 7SEAS for model verification. Atmospheric boundary layer processes are examined in relation to spatial and temporal variations of cloud fields. The current work serves as an important step toward improving our understanding of the effects of aerosol particles on maritime clouds. The detailed analysis will be presented at the conference.

  7. Effects of capacity limits, memory loss, and sound type in change deafness.

    PubMed

    Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S

    2017-11-01

    Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness is due to limits on the capacity to process multiple sounds and to the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were the same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection for the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.

  8. Simulation of Sound Waves Using the Lattice Boltzmann Method for Fluid Flow: Benchmark Cases for Outdoor Sound Propagation

    PubMed Central

    Salomons, Erik M.; Lohman, Walter J. A.; Zhou, Han

    2016-01-01

    Propagation of sound waves in air can be considered as a special case of fluid dynamics. Consequently, the lattice Boltzmann method (LBM) for fluid flow can be used for simulating sound propagation. In this article application of the LBM to sound propagation is illustrated for various cases: free-field propagation, propagation over porous and non-porous ground, propagation over a noise barrier, and propagation in an atmosphere with wind. LBM results are compared with solutions of the equations of acoustics. It is found that the LBM works well for sound waves, but dissipation of sound waves with the LBM is generally much larger than real dissipation of sound waves in air. To circumvent this problem it is proposed here to use the LBM for assessing the excess sound level, i.e. the difference between the sound level and the free-field sound level. The effect of dissipation on the excess sound level is much smaller than the effect on the sound level, so the LBM can be used to estimate the excess sound level for a non-dissipative atmosphere, which is a useful quantity in atmospheric acoustics. To reduce dissipation in an LBM simulation two approaches are considered: i) reduction of the kinematic viscosity and ii) reduction of the lattice spacing. PMID:26789631
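
    A minimal one-dimensional (D1Q3) lattice Boltzmann BGK sketch can illustrate the method's use for sound propagation: a small density pulse splits into two counter-propagating acoustic waves travelling at the lattice speed of sound c_s = 1/sqrt(3) cells per step. The grid size, relaxation time, and pulse amplitude below are illustrative choices, not the article's benchmark configurations.

```python
import numpy as np

nx, tau, steps = 400, 0.6, 100
w = np.array([2/3, 1/6, 1/6])         # D1Q3 weights for velocities c = 0, +1, -1
c = np.array([0, 1, -1])

rho = np.ones(nx)
rho[nx//2 - 5 : nx//2 + 5] += 1e-3    # tiny acoustic pulse (linear regime)
f = w[:, None] * rho[None, :]         # initialize at equilibrium, zero velocity

def equilibrium(rho, u):
    cu = c[:, None] * u[None, :]
    return w[:, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*u**2)

for _ in range(steps):
    rho = f.sum(axis=0)
    u = (c[:, None] * f).sum(axis=0) / rho
    f += (equilibrium(rho, u) - f) / tau      # BGK collision
    f[1] = np.roll(f[1], 1)                   # stream right-movers
    f[2] = np.roll(f[2], -1)                  # stream left-movers

rho = f.sum(axis=0)
peak = int(np.argmax(rho[nx//2:]))            # right-going front, ~steps/sqrt(3) cells
print(peak)
```

    The relaxation time tau sets the kinematic viscosity, nu = c_s²(tau − 1/2); lowering tau toward 1/2 (or refining the lattice) reduces the numerical dissipation that the article identifies as the main practical limitation.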

  10. DXL: A Sounding Rocket Mission for the Study of Solar Wind Charge Exchange and Local Hot Bubble X-Ray Emission

    NASA Technical Reports Server (NTRS)

    Galeazzi, M.; Prasai, K.; Uprety, Y.; Chiao, M.; Collier, M. R.; Koutroumpa, D.; Porter, F. S.; Snowden, S.; Cravens, T.; Robertson, I.

    2011-01-01

    The Diffuse X-rays from the Local galaxy (DXL) mission is an approved sounding rocket project with a first launch scheduled around December 2012. Its goal is to identify and separate the X-ray emission generated by solar wind charge exchange from that of the local hot bubble to improve our understanding of both. With 1,000 square centimeters of proportional counters and a grasp of about 10 square centimeters sr in both the 1/4 and 3/4 keV bands, DXL will achieve in a 5-minute flight what cannot be achieved by current and future X-ray satellites.

  11. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  12. Re-Sonification of Objects, Events, and Environments

    NASA Astrophysics Data System (ADS)

    Fink, Alex M.

    Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.

  13. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. Obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.

  14. OBSERVATIONS OF PHYTOPLANKTON SIZE STRUCTURE, NUTRIENTS, VARIABLE FLUORESCENCE, AND ALGAL PHOSPHATASE ACTIVITY IN A GULF COAST ESTUARY

    EPA Science Inventory

    During 2001, phytoplankton dynamics, physiology, and related environmental conditions were studied in Santa Rosa Sound, Florida, USA, at near-weekly intervals. Santa Rosa Sound is a component of the Pensacola Bay system located in the northern Gulf of Mexico. Environmental parame...

  15. Sonification Design for Complex Work Domains: Dimensions and Distractors

    ERIC Educational Resources Information Center

    Anderson, Janet E.; Sanderson, Penelope

    2009-01-01

    Sonification--representing data in sound--is a potential method for supporting human operators who have to monitor dynamic processes. Previous research has investigated a limited number of sound dimensions and has not systematically investigated the impact of dimensional interactions on sonification effectiveness. In three experiments the authors…

  16. Loudness Change in Response to Dynamic Acoustic Intensity

    ERIC Educational Resources Information Center

    Olsen, Kirk N.; Stevens, Catherine J.; Tardieu, Julien

    2010-01-01

    Three experiments investigate psychological, methodological, and domain-specific characteristics of loudness change in response to sounds that continuously increase in intensity (up-ramps), relative to sounds that decrease (down-ramps). Timbre (vowel, violin), layer (monotone, chord), and duration (1.8 s, 3.6 s) were manipulated in Experiment 1.…

  17. Reduced order modeling of head related transfer functions for virtual acoustic displays

    NASA Astrophysics Data System (ADS)

    Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley

    2003-04-01

    The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from minus 90 deg to plus 90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
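
    A reduced order state-space model of this kind reproduces an impulse response by iterating x[k+1] = A x[k] + B u[k], y[k] = C x[k]. The sketch below simulates a toy second-order model; the matrices are illustrative, not derived from real HRIR data, and Kung's Hankel-matrix SVD identification step itself is omitted:

```python
def simulate_impulse_response(A, B, C, n_samples):
    """Impulse response of a discrete-time state-space model
    x[k+1] = A x[k] + B u[k], y[k] = C x[k] (direct term D assumed 0)."""
    order = len(A)
    x = [0.0] * order
    y = []
    for k in range(n_samples):
        u = 1.0 if k == 0 else 0.0  # unit impulse input
        y.append(sum(C[i] * x[i] for i in range(order)))
        x = [sum(A[i][j] * x[j] for j in range(order)) + B[i] * u
             for i in range(order)]
    return y

# Illustrative 2nd-order model (not a real HRIR): a damped resonance
A = [[1.6, -0.81], [1.0, 0.0]]  # companion form, poles ~ 0.9*exp(+/-0.47i)
B = [1.0, 0.0]
C = [0.5, 0.3]
h = simulate_impulse_response(A, B, C, 8)
```

    The samples of `h` are the model's Markov parameters C A^(k-1) B; a reduced order fit keeps only the state dimensions with significant Hankel singular values.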

  18. The GISS sounding temperature impact test

    NASA Technical Reports Server (NTRS)

    Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.

    1978-01-01

    The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29 - Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8 to 12 hour improvement in forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for numerical forecasts. The improvement was 75% in the midwest.

  19. Understanding and mimicking the dual optimality of the fly ear

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Currano, Luke; Gee, Danny; Helms, Tristan; Yu, Miao

    2013-08-01

    The fly Ormia ochracea has the remarkable ability, given an eardrum separation of only 520 μm, to pinpoint the 5 kHz chirp of its cricket host. Previous research showed that the two eardrums are mechanically coupled, which amplifies the directional cues. We have now performed a mechanics and optimization analysis which reveals that the right coupling strength is key: it results in simultaneously optimized directional sensitivity and directional cue linearity at 5 kHz. We next demonstrated that this dual optimality is replicable in a synthetic device and can be tailored for a desired frequency. Finally, we demonstrated a miniature sensor endowed with this dual optimality at 8 kHz with unparalleled sound localization. This work provides a quantitative and mechanistic explanation for the fly's sound-localization ability from a new perspective, and it provides a framework for the development of fly-ear-inspired sensors to overcome a previously insurmountable size constraint in engineered sound-localization systems.

  20. Dynamics and Instabilities of Acoustically Stressed Interfaces

    NASA Astrophysics Data System (ADS)

    Shi, William Tao

    An intense sound field exerts acoustic radiation pressure on a transitional layer between two continuous fluid media, leading to the unconventional dynamical behavior of the interface in the presence of the sound field. An understanding of this behavior has applications in the study of drop dynamics and surface rheology. Acoustic fields have also been utilized in the generation of interfacial instability, which may further encourage the dispersion or coalescence of liquids. Therefore, the study of the dynamics of the acoustically stressed interfaces is essential to infer the mechanism of the various phenomena related to interfacial dynamics and to acquire the properties of liquid surfaces. This thesis studies the dynamics of acoustically stressed interfaces through a theoretical model of surface interactions on both closed and open interfaces. Accordingly, a boundary integral method is developed to simulate the motions of a stressed interface. The method has been employed to determine the deformation, oscillation and instability of acoustically levitated drops. The generalized computations are found to be in good agreement with available experimental results. The linearized theory is also derived to predict the instability threshold of the flat interface, and is then compared with experiments conducted to observe and measure the unstable motions of the horizontal interface. This thesis is devoted to describing and classifying the simplest mechanisms by which acoustic fields provide a surface interaction with a fluid. A physical picture of the competing processes introduced by the evolution of an interface in a sound field is presented. The development of an initial small perturbation into a sharp form is observed on either a drop surface or a horizontal interface, indicating a strong focusing of acoustic energy at certain spots of the interface. 
Emphasis is placed on understanding the basic coupling mechanisms, rather than on particular applications that may involve this coupling. The dynamical behavior of a stressed drop can be determined in terms of a given form of an incident sound field and three dimensionless quantities. Thus, the behavior of a complex dynamic system has been clarified, permitting the exploration and interpretation of the nature of liquid surface phenomena.

  1. Prediction of far-field wind turbine noise propagation with parabolic equation.

    PubMed

    Lee, Seongkyu; Lee, Dongjai; Honhoff, Saskia

    2016-08-01

    Sound propagation from wind farms is typically simulated with engineering tools that neglect some atmospheric conditions and terrain effects. Wind and temperature profiles, however, can affect the propagation of sound and thus the perceived sound in the far field. A better understanding and application of those effects would allow a more optimized farm operation towards meeting noise regulations and optimizing energy yield. This paper presents the parabolic equation (PE) model development for accurate wind turbine noise propagation. The model is validated against analytic solutions for a uniform sound speed profile, benchmark problems for nonuniform sound speed profiles, and field sound test data for real environmental acoustics. It is shown that the PE model provides good agreement with the measured data, except for upwind propagation cases in which turbulence scattering is important. Finally, the PE model uses computational fluid dynamics results as input to accurately predict sound propagation for complex flows such as wake flows. It is demonstrated that wake flows significantly modify the sound propagation characteristics.

  2. Acoustic Performance of a Real-Time Three-Dimensional Sound-Reproduction System

    NASA Technical Reports Server (NTRS)

    Faller, Kenneth J., II; Rizzi, Stephen A.; Aumann, Aric R.

    2013-01-01

    The Exterior Effects Room (EER) is a 39-seat auditorium at the NASA Langley Research Center and was built to support psychoacoustic studies of aircraft community noise. The EER has a real-time simulation environment which includes a three-dimensional sound-reproduction system. This system requires real-time application of equalization filters to compensate for spectral coloration of the sound reproduction due to installation and room effects. This paper describes the efforts taken to develop the equalization filters for use in the real-time sound-reproduction system and the subsequent analysis of the system's acoustic performance. The acoustic performance of the compensated and uncompensated sound-reproduction system is assessed for its crossover performance, its performance under stationary and dynamic conditions, the maximum spatialized sound pressure level it can produce from a single virtual source, and the spatial uniformity of a generated sound field. Additionally, application examples are given to illustrate the compensated sound-reproduction system performance using recorded aircraft flyovers.

  3. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, in which interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping, sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
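
    The IPD and ILD cues analyzed above can be estimated per frequency channel from a pair of ear signals. A minimal sketch using a single DFT bin (rather than the narrowly tuned filter bank of the auditory periphery), with synthetic ear signals in place of real recordings:

```python
import cmath
import math

def binaural_cues(left, right, freq_bin):
    """Estimate ILD (dB) and IPD (rad) at one frequency bin from
    left/right ear signal frames of equal length."""
    n = len(left)
    def dft_bin(sig):
        return sum(sig[t] * cmath.exp(-2j * math.pi * freq_bin * t / n)
                   for t in range(n))
    L, R = dft_bin(left), dft_bin(right)
    ild = 20 * math.log10(abs(R) / abs(L))     # level difference, dB
    ipd = cmath.phase(R * L.conjugate())       # phase of the cross-spectrum
    return ild, ipd

# Synthetic frame: right ear twice as loud and lagging by 0.6 rad
N, k = 128, 4
left = [math.cos(2 * math.pi * k * t / N) for t in range(N)]
right = [2.0 * math.cos(2 * math.pi * k * t / N - 0.6) for t in range(N)]
ild, ipd = binaural_cues(left, right, k)
```

    For this frame the ILD comes out near 20*log10(2) ≈ 6 dB and the IPD near -0.6 rad; real scenes, as the abstract stresses, mix many sources and so spread these cues into broad distributions.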

  4. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, in which interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping, sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  5. Population Dynamics and Parasite Load of a Foraminifer on Its Antarctic Scallop Host with Their Carbonate Biomass Contributions.

    PubMed

    Hancock, Leanne G; Walker, Sally E; Pérez-Huerta, Alberto; Bowser, Samuel S

    2015-01-01

    We studied the population dynamics and parasite load of the foraminifer Cibicides antarcticus on its host the Antarctic scallop Adamussium colbecki from three localities differing by sea ice cover within western McMurdo Sound, Ross Sea, Antarctica: Explorers Cove, Bay of Sails and Herbertson Glacier. We also estimated CaCO3 biomass and annual production for both species. Cibicides populations varied by locality, valve type, and depth. Explorers Cove with multiannual sea ice had larger populations than the two annual sea ice localities, likely related to differences in nutrients. Populations were higher on Adamussium top valves, a surface that is elevated above the sediment. Depth did not affect Cibicides distributions except at Bay of Sails. Cibicides parasite load (the number of complete boreholes in Adamussium valves) varied by locality between 2% and 50%. For most localities the parasite load was < 20%, contrary to a previous report that ~50% of Cibicides were parasitic. The highest and lowest parasite load occurred at annual sea ice localities, suggesting that sea ice condition is not important. Rather, the number of adults that are parasitic could account for these differences. Cibicides bioerosion traces were categorized into four ontogenetic stages, ranging from newly attached recruits to parasitic adults. These traces provide an excellent proxy for population structure, revealing that Explorers Cove had a younger population than Bay of Sails. Both species are important producers of CaCO3. Cibicides CaCO3 biomass averaged 47-73 kg ha(-1) and Adamussium averaged 4987-6806 kg ha(-1) by locality. Annual production rates were much higher. Moreover, Cibicides represents 1.0-2.3% of the total host-parasite CaCO3 biomass. Despite living in the coldest waters on Earth, these species can contribute a substantial amount of CaCO3 to the Ross Sea and need to be incorporated into food webs, ecosystem models, and carbonate budgets for Antarctica.

  6. Population Dynamics and Parasite Load of a Foraminifer on Its Antarctic Scallop Host with Their Carbonate Biomass Contributions

    PubMed Central

    Pérez-Huerta, Alberto; Bowser, Samuel S.

    2015-01-01

    We studied the population dynamics and parasite load of the foraminifer Cibicides antarcticus on its host the Antarctic scallop Adamussium colbecki from three localities differing by sea ice cover within western McMurdo Sound, Ross Sea, Antarctica: Explorers Cove, Bay of Sails and Herbertson Glacier. We also estimated CaCO3 biomass and annual production for both species. Cibicides populations varied by locality, valve type, and depth. Explorers Cove with multiannual sea ice had larger populations than the two annual sea ice localities, likely related to differences in nutrients. Populations were higher on Adamussium top valves, a surface that is elevated above the sediment. Depth did not affect Cibicides distributions except at Bay of Sails. Cibicides parasite load (the number of complete boreholes in Adamussium valves) varied by locality between 2% and 50%. For most localities the parasite load was < 20%, contrary to a previous report that ~50% of Cibicides were parasitic. The highest and lowest parasite load occurred at annual sea ice localities, suggesting that sea ice condition is not important. Rather, the number of adults that are parasitic could account for these differences. Cibicides bioerosion traces were categorized into four ontogenetic stages, ranging from newly attached recruits to parasitic adults. These traces provide an excellent proxy for population structure, revealing that Explorers Cove had a younger population than Bay of Sails. Both species are important producers of CaCO3. Cibicides CaCO3 biomass averaged 47-73 kg ha(-1) and Adamussium averaged 4987-6806 kg ha(-1) by locality. Annual production rates were much higher. Moreover, Cibicides represents 1.0-2.3% of the total host-parasite CaCO3 biomass. Despite living in the coldest waters on Earth, these species can contribute a substantial amount of CaCO3 to the Ross Sea and need to be incorporated into food webs, ecosystem models, and carbonate budgets for Antarctica. PMID:26186724

  7. Remote Sensing of Suspended Sediment Dynamics in the Mississippi Sound

    NASA Astrophysics Data System (ADS)

    Merritt, D. N.; Skarke, A. D.; Silwal, S.; Dash, P.

    2016-02-01

    The Mississippi Sound is a semi-enclosed estuary between the coast of Mississippi and a chain of offshore barrier islands, with relatively shallow water depths and high marine biodiversity, that is widely utilized for commercial fishing and public recreation. The discharge of sediment-laden rivers into the Mississippi Sound and the adjacent Northern Gulf of Mexico creates turbid plumes that can extend over hundreds of square kilometers along the coast and persist for multiple days. The concentration of suspended sediment in these coastal waters is an important parameter in the calculation of regional sediment budgets as well as in the analysis of water-quality factors such as primary productivity, nutrient dynamics, and the transport of pollutants and pathogens. The spectral resolution, sampling frequency, and regional-scale spatial domain associated with satellite-based sensors make remote sensing an ideal tool to monitor suspended sediment dynamics in the Northern Gulf of Mexico. Accordingly, the presented research evaluates the validity of published models that relate remote sensing reflectance to suspended sediment concentration (SSC), for similar environmental settings, against 51 in situ observations of SSC from the Mississippi Sound. Additionally, regression analysis is used to correlate additional in situ observations of SSC in Mississippi Sound with coincident observations of visible and near-infrared band reflectance collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor aboard the Aqua satellite, in order to develop a site-specific empirical predictive model for SSC. Finally, specific parameters of the sampled suspended sediment, such as grain size and mineralogy, are analyzed in order to quantify their respective contributions to total remotely sensed reflectance.
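
    The site-specific empirical model described above amounts to regressing in situ SSC against coincident band reflectance. A minimal least-squares sketch; the reflectance/SSC pairs below are hypothetical, not the study's MODIS data:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical band reflectance (dimensionless) vs. SSC (mg/L) pairs
reflectance = [0.01, 0.02, 0.03, 0.05, 0.08]
ssc = [5.0, 11.0, 14.0, 26.0, 39.0]
slope, intercept = linear_fit(reflectance, ssc)

# Predict SSC for a new reflectance observation
predicted = slope * 0.04 + intercept
```

    A published model of this form transfers to a new site only if the sediment optics (grain size, mineralogy) are comparable, which is why the study re-fits the coefficients locally.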

  8. A review of the perceptual effects of hearing loss for frequencies above 3 kHz.

    PubMed

    Moore, Brian C J

    2016-12-01

    Hearing loss caused by exposure to intense sounds usually has its greatest effects on audiometric thresholds at 4 and 6 kHz. However, in several countries compensation for occupational noise-induced hearing loss is calculated using the average of audiometric thresholds for selected frequencies up to 3 kHz, based on the implicit assumption that hearing loss for frequencies above 3 kHz has no material adverse consequences. This paper assesses whether this assumption is correct. Studies are reviewed that evaluate the role of hearing for frequencies above 3 kHz. Several studies show that frequencies above 3 kHz are important for the perception of speech, especially when background sounds are present. Hearing at high frequencies is also important for sound localization, especially for resolving front-back confusions. Hearing for frequencies above 3 kHz is important for the ability to understand speech in background sounds and for the ability to localize sounds. The audiometric threshold at 4 kHz and perhaps 6 kHz should be taken into account when assessing hearing in a medico-legal context.
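
    The compensation rule discussed above averages audiometric thresholds over a fixed set of frequencies, so including frequencies above 3 kHz can change the computed loss substantially. A sketch with a hypothetical noise-induced audiogram (the notch at 4-6 kHz mirrors the pattern the abstract describes; the dB values are invented):

```python
def average_hearing_loss(thresholds_db, freqs_khz):
    """Average audiometric threshold (dB HL) over a chosen set of
    frequencies, as used in simple compensation formulas."""
    return sum(thresholds_db[f] for f in freqs_khz) / len(freqs_khz)

# Hypothetical noise-induced audiogram (dB HL), worst at 4-6 kHz
audiogram = {0.5: 10, 1: 10, 2: 15, 3: 20, 4: 45, 6: 50, 8: 30}

loss_to_3k = average_hearing_loss(audiogram, [0.5, 1, 2, 3])
loss_with_hf = average_hearing_loss(audiogram, [1, 2, 3, 4, 6])
```

    For this audiogram the average up to 3 kHz is 13.75 dB, while an average that includes 4 and 6 kHz is 28 dB, illustrating how the cutoff choice drives the assessed loss.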

  9. Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments

    NASA Astrophysics Data System (ADS)

    Horowitz, Seth S.; Simmons, Andrea M.; Blue, China

    2005-09-01

    Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.

  10. Preface

    USGS Publications Warehouse

    Baum, Rex L.; Godt, Jonathan W.; Highland, Lynn M.

    2008-01-01

    The idea for Landslides and Engineering Geology of the Seattle, Washington, Areagrew out of a major landslide disaster that occurred in the Puget Sound region at the beginning of 1997. Unusually heavy snowfall in late December 1996 followed by warm, intense rainfall on 31 December through 2 January 1997 produced hundreds of damaging landslides in communities surrounding Puget Sound. This disaster resulted in significant efforts of the local geotechnical community and local governments to repair the damage and to mitigate the effects of future landslides. The magnitude of the disaster attracted the attention of the U.S. Geological Survey (USGS), which was just beginning a large multihazards project for Puget Sound. The USGS immediately added a regional study of landslides to that project. Soon a partnership formed between the City of Seattle and the USGS to assess landslide hazards of Seattle.

  11. Near-Field Sound Localization Based on the Small Profile Monaural Structure

    PubMed Central

    Kim, Youngwoong; Kim, Keonwook

    2015-01-01

    The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by the dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths in a vertical fashion and rectangular wings positioned between the pipes in radial directions. The sound from an individual direction travels through the nearest open pipe, which generates the particular fundamental frequency according to the acoustic resonance. The Cepstral parameter is modified to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA) are derived by the designed model. From an azimuthal distance of 3–15 cm from the outer body of the pipes, the extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. The closer positions to the system demonstrate higher accuracy, and the overall hit rate performance is 78% up to 15 cm away from the structure body. PMID:26580618
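
    If the pipes act as quarter-wave resonators (an assumption made here for illustration; the paper derives its own acoustic model), each pipe length maps to a fundamental f0 = c/(4L), and an estimated fundamental can then be classified to the nearest pipe's direction. The lengths and 36-degree sector spacing below are hypothetical:

```python
SPEED_OF_SOUND = 343.0  # m/s at ~20 degrees C

def fundamental_from_length(length_m):
    """Fundamental of a quarter-wave resonator (one closed end):
    f0 = c / (4 L). That the device's pipes behave this way is an
    assumption for illustration."""
    return SPEED_OF_SOUND / (4.0 * length_m)

# Ten hypothetical pipe lengths (m), one per 36-degree azimuthal sector
pipe_lengths = [0.05 + 0.01 * i for i in range(10)]
pipe_f0 = [fundamental_from_length(L) for L in pipe_lengths]

def angle_of_arrival(estimated_f0):
    """Classify an estimated fundamental (e.g. from a cepstral peak)
    to the nearest pipe and return that pipe's sector angle, degrees."""
    idx = min(range(len(pipe_f0)), key=lambda i: abs(pipe_f0[i] - estimated_f0))
    return idx * 36.0

aoa = angle_of_arrival(1000.0)
```

    The cepstral stage of the real system supplies `estimated_f0`; the sketch only shows why distinct pipe lengths make the direction recoverable from a single microphone.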

  12. GABAergic Neural Activity Involved in Salicylate-Induced Auditory Cortex Gain Enhancement

    PubMed Central

    Lu, Jianzhong; Lobarinas, Edward; Deng, Anchun; Goodey, Ronald; Stolzberg, Daniel; Salvi, Richard J.; Sun, Wei

    2011-01-01

    Although high doses of sodium salicylate impair cochlear function, it paradoxically enhances sound-evoked activity in the auditory cortex (AC) and augments acoustic startle reflex responses, neural and behavioral metrics associated with hyperexcitability and hyperacusis. To explore the neural mechanisms underlying salicylate-induced hyperexcitability and “increased central gain”, we examined the effects of γ-aminobutyric acid (GABA) receptor agonists and antagonists on salicylate-induced hyperexcitability in the AC and startle reflex responses. Consistent with our previous findings, local or systemic application of salicylate significantly increased the amplitude of sound-evoked AC neural activity, but generally reduced spontaneous activity in the AC. Systemic injection of salicylate also significantly increased the acoustic startle reflex. S-baclofen or R-baclofen, GABA-B agonists, which suppressed sound-evoked AC neural firing rate and local field potentials, also suppressed the salicylate-induced enhancement of the AC field potential and the acoustic startle reflex. Local application of vigabatrin, which enhances GABA concentration in the brain, suppressed the salicylate-induced enhancement of AC firing rate. Systemic injection of vigabatrin also reduced the salicylate-induced enhancement of the acoustic startle reflex. Collectively, these results suggest that the sound-evoked behavioral and neural hyperactivity induced by salicylate may arise from a salicylate-induced suppression of GABAergic inhibition in the AC. PMID:21664433

  13. How to generate a sound-localization map in fish

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely unbiased by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense, calcareous, stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von-Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
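
    The von-Mises mapping of hair-cell orientation onto spike phase can be sketched as follows. The particular density-to-phase mapping here is an illustrative guess at the scheme, chosen so that optimally oriented hair cells fire earliest, which is what the "tuning in to the earliest synchronized activity" step requires:

```python
import math

def bessel_i0(kappa, terms=30):
    """Modified Bessel function I0 via its power series."""
    return sum((kappa / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def von_mises(theta, mu, kappa):
    """Von Mises density on the circle, peaked at mu."""
    return math.exp(kappa * math.cos(theta - mu)) / (2 * math.pi * bessel_i0(kappa))

def spike_phase(theta_j, theta_s, kappa=2.0, max_phase=math.pi):
    """Map hair-cell orientation theta_j, relative to the particle
    acceleration direction theta_s, onto a spike phase: the
    best-aligned cell fires at phase 0 (earliest)."""
    peak = von_mises(theta_s, theta_s, kappa)
    return max_phase * (1.0 - von_mises(theta_j, theta_s, kappa) / peak)

aligned = spike_phase(0.3, 0.3)                   # earliest possible spike
misaligned = spike_phase(0.3 + math.pi / 2, 0.3)  # fires later
```

    A midbrain unit that learns to respond to the earliest synchronized phases thereby becomes selective for one acceleration axis, which is how a map can emerge across units.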

  14. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first, experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second, numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopole sources are first considered, either monochromatic or with narrow- or wide-band frequency content. The source position is estimated with an error smaller than the wavelength. An application to a dipole sound source shows that this type of source is also very satisfactorily characterized.
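
    The numerical step in the paper back-propagates the time-reversed records through the linearized Euler equations. A far simpler caricature of the same focusing idea, ignoring flow entirely, is delay-and-sum re-alignment over a grid of candidate source positions: only at the true position do all re-aligned records add coherently. Geometry and wave speed below are toy values (1 grid unit per sample):

```python
def record(source_pos, mic_pos, pulse, n_samples):
    """Signal at a microphone: the pulse delayed by travel time
    (speed of sound = 1 grid unit per sample)."""
    delay = abs(mic_pos - source_pos)
    sig = [0.0] * n_samples
    for t, v in enumerate(pulse):
        if t + delay < n_samples:
            sig[t + delay] = v
    return sig

def focus_metric(records, mic_positions, candidate, n_samples):
    """Delay-and-sum: re-align records assuming the source sits at
    `candidate` and return the peak of the coherent sum."""
    best = 0.0
    for t in range(n_samples):
        total = 0.0
        for sig, m in zip(records, mic_positions):
            idx = t + abs(m - candidate)
            if idx < n_samples:
                total += sig[idx]
        best = max(best, total)
    return best

mics = [0, 7, 20]          # microphone positions on the array axis
src = 12                   # true source position
pulse = [0.0, 1.0, 0.0]
N = 64
recs = [record(src, m, pulse, N) for m in mics]
estimate = max(range(21), key=lambda x: focus_metric(recs, mics, x, N))
```

    The coherent sum reaches its maximum (here 3.0, all three records aligned) only at the true position; the full time-reversal simulation additionally corrects for mean-flow convection, which this sketch cannot capture.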

  15. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by a magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
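
    Time delay estimation of the kind used by the hearing robots can be sketched as a cross-correlation peak search, followed by the standard far-field bearing formula sin(angle) = c*tau/d. The sample rate and microphone spacing below are assumed for illustration, not the paper's values:

```python
import math

def tde_crosscorr(sig_a, sig_b, max_lag):
    """Delay (in samples) of sig_b relative to sig_a: the lag that
    maximizes the cross-correlation of the two records."""
    n = len(sig_a)
    def corr(lag):
        return sum(sig_a[t] * sig_b[t + lag]
                   for t in range(n) if 0 <= t + lag < n)
    return max(range(-max_lag, max_lag + 1), key=corr)

# Synthetic Gaussian-windowed tone; mic B hears it 8 samples after mic A
pulse = [math.sin(0.3 * t) * math.exp(-0.05 * (t - 20) ** 2) for t in range(40)]
a = pulse + [0.0] * 20
b = [0.0] * 8 + pulse + [0.0] * 12
lag = tde_crosscorr(a, b, 15)

# Far-field bearing from the delay: sin(angle) = c * tau / d
FS = 44100.0  # sample rate, Hz (assumed)
C = 343.0     # speed of sound, m/s
D = 0.3       # microphone spacing, m (assumed)
tau = lag / FS
angle_deg = math.degrees(math.asin(max(-1.0, min(1.0, C * tau / D))))
```

    With more than two microphones, bearings from several pairs intersect to give a position estimate rather than a direction alone, which is what the array geometry in the paper provides.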

  16. 33 CFR 154.1125 - Additional response plan requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Prince William Sound, Alaska § 154.1125 Additional response plan requirements. (a) The owner or operator of a TAPAA facility shall include the following information in the Prince William Sound appendix to... for personnel, including local residents and fishermen, from the following locations in Prince William...

  17. Molecular dynamics simulations of classical sound absorption in a monatomic gas

    NASA Astrophysics Data System (ADS)

    Ayub, M.; Zander, A. C.; Huang, D. M.; Cazzolato, B. S.; Howard, C. Q.

    2018-05-01

    Sound wave propagation in argon gas is simulated using molecular dynamics (MD) in order to determine the attenuation of acoustic energy due to classical (viscous and thermal) losses at high frequencies. In addition, a method is described to estimate attenuation of acoustic energy using the thermodynamic concept of exergy. The results are compared against standing wave theory and the predictions of the theory of continuum mechanics. Acoustic energy losses are studied by evaluating various attenuation parameters and by comparing the changes in behavior at three different frequencies. This study demonstrates acoustic absorption effects in a gas simulated in a thermostatted molecular simulation and quantifies the classical losses in terms of the sound attenuation constant. The approach can be extended to further understanding of acoustic loss mechanisms in the presence of nanoscale porous materials in the simulation domain.

  18. 40 CFR 205.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... in decibels. (10) Highway means the streets, roads, and public ways in any State. (11) Fast Meter Response means that the fast dynamic response of the sound level meter shall be used. The fast dynamic...

  19. 40 CFR 205.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in decibels. (10) Highway means the streets, roads, and public ways in any State. (11) Fast Meter Response means that the fast dynamic response of the sound level meter shall be used. The fast dynamic...

  20. 40 CFR 205.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in decibels. (10) Highway means the streets, roads, and public ways in any State. (11) Fast Meter Response means that the fast dynamic response of the sound level meter shall be used. The fast dynamic...

  1. 40 CFR 205.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... in decibels. (10) Highway means the streets, roads, and public ways in any State. (11) Fast Meter Response means that the fast dynamic response of the sound level meter shall be used. The fast dynamic...

  2. Concert halls with strong lateral reflections enhance musical dynamics.

    PubMed

    Pätynen, Jukka; Tervo, Sakari; Robinson, Philip W; Lokki, Tapio

    2014-03-25

    One of the most thrilling cultural experiences is to hear live symphony-orchestra music build up from a whispering passage to a monumental fortissimo. The impact of such a crescendo has been thought to depend only on the musicians' skill, but here we show that interactions between the concert-hall acoustics and listeners' hearing also play a major role in musical dynamics. These interactions contribute to the shoebox-type concert hall's established success, but little prior research has been devoted to dynamic expression in this three-part transmission chain as a complete system. More forceful orchestral playing disproportionately excites high-frequency harmonics relative to those near the note's fundamental. This effect results in not only more sound energy, but also a different tone color. The concert hall transmits this sound, and the room geometry defines from which directions acoustic reflections arrive at the listener. Binaural directional hearing emphasizes high frequencies more when sound arrives from the sides of the head rather than from the median plane. Simultaneously, these same frequencies are emphasized by higher orchestral-playing dynamics. When the room geometry provides reflections from these directions, the perceived dynamic range is enhanced. Current room-acoustic evaluation methods assume linear behavior and thus neglect this effect. The hypothesis presented here is that the auditory excitation by reflections is emphasized most during an orchestral forte in concert halls with strong lateral reflections. The enhanced dynamic range provides an explanation for the success of rectangularly shaped concert-hall geometry.

  3. dTULP, the Drosophila melanogaster Homolog of Tubby, Regulates Transient Receptor Potential Channel Localization in Cilia

    PubMed Central

    Shim, Jaewon; Han, Woongsu; Lee, Jinu; Bae, Yong Chul; Chung, Yun Doo; Kim, Chul Hoon; Moon, Seok Jun

    2013-01-01

    Mechanically gated ion channels convert sound into an electrical signal for the sense of hearing. In Drosophila melanogaster, several transient receptor potential (TRP) channels have been implicated in this process. TRPN (NompC) and TRPV (Inactive) channels are localized in the distal and proximal ciliary zones of auditory receptor neurons, respectively. This segregated ciliary localization suggests distinct roles in auditory transduction. However, the regulation of this localization is not fully understood. Here we show that the Drosophila Tubby homolog, King tubby (hereafter called dTULP), regulates ciliary localization of TRPs. dTULP-deficient flies show uncoordinated movement and complete loss of sound-evoked action potentials. Inactive and NompC are mislocalized in the cilia of auditory receptor neurons in the dTulp mutants, indicating that dTULP is required for proper cilia membrane protein localization. This is the first demonstration that dTULP regulates TRP channel localization in cilia, and suggests that dTULP is a protein that regulates ciliary neurosensory functions. PMID:24068974

  4. SABER Observations of the OH Meinel Airglow Variability Near the Mesopause

    NASA Technical Reports Server (NTRS)

    Marsh, Daniel R.; Smith, Anne K.; Mlynczak, Martin G.

    2005-01-01

    The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument, one of four on board the TIMED satellite, observes the OH Meinel emission at 2.0 μm that peaks near the mesopause. The emission results from reactions between members of the oxygen and hydrogen chemical families that can be significantly affected by mesopause dynamics. In this study we compare SABER measurements of OH Meinel emission rates and temperatures with predictions from a 3-dimensional chemical-dynamical model. In general, the model is capable of reproducing both the observed diurnal and seasonal OH Meinel emission variability. The results indicate that the diurnal tide has a large effect on the overall magnitude and temporal variation of the emission at low latitudes. This tidal variability is so dominant that the seasonal cycle in the nighttime emission depends very strongly on the local time of the analysis. At higher latitudes, the emission has an annual cycle that is due mainly to transport of oxygen by the seasonally reversing mean circulation.

  5. Neural plasticity associated with recently versus often heard objects.

    PubMed

    Bourquin, Nathalie M-P; Spierer, Lucas; Murray, Micah M; Clarke, Stephanie

    2012-09-01

    In natural settings the same sound source is often heard repeatedly, with variations in spectro-temporal and spatial characteristics. We investigated how such repetitions influence sound representations and in particular how auditory cortices keep track of recently vs. often heard objects. A set of 40 environmental sounds was presented twice, i.e. as prime and as repeat, while subjects categorized the corresponding sound sources as living vs. non-living. Electrical neuroimaging analyses were applied to auditory evoked potentials (AEPs) comparing primes vs. repeats (effect of presentation) and the four experimental sections. Dynamic analysis of distributed source estimations revealed i) a significant main effect of presentation within the left temporal convexity at 164-215 ms post-stimulus onset; and ii) a significant main effect of section in the right temporo-parietal junction at 166-213 ms. A 3-way repeated measures ANOVA (hemisphere×presentation×section) applied to neural activity of the above clusters during the common time window confirmed the specificity of the left hemisphere for the effect of presentation, but not that of the right hemisphere for the effect of section. In conclusion, spatio-temporal dynamics of neural activity encode the temporal history of exposure to sound objects. Rapidly occurring plastic changes within the semantic representations of the left hemisphere keep track of objects heard a few seconds before, independent of the more general sound exposure history. Progressively occurring and more long-lasting plastic changes occurring predominantly within right hemispheric networks, which are known to code for perceptual, semantic and spatial aspects of sound objects, keep track of multiple exposures. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. A Low Cost GPS System for Real-Time Tracking of Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Markgraf, M.; Montenbruck, O.; Hassenpflug, F.; Turner, P.; Bull, B.; Bauer, Frank (Technical Monitor)

    2001-01-01

    In an effort to minimize the need for costly, complex tracking radars, the German Space Operations Center has set up a research project for GPS-based tracking of sounding rockets. As part of this project, a GPS receiver based on commercial technology for terrestrial applications has been modified to allow its use under the highly dynamic conditions of a sounding rocket flight. In addition, new antenna concepts are studied as an alternative to proven but costly wrap-around antennas.

  7. Developing a system for blind acoustic source localization and separation

    NASA Astrophysics Data System (ADS)

    Kulkarni, Raghavendra

    This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, as well as applications of the time-reversal (TR) algorithm to pinpoint the hyperactive neural activity inside the brain auditory structure that is correlated with the tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interferences from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated with the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.

  8. Early enhanced processing and delayed habituation to deviance sounds in autism spectrum disorder.

    PubMed

    Hudac, Caitlin M; DesChamps, Trent D; Arnett, Anne B; Cairney, Brianna E; Ma, Ruqian; Webb, Sara Jane; Bernier, Raphael A

    2018-06-01

    Children with autism spectrum disorder (ASD) exhibit difficulties processing and encoding sensory information in daily life. Cognitive response to environmental change in control individuals is naturally dynamic, meaning it habituates or reduces over time as one becomes accustomed to the deviance. The origin of atypical response to deviance in ASD may relate to differences in this dynamic habituation. The current study of 133 children and young adults with and without ASD examined classic electrophysiological responses (MMN and P3a), as well as temporal patterns of habituation (i.e., N1 and P3a change over time) in response to a passive auditory oddball task. Individuals with ASD showed an overall heightened sensitivity to change as exhibited by greater P3a amplitude to novel sounds. Moreover, youth with ASD showed dynamic ERP differences, including slower attenuation of the N1 response to infrequent tones and the P3a response to novel sounds. Dynamic ERP responses were related to parent ratings of auditory sensory-seeking behaviors, but not general cognition. As the first large-scale study to characterize temporal dynamics of auditory ERPs in ASD, our results provide compelling evidence that heightened response to auditory deviance in ASD is largely driven by early sensitivity and prolonged processing of auditory deviance. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Active control and sound synthesis--two different ways to investigate the influence of the modal parameters of a guitar on its sound.

    PubMed

    Benacchio, Simon; Mamou-Mani, Adrien; Chomette, Baptiste; Finel, Victor

    2016-03-01

    The vibrational behavior of musical instruments is usually studied using physical modeling and simulations. Recently, active control has proven effective for experimentally modifying the dynamical behavior of musical instruments. This approach could also be used as an experimental tool to systematically study fine physical phenomena. This paper proposes to use modal active control as an alternative to sound simulation to study the complex case of the coupling between classical guitar strings and the soundboard. A comparison between modal active control and sound simulation investigates the advantages, the drawbacks, and the limits of these two approaches.

  10. Dynamic Structure Factor and Transport Coefficients of a Homogeneously Driven Granular Fluid in Steady State

    NASA Astrophysics Data System (ADS)

    Vollmayr-Lee, Katharina; Zippelius, Annette; Aspelmeier, Timo

    2011-03-01

    We study the dynamic structure factor of a granular fluid of hard spheres, driven into a stationary nonequilibrium state by balancing the energy loss due to inelastic collisions with the energy input due to driving. The driving is chosen to conserve momentum, so that fluctuating hydrodynamics predicts the existence of sound modes. We present results of computer simulations which are based on an event-driven algorithm. The dynamic structure factor F(q, ω) is determined for volume fractions 0.05, 0.1, and 0.2 and coefficients of normal restitution 0.8 and 0.9. We observe sound waves, and compare our results for F(q, ω) with the predictions of generalized fluctuating hydrodynamics, which takes into account that temperature fluctuations decay either diffusively or with a finite relaxation rate, depending on wave number and inelasticity. We determine the speed of sound and the transport coefficients and compare them to the results of kinetic theory. K.V.L. thanks the Institute of Theoretical Physics, University of Goettingen, for financial support and hospitality.
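A generic estimator for F(q, ω) from particle trajectories can be sketched as follows (a textbook density-autocorrelation construction, not the authors' event-driven code; the box size, wave vector, and synthetic "trajectory" below are invented for illustration):

```python
import numpy as np

# Textbook estimator of the intermediate scattering function F(q, t) and
# its spectrum; generic sketch, not the authors' simulation code.
def dynamic_structure_factor(traj, q, dt):
    n_frames, N, _ = traj.shape
    rho = np.exp(1j * traj @ q).sum(axis=1) / np.sqrt(N)     # rho_q(t)
    # Autocorrelation <rho_q(t) rho_q*(0)>, averaged over time origins.
    f = np.array([np.mean(rho[s:] * np.conj(rho[:n_frames - s]))
                  for s in range(n_frames // 2)])
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(f), dt)
    return f, omega, np.abs(np.fft.rfft(f.real))             # F(q,t), omega, ~F(q,omega)

# Sanity check on uncorrelated (ideal-gas-like) configurations, where the
# static limit F(q, 0) = S(q) should be close to 1.
rng = np.random.default_rng(0)
L = 10.0
q = (2.0 * np.pi / L) * np.array([1.0, 0.0, 0.0])  # commensurate wave vector
traj = rng.uniform(0.0, L, size=(200, 500, 3))     # frames x particles x dims
f, omega, S_qw = dynamic_structure_factor(traj, q, dt=0.01)
```

In a real driven-granular simulation, the sound modes discussed in the abstract would appear as Brillouin side peaks in F(q, ω) near ω ≈ ±c_s q, with a width set by the sound attenuation.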

  11. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness...

  12. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations...

  13. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations...

  14. Anomalous sound absorption in the Voronoi liquid

    NASA Astrophysics Data System (ADS)

    Farago, Jean; Ruscher, Céline; Semenov, Alexandr; Baschnagel, Joerg

    The physics of simple fluids in the hydrodynamic limit, and notably the connection between the proper microscopic scales and the macroscopic hydrodynamical description, are nowadays well understood. In particular, the three-peak shape of the dynamical structure factor S(k, ω) is a universal feature, as is the k-dependence of the peak position (∝ k) and width (∝ k²), the latter accounting for the sound attenuation rate. In this talk, I will present a theoretical model of a monodisperse fluid whose interactions are defined via the Voronoi tessellations of the configurations (called the Voronoi liquid), which displays at low temperatures a marked violation of the universal features of S(k, ω), with a sound attenuation rate scaling only as k. This anomalous behaviour, which apparently violates the basic symmetries of the liquid state, is traced back to the existence of a timescale which is both (1) short enough for the viscoelastic features of the liquid to impact the relaxational dynamics and (2) long enough for the momentum diffusion to be substantially slower than the sound propagation on that characteristic time.

  15. How Internally Coupled Ears Generate Temporal and Amplitude Cues for Sound Localization.

    PubMed

    Vedurmudi, A P; Goulet, J; Christensen-Dalsgaard, J; Young, B A; Williams, R; van Hemmen, J L

    2016-01-15

    In internally coupled ears, displacement of one eardrum creates pressure waves that propagate through air-filled passages in the skull and cause displacement of the opposing eardrum, and conversely. By modeling the membrane, passages, and propagating pressure waves, we show that internally coupled ears generate unique amplitude and temporal cues for sound localization. The magnitudes of both these cues are directionally dependent. The tympanic fundamental frequency segregates a low-frequency regime with constant time-difference magnification from a high-frequency domain with considerable amplitude magnification.

  16. Examination of propeller sound production using large eddy simulation

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Kumar, Praveen; Mahesh, Krishnan

    2018-06-01

    The flow field of a five-bladed marine propeller operating at design condition, obtained using large eddy simulation, is used to calculate the resulting far-field sound. The results of three acoustic formulations are compared, and the effects of the underlying assumptions are quantified. The integral form of the Ffowcs-Williams and Hawkings (FW-H) equation is solved on the propeller surface, which is discretized into a collection of N radial strips. Further assumptions are made to reduce FW-H to a Curle acoustic analogy and a point-force dipole model. Results show that although the individual blades are strongly tonal in the rotor plane, the propeller is acoustically compact at low frequency and the tonal sound interferes destructively in the far field. The propeller is found to be acoustically compact for frequencies up to 100 times the rotation rate. The overall far-field acoustic signature is broadband. Locations of maximum sound occur along the axis of rotation, both upstream and downstream of the propeller. The propeller hub is found to be a source of significant sound to observers in the rotor plane, due to flow separation and interaction with the blade-root wakes. The majority of the propeller sound is generated by localized unsteadiness at the blade tip, which is caused by shedding of the tip vortex. Tonal blade sound is found to be caused by the periodic motion of the loaded blades. Turbulence created in the blade boundary layer is convected past the blade trailing edge, leading to generation of broadband noise along the blade. Acoustic energy is distributed among higher frequencies as the local Reynolds number increases radially along the blades. Sound source correlation and spectra are examined in the context of noise modeling.

  17. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    NASA Technical Reports Server (NTRS)

    Bargar, Robin

    1995-01-01

    The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to application in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages and message interplay, which we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.

  18. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves closely match the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  19. Development and Testing of a High Level Axial Array Duct Sound Source for the NASA Flow Impedance Test Facility

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)

    2000-01-01

    In this report both a frequency domain method for creating high-level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high-level sound an axial array of six JBL-2485 compression drivers was used. The pressure downstream is modeled as the input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre cut-on loading effect).
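The input-shaping idea, compensating the source and duct dynamics so that a desired waveform appears downstream, can be sketched as a regularized frequency-domain inversion. This is a toy model with an invented impulse response, not the report's measured source dynamics:

```python
import numpy as np

# Toy compensation of source/duct dynamics: invented impulse response
# (a pure delay plus a weaker echo), not the report's measured system.
n, fs = 1024, 48_000
h = np.zeros(n)
h[40], h[80] = 1.0, 0.5            # 40-sample delay plus an echo

# Desired downstream pressure: a Hann-windowed 2 kHz tone burst.
d = np.zeros(n)
d[100:228] = np.hanning(128) * np.sin(2 * np.pi * 2000 * np.arange(128) / fs)

# Regularized frequency-domain inverse: U = D H* / (|H|^2 + beta). The
# phase of 1/H re-advances the delay; beta guards against small |H|.
H, D = np.fft.rfft(h), np.fft.rfft(d)
beta = 1e-3
U = D * np.conj(H) / (np.abs(H) ** 2 + beta)
u = np.fft.irfft(U, n)             # compensated drive signal

# Verification: driving the modeled dynamics with u reproduces d.
y = np.fft.irfft(np.fft.rfft(u) * H, n)
err = np.linalg.norm(y - d) / np.linalg.norm(d)
```

The regularization term mirrors the report's trade-off between output level and reproduction accuracy: a larger beta limits the drive amplitude near weakly excited frequencies at the cost of a larger reproduction error.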

  20. Perception of Animacy from the Motion of a Single Sound Object.

    PubMed

    Nielsen, Rasmus Høll; Vuust, Peter; Wallentin, Mikkel

    2015-02-01

    Research in the visual modality has shown that the presence of certain dynamics in the motion of an object has a strong effect on whether or not the entity is perceived as animate. Cues for animacy are, among others, self-propelled motion and direction changes that are seemingly not caused by entities external to, or in direct contact with, the moving object. The present study aimed to extend this research into the auditory domain by determining if similar dynamics could influence the perceived animacy of a sound source. In two experiments, participants were presented with single, synthetically generated 'mosquito' sounds moving along trajectories in space, and asked to rate how certain they were that each sound-emitting entity was alive. At a random point on a linear motion trajectory, the sound source would deviate from its initial path and speed. Results confirm findings from the visual domain that a change in the velocity of motion is positively correlated with perceived animacy, and changes in direction were found to influence animacy judgment as well. This suggests that an ability to facilitate and sustain self-movement is perceived as a living quality not only in the visual domain, but in the auditory domain as well. © 2015 SAGE Publications.

  1. Cherenkov sound on a surface of a topological insulator

    NASA Astrophysics Data System (ADS)

    Smirnov, Sergey

    2013-11-01

    Topological insulators are currently of considerable interest due to peculiar electronic properties originating from helical states on their surfaces. Here we demonstrate that the sound excited by helical particles on surfaces of topological insulators has several exotic properties fundamentally different from sound propagating in nonhelical or even isotropic helical systems. Specifically, the sound may have strictly forward propagation, absent for isotropic helical states. Its dependence on the anisotropy of the realistic surface states shows distinctive behavior, which may be used as an alternative experimental tool to measure the anisotropy strength. Backward, or anomalous, Cherenkov sound, fascinating from a fundamental point of view, is excited above the critical angle π/2 when the anisotropy exceeds a critical value. Strikingly, at strong anisotropy the sound localizes into a few forward and backward beams propagating along specific directions.

  2. Evolutionary trends in directional hearing

    PubMed Central

    Carr, Catherine E.; Christensen-Dalsgaard, Jakob

    2016-01-01

    Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. PMID:27448850

  3. Automatic estimation of dynamics of ionospheric disturbances with 1–15 minute lifetimes as derived from ISTP SB RAS fast chirp-ionosonde data

    NASA Astrophysics Data System (ADS)

    Berngardt, Oleg; Bubnova, Tatyana; Podlesnyi, Aleksey

    2018-03-01

    We propose and test a method of analyzing ionograms of vertical ionospheric sounding, which is based on detecting deviations of the shape of an ionogram from its regular (averaged) shape. We interpret these deviations in terms of reflection from electron density irregularities at heights corresponding to the effective height. We examine the irregularities thus discovered within the framework of a model of a localized, uniformly moving irregularity, and determine their characteristic parameters: effective heights and observed vertical velocities. We analyze selected experimental data for three seasons (spring, winter, autumn) obtained near Irkutsk with a fast chirp ionosonde of ISTP SB RAS in 2013-2015. The analysis of six days of observations conducted in these seasons has shown that the observed vertical drift of the irregularities exhibits two characteristic distributions: a wide velocity distribution with a mean near 0 m/s and a standard deviation of ∼250 m/s, and a narrow distribution with a mean near -160 m/s. The analysis has demonstrated the effectiveness of the proposed algorithm for the automatic analysis of vertical sounding data with a high repetition rate.

  4. Correlating Solar Wind Modulation with Ionospheric Variability at Mars from MEX and MAVEN Observations

    NASA Astrophysics Data System (ADS)

    Kopf, A. J.; Morgan, D. D.; Halekas, J. S.; Ruhunusiri, S.; Gurnett, D. A.; Connerney, J. E. P.

    2017-12-01

    The synthesis of observations by the Mars Express and Mars Atmosphere and Volatiles Evolution (MAVEN) spacecraft allows for a unique opportunity to study variability in the Martian ionosphere from multiple perspectives. One major source of this variability is the solar wind. Due to its elliptical orbit, which precesses over time, MAVEN periodically spends part of its orbit outside the Martian bow shock, allowing for direct measurements of the solar wind impacting the Martian plasma environment. When the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) instrument aboard Mars Express is simultaneously sounding the ionosphere, the influence from changes in the solar wind can be observed. Previous studies have suggested a positive correlation, connecting ionospheric density to the solar wind proton flux, but depended on Earth-based measurements for solar wind conditions. More recently, research has indicated that observations of ionospheric variability from these two spacecraft can be connected in special cases, such as shock wave impacts or specific solar wind magnetic field orientations. Here we extend this to more general solar wind conditions and examine how changes in the solar wind properties measured by MAVEN instruments correlate with ionospheric structure and dynamics observed simultaneously in MARSIS remote and local measurements.

  5. Neural effects of cognitive control load on auditory selective attention

    PubMed Central

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R.; Mangalathu, Jain; Desai, Anjali

    2014-01-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 msec, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and the presence of irrelevant information revealed stronger activations, primarily in frontal and parietal areas, when irrelevant information was present under the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. PMID:24946314

  6. Behavior and modeling of two-dimensional precedence effect in head-unrestrained cats

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.

    2015-01-01

    The precedence effect (PE) is an auditory illusion that occurs when listeners localize nearly coincident and similar sounds from different spatial locations, such as a direct sound and its echo. It has mostly been studied in humans and animals with immobile heads in the horizontal plane; speaker pairs were often symmetrically located in the frontal hemifield. The present study examined the PE in head-unrestrained cats for a variety of paired-sound conditions along the horizontal, vertical, and diagonal axes. Cats were trained with operant conditioning to direct their gaze to the perceived sound location. Stereotypical PE-like behaviors were observed for speaker pairs placed in azimuth or diagonally in the frontal hemifield as the interstimulus delay was varied. For speaker pairs in the median sagittal plane, no clear PE-like behavior occurred. Interestingly, when speakers were placed diagonally in front of the cat, certain PE-like behavior emerged along the vertical dimension. However, PE-like behavior was not observed when both speakers were located in the left hemifield. A Hodgkin-Huxley model was used to simulate responses of neurons in the medial superior olive (MSO) to sound pairs in azimuth. The novel simulation incorporated a low-threshold potassium current and frequency mismatches to generate internal delays. The model exhibited distinct PE-like behavior, such as summing localization and localization dominance. The simulation indicated that certain encoding of the PE could have occurred before information reaches the inferior colliculus, and MSO neurons with binaural inputs having mismatched characteristic frequencies may play an important role. PMID:26133795

  7. Short-Latency, Goal-Directed Movements of the Pinnae to Sounds That Produce Auditory Spatial Illusions

    PubMed Central

    McClaine, Elizabeth M.; Yin, Tom C. T.

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848

  8. Short-latency, goal-directed movements of the pinnae to sounds that produce auditory spatial illusions.

    PubMed

    Tollin, Daniel J; McClaine, Elizabeth M; Yin, Tom C T

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a "phantom" sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved.

  9. Effect of background noise on neuronal coding of interaural level difference cues in rat inferior colliculus

    PubMed Central

    Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh

    2015-01-01

    Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNR), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while for a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. PMID:25865218
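    The ILD cue examined in this record is simply the level difference between the signals reaching the two ears. As a minimal illustration (not the recording or analysis pipeline of the study, and with hypothetical signals), ILD in dB can be computed from per-ear RMS amplitudes:

```python
import numpy as np

def ild_db(left, right):
    """Interaural level difference in dB (positive = right ear louder)."""
    rms_l = np.sqrt(np.mean(np.square(left)))
    rms_r = np.sqrt(np.mean(np.square(right)))
    return 20.0 * np.log10(rms_r / rms_l)

# A high-frequency tone attenuated by head shadow at the left ear:
t = np.linspace(0, 0.01, 480, endpoint=False)
tone = np.sin(2 * np.pi * 8000 * t)
print(round(ild_db(0.5 * tone, 1.0 * tone), 1))  # prints 6.0 (a 2x amplitude ratio)
```

    A factor-of-two amplitude difference corresponds to 20·log10(2) ≈ 6 dB, which is within the range of azimuth-dependent ILDs carried by high-frequency sounds.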

  10. Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.

    PubMed

    Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi

    2017-07-01

    We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
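    The conventional self-organizing map that the proposed method extends can be sketched in a few lines. This is the textbook algorithm only, not the authors' kernelized, sequence-based variant, and the data here are synthetic stand-ins for sound-event features:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5):
    """Minimal self-organizing map: find the best-matching unit for each
    sample, then pull that unit and its grid neighbours toward the sample."""
    h, w = grid
    proto = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)           # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 1e-3  # shrinking neighbourhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(proto - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            nb = np.exp(-d2 / (2 * sigma ** 2))
            proto += lr * nb[:, None] * (x - proto)
    return proto

# Two artificial "sound event" clusters; after training, nearby map units
# should specialize on one cluster each.
data = np.vstack([rng.normal(0, 0.1, (30, 3)), rng.normal(3, 0.1, (30, 3))])
proto = train_som(data)
```

    Kernelization replaces the Euclidean distances above with distances in an implicit feature space, which is what lets the authors' variant handle sequence-structured sound data.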

  11. Sound and vision: visualization of music with a soap film

    NASA Astrophysics Data System (ADS)

    Gaulon, C.; Derec, C.; Combriat, T.; Marmottant, P.; Elias, F.

    2017-07-01

    A vertical soap film, freely suspended at the end of a tube, is vibrated by a sound wave that propagates in the tube. If the sound wave is a piece of music, the soap film ‘comes alive’: colours, due to iridescences in the soap film, swirl, split and merge in time with the music (see the snapshots in figure 1 below). In this article, we analyse the rich physics behind these fascinating dynamical patterns: it combines the acoustic propagation in a tube, the light interferences, and the static and dynamic properties of soap films. The interaction between the acoustic wave and the liquid membrane results in capillary waves on the soap film, as well as non-linear effects leading to a non-oscillatory flow of liquid in the plane of the film, which induces several spectacular effects: generation of vortices, diphasic dynamical patterns inside the film, and swelling of the soap film under certain conditions. Each of these effects is associated with a characteristic time scale, which interacts with the characteristic time of the music play. This article shows the richness of those characteristic times that lead to dynamical patterns. Through its artistic interest, the experiments presented in this article provide a tool for popularizing and demonstrating science in the classroom or to a broader audience.

  12. Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State

    NASA Astrophysics Data System (ADS)

    Stoop, Ruedi; Gomez, Florian

    2016-07-01

    The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.
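    Power-law size statistics of the kind reported here are commonly verified by estimating the exponent with the maximum-likelihood (Hill) estimator. The sketch below does this on synthetic sizes drawn by inverse-transform sampling; it is illustrative only and unrelated to the cochlear model itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_power_law(n, alpha, xmin=1.0):
    """Inverse-transform sampling of a continuous power law p(x) ~ x^-alpha."""
    u = rng.random(n)
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def mle_exponent(x, xmin=1.0):
    """Maximum-likelihood (Hill) estimate: alpha = 1 + n / sum(ln(x/xmin))."""
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

sizes = sample_power_law(20000, alpha=2.0)
alpha_hat = mle_exponent(sizes)  # should recover ~2.0
```

    The MLE is preferred over fitting a line to a log-log histogram, which is known to bias the exponent.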

  13. A low order flow/acoustics interaction method for the prediction of sound propagation using 3D adaptive hybrid grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallinderis, Yannis, E-mail: kallind@otenet.gr; Vitsas, Panagiotis A.; Menounou, Penelope

    2012-07-15

    A low-order flow/acoustics interaction method for the prediction of sound propagation and diffraction in unsteady subsonic compressible flow using adaptive 3-D hybrid grids is investigated. The total field is decomposed into the flow field described by the Euler equations, and the acoustics part described by the Nonlinear Perturbation Equations. The method is shown capable of predicting monopole sound propagation, while employment of acoustics-guided adapted grid refinement improves the accuracy of capturing the acoustic field. Interaction of sound with solid boundaries is also examined in terms of reflection and diffraction. Sound propagation through an unsteady flow field is examined using static and dynamic flow/acoustics coupling, demonstrating the importance of the latter.

  14. A Sound Therapy-Based Intervention to Expand the Auditory Dynamic Range for Loudness among Persons with Sensorineural Hearing Losses: A Randomized Placebo-Controlled Clinical Trial

    PubMed Central

    Formby, Craig; Hawley, Monica L.; Sherlock, LaGuinn P.; Gold, Susan; Payne, JoAnne; Brooks, Rebecca; Parton, Jason M.; Juneau, Roger; Desporte, Edward J.; Siegle, Gregory R.

    2015-01-01

    The primary aim of this research was to evaluate the validity, efficacy, and generalization of principles underlying a sound therapy–based treatment for promoting expansion of the auditory dynamic range (DR) for loudness. The basic sound therapy principles, originally devised for treatment of hyperacusis among patients with tinnitus, were evaluated in this study in a target sample of unsuccessfully fit and/or problematic prospective hearing aid users with diminished DRs (owing to their elevated audiometric thresholds and reduced sound tolerance). Secondary aims included: (1) delineation of the treatment contributions from the counseling and sound therapy components to the full-treatment protocol and, in turn, the isolated treatment effects from each of these individual components to intervention success; and (2) characterization of the respective dynamics for full, partial, and control treatments. Thirty-six participants with bilateral sensorineural hearing losses and reduced DRs, which affected their actual or perceived ability to use hearing aids, were enrolled in and completed a placebo-controlled (for sound therapy) randomized clinical trial. The 2 × 2 factorial trial design was implemented with or without various assignments of counseling and sound therapy. Specifically, participants were assigned randomly to one of four treatment groups (nine participants per group), including: (1) group 1—full treatment achieved with scripted counseling plus sound therapy implemented with binaural sound generators; (2) group 2—partial treatment achieved with counseling and placebo sound generators (PSGs); (3) group 3—partial treatment achieved with binaural sound generators alone; and (4) group 4—a neutral control treatment implemented with the PSGs alone. Repeated measurements of categorical loudness judgments served as the primary outcome measure. 
The full-treatment categorical-loudness judgments for group 1, measured at treatment termination, were significantly greater than the corresponding pretreatment judgments measured at baseline at 500, 2,000, and 4,000 Hz. Moreover, increases in their “uncomfortably loud” judgments (∼12 dB over the range from 500 to 4,000 Hz) were superior to those measured for either of the partial-treatment groups 2 and 3 or for control group 4. Efficacy, assessed by treatment-related criterion increases ≥ 10 dB for judgments of uncomfortable loudness, was superior for full treatment (82% efficacy) compared with that for either of the partial treatments (25% and 40% for counseling combined with the placebo sound therapy and sound therapy alone, respectively) or for the control treatment (50%). The majority of the group 1 participants achieved their criterion improvements within 3 months of beginning treatment. The treatment effect from sound therapy was much greater than that for counseling, which was statistically indistinguishable in most of our analyses from the control treatment. The basic principles underlying the full-treatment protocol are valid and have general applicability for expanding the DR among individuals with sensorineural hearing losses, who may often report aided loudness problems. The positive full-treatment effects were superior to those achieved for either counseling or sound therapy in virtual or actual isolation, respectively; however, the delivery of both components in the full-treatment approach was essential for an optimum treatment outcome. PMID:27516711

  15. A Sound Therapy-Based Intervention to Expand the Auditory Dynamic Range for Loudness among Persons with Sensorineural Hearing Losses: A Randomized Placebo-Controlled Clinical Trial.

    PubMed

    Formby, Craig; Hawley, Monica L; Sherlock, LaGuinn P; Gold, Susan; Payne, JoAnne; Brooks, Rebecca; Parton, Jason M; Juneau, Roger; Desporte, Edward J; Siegle, Gregory R

    2015-05-01

    The primary aim of this research was to evaluate the validity, efficacy, and generalization of principles underlying a sound therapy-based treatment for promoting expansion of the auditory dynamic range (DR) for loudness. The basic sound therapy principles, originally devised for treatment of hyperacusis among patients with tinnitus, were evaluated in this study in a target sample of unsuccessfully fit and/or problematic prospective hearing aid users with diminished DRs (owing to their elevated audiometric thresholds and reduced sound tolerance). Secondary aims included: (1) delineation of the treatment contributions from the counseling and sound therapy components to the full-treatment protocol and, in turn, the isolated treatment effects from each of these individual components to intervention success; and (2) characterization of the respective dynamics for full, partial, and control treatments. Thirty-six participants with bilateral sensorineural hearing losses and reduced DRs, which affected their actual or perceived ability to use hearing aids, were enrolled in and completed a placebo-controlled (for sound therapy) randomized clinical trial. The 2 × 2 factorial trial design was implemented with or without various assignments of counseling and sound therapy. Specifically, participants were assigned randomly to one of four treatment groups (nine participants per group), including: (1) group 1-full treatment achieved with scripted counseling plus sound therapy implemented with binaural sound generators; (2) group 2-partial treatment achieved with counseling and placebo sound generators (PSGs); (3) group 3-partial treatment achieved with binaural sound generators alone; and (4) group 4-a neutral control treatment implemented with the PSGs alone. Repeated measurements of categorical loudness judgments served as the primary outcome measure. 
The full-treatment categorical-loudness judgments for group 1, measured at treatment termination, were significantly greater than the corresponding pretreatment judgments measured at baseline at 500, 2,000, and 4,000 Hz. Moreover, increases in their "uncomfortably loud" judgments (∼12 dB over the range from 500 to 4,000 Hz) were superior to those measured for either of the partial-treatment groups 2 and 3 or for control group 4. Efficacy, assessed by treatment-related criterion increases ≥ 10 dB for judgments of uncomfortable loudness, was superior for full treatment (82% efficacy) compared with that for either of the partial treatments (25% and 40% for counseling combined with the placebo sound therapy and sound therapy alone, respectively) or for the control treatment (50%). The majority of the group 1 participants achieved their criterion improvements within 3 months of beginning treatment. The treatment effect from sound therapy was much greater than that for counseling, which was statistically indistinguishable in most of our analyses from the control treatment. The basic principles underlying the full-treatment protocol are valid and have general applicability for expanding the DR among individuals with sensorineural hearing losses, who may often report aided loudness problems. The positive full-treatment effects were superior to those achieved for either counseling or sound therapy in virtual or actual isolation, respectively; however, the delivery of both components in the full-treatment approach was essential for an optimum treatment outcome.

  16. Dynamics of soundscape in a shallow water marine environment: a study of the habitat of the Indo-Pacific humpback dolphin.

    PubMed

    Guan, Shane; Lin, Tzu-Hao; Chou, Lien-Siang; Vignola, Joseph; Judge, John; Turo, Diego

    2015-05-01

    The underwater acoustic field is an important ecological element for many aquatic animals. This research examines the soundscape of a critically endangered Indo-Pacific humpback dolphin population in the shallow water environment off the west coast of Taiwan. Underwater acoustic recordings were conducted between late spring and late fall in 2012 at Yunlin (YL), which is close to a shipping lane, and Waisanding (WS), which is relatively pristine. Site-specific analyses were performed on the dynamics of the temporal and spectral acoustic characteristics for both locations. The results highlight the dynamics of the soundscape in two major octave bands: 150-300 Hz and 1.2-2.4 kHz. The acoustic energy in the former frequency band is mainly associated with passing container vessels near YL, while the latter frequency band is from sonic fish chorus at nighttime in both recording sites. In addition, large variation of low-frequency acoustic energy throughout the study period was observed at WS, where water depths ranged between 1.5 and 4.5 m depending on the tidal cycle. This suggests that, besides the particular sound sources present, the coastal soundscape may also be influenced by local bathymetry and the dynamics of the physical environment.

  17. Dynamics and Chemistry in Jovian Atmospheres: 2D Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Bordwell, B. R.; Brown, B. P.; Oishi, J.

    2016-12-01

    A key component of our understanding of the formation and evolution of planetary systems is chemical composition. Problematically, however, in the atmospheres of cooler gas giants, dynamics on the same timescale as chemical reactions pull molecular abundances out of thermochemical equilibrium. These disequilibrium abundances are treated using what is known as the "quench" approximation, based upon the mixing length theory of convection. The validity of this approximation is questionable, though, as the atmospheres of gas giants encompass two distinct dynamic regimes: convective and radiative. To resolve this issue, we conduct 2D hydrodynamical simulations using the state-of-the-art pseudospectral simulation framework Dedalus. In these simulations, we solve the fully compressible equations of fluid motion in a local slab geometry that mimics the structure of a planetary atmosphere (convective zone underlying a radiative zone). Through the inclusion of passive tracers, we explore the transport properties of both regimes, and assess the validity of the classical eddy diffusion parameterization. With the addition of active tracers, we examine the interactions between dynamical and chemical processes, and generate prescriptions for the observational community. By providing insight into mixing and feedback mechanisms in Jovian atmospheres, this research lays a solid foundation for future global simulations and the construction of physically-sound models for current and future observations.
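    The "quench" approximation referred to here compares a mixing-length estimate of the vertical mixing timescale, tau_mix ≈ H²/K_zz, against the chemical timescale; where mixing is faster, the abundance freezes at its deep value. A minimal sketch of that criterion, with illustrative (hypothetical) numbers rather than values from the simulations:

```python
def quenched(tau_chem_s, scale_height_m, kzz_m2s):
    """Quench criterion: mixing outpaces chemistry when
    tau_mix = H^2 / K_zz is shorter than tau_chem."""
    tau_mix = scale_height_m ** 2 / kzz_m2s
    return tau_mix < tau_chem_s

# Hypothetical numbers for a cool giant's upper troposphere:
H = 2.0e4        # pressure scale height [m]
Kzz = 1.0e4      # eddy diffusion coefficient [m^2 s^-1]
tau_CO = 1.0e8   # CO -> CH4 conversion timescale [s]
print(quenched(tau_CO, H, Kzz))  # tau_mix = 4e4 s << tau_CO, so True
```

    The simulations described above probe exactly the weak point of this recipe: a single K_zz is assumed to describe transport in both the convective and radiative regimes.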

  18. Top-down modulation of auditory processing: effects of sound context, musical expertise and attentional focus.

    PubMed

    Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D

    2009-10-01

    By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.

  19. Bionic Modeling of Knowledge-Based Guidance in Automated Underwater Vehicles.

    DTIC Science & Technology

    1987-06-24

    bugs and their foraging movements are heard by the sound of rustling leaves or rhythmic wing beats. ASYMMETRY OF EARS The faces of owls have captured...sound source without moving. The barn owl has binaural and monaural cues as well as cues that operate in relative motion when either the target or the...owl moves. Table 1 lists the cues. 7 TM No. 87-2068 Table 1. Sound Localization Parameters Used by the Barn Owl. BINAURAL PARAMETERS: 1. the

  20. Lummi Bay Marina, Whatcom County, Washington. Draft Detailed Project Report and Draft Environmental Impact Statement.

    DTIC Science & Technology

    1983-12-01

    observations of gray whales from the waters inside of Washington including the eastern Strait of Juan de Fuca, the San Juan Islands, Puget Sound, and Hood...waters in winter. In the North Pacific this species is presently estimated to number about 17,000 animals. One fin whale was pursued in Puget Sound i...owns submerged lands from tideland elevation -4.5 feet MLLW to deep water in Puget Sound. The Lummi Tribe (local sponsor) owns Reservation lands above

  1. Two-photon imaging and analysis of neural network dynamics

    NASA Astrophysics Data System (ADS)

    Lütcke, Henry; Helmchen, Fritjof

    2011-08-01

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.

  2. Broadband sound blocking in phononic crystals with rotationally symmetric inclusions.

    PubMed

    Lee, Joong Seok; Yoo, Sungmin; Ahn, Young Kwan; Kim, Yoon Young

    2015-09-01

    This paper investigates the feasibility of broadband sound blocking with rotationally symmetric extensible inclusions introduced in phononic crystals. By varying the size of four equally shaped inclusions gradually, the phononic crystal experiences remarkable changes in its band-stop properties, such as shifting/widening of multiple Bragg bandgaps and evolution to resonance gaps. Necessary extensions of the inclusions to block sound effectively can be determined for given incident frequencies by evaluating power transmission characteristics. By arraying finite dissimilar unit cells, the resulting phononic crystal exhibits broadband sound blocking from combinational effects of multiple Bragg scattering and local resonances even with small-numbered cells.
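    For normal incidence, the center of the first Bragg gap in a crystal like this falls roughly where half an acoustic wavelength matches the lattice period. The one-line estimate below is the textbook relation, not the paper's computed band structure, and the lattice constant is hypothetical:

```python
def bragg_center_hz(sound_speed_ms, lattice_const_m):
    """First Bragg gap center for normal incidence: half a wavelength
    per lattice period, i.e. f = c / (2a)."""
    return sound_speed_ms / (2.0 * lattice_const_m)

# Air (c = 343 m/s) and a 5 cm lattice constant:
print(bragg_center_hz(343.0, 0.05))  # prints 3430.0
```

    Varying the inclusion size shifts and widens the gaps around this estimate, which is the effect the paper exploits for broadband blocking.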

  3. Preliminary laboratory testing on the sound absorption of coupled cavity sonic crystal

    NASA Astrophysics Data System (ADS)

    Kristiani, R.; Yahya, I.; Harjana; Suparmi

    2016-11-01

    This paper focuses on the sound absorption performance of a coupled cavity sonic crystal. The crystal is constructed from pairs of cylindrical tubes of different diameters. A laboratory test procedure following ASTM E1050 was conducted to measure the sound absorption of the sonic crystal elements. The procedure was applied both to a single coupled scatterer and to a pair of similar structures. The results showed that the paired structure increases the sound absorption over a wider frequency range. It also brings a practical advantage: the local Helmholtz resonance can be tuned to a particular intended frequency.
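
    ASTM E1050 recovers the normal-incidence absorption coefficient from the transfer function between two microphones in an impedance tube. A minimal sketch of that calculation follows; the microphone spacing, sample distance, and the synthetic sample's reflection coefficient are illustrative values, not from the paper.

```python
import numpy as np

def absorption_coefficient(H12, f, s, l, c=343.0):
    """Normal-incidence absorption from the measured transfer function H12 = p2/p1
    between two tube microphones (ASTM E1050 transfer-function method).
    s: microphone spacing [m]; l: distance from sample face to the nearer mic [m]."""
    k = 2 * np.pi * f / c
    H_i = np.exp(-1j * k * s)              # incident-wave transfer function
    H_r = np.exp(+1j * k * s)              # reflected-wave transfer function
    R = (H12 - H_i) / (H_r - H12) * np.exp(2j * k * (l + s))
    return 1.0 - np.abs(R) ** 2

# Sanity check against a synthetic sample of known reflection coefficient R0
f, s, l, c = 500.0, 0.05, 0.1, 343.0
k = 2 * np.pi * f / c
R0 = 0.4 * np.exp(1j * 0.3)
x1, x2 = l + s, l                          # mic 1 is farther from the sample
p = lambda x: np.exp(1j * k * x) + R0 * np.exp(-1j * k * x)
H12 = p(x2) / p(x1)
print(absorption_coefficient(H12, f, s, l))   # ≈ 1 - |R0|^2 = 0.84
```

    The synthetic round trip confirms the formula is self-consistent; with real data, H12 comes from the cross-spectrum of the two microphone signals.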

  4. Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl’s Auditory Forebrain

    PubMed Central

    2017-01-01

    While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal correlations and high response noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than in the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
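
    Signal and noise correlations of the kind analyzed here are typically computed from trial-by-trial spike counts. The sketch below is a generic illustration, not the owl analysis: the tuning curves, trial counts, and noise levels are invented.

```python
import numpy as np

def signal_noise_correlations(r1, r2):
    """Signal and noise correlations for a pair of neurons.
    r1, r2: spike-count arrays of shape (n_stimuli, n_trials)."""
    # Signal correlation: similarity of the trial-averaged tuning curves
    r_sc = np.corrcoef(r1.mean(axis=1), r2.mean(axis=1))[0, 1]
    # Noise correlation: correlation of trial-by-trial fluctuations,
    # pooled across stimuli after z-scoring within each stimulus
    z1 = (r1 - r1.mean(1, keepdims=True)) / (r1.std(1, keepdims=True) + 1e-12)
    z2 = (r2 - r2.mean(1, keepdims=True)) / (r2.std(1, keepdims=True) + 1e-12)
    r_nc = np.corrcoef(z1.ravel(), z2.ravel())[0, 1]
    return r_sc, r_nc

# Toy pair: similar monotonic azimuth tuning plus a shared trial-noise source
rng = np.random.default_rng(0)
azimuth = np.linspace(-90, 90, 13)
shared = rng.normal(0.0, 1.0, size=(13, 50))        # common trial-to-trial noise
r1 = 10 + 0.05 * azimuth[:, None] + shared + rng.normal(0.0, 0.5, (13, 50))
r2 = 12 + 0.04 * azimuth[:, None] + shared + rng.normal(0.0, 0.5, (13, 50))
r_sc, r_nc = signal_noise_correlations(r1, r2)
print(f"signal corr = {r_sc:.2f}, noise corr = {r_nc:.2f}")
```

    Shared input (the `shared` term) raises the noise correlation; removing it models the decorrelated AAr-like case while leaving the signal correlation intact.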

  5. Seismic Study of the Dynamics of the Solar Subsurface from SoHO Observations

    NASA Technical Reports Server (NTRS)

    Korzennik, Sylvain G.; Wagner, William J. (Technical Monitor)

    2001-01-01

    In collaboration with Dr. Baudin, we have developed and refined the new observational methodology for local helioseismology known as time-distance analysis. Global helioseismology studies the solar oscillations as a superposition of resonant modes, whose properties (mode frequencies) reflect the global structure of the sun (sound speed stratification, rotation rate, etc.). In contrast, local helioseismology treats the solar oscillations as wave packets whose propagation is affected by perturbations of the medium they sample. These local perturbations (sound speed or velocity flows) modify the propagation time, which in turn can be used as a diagnostic tool for a given region. From a data reduction perspective, the processing of solar dopplergrams that results in time-distance maps, i.e., propagation times as a function of distance between bounces at the surface, is radically different from the methodology used for global mode analysis. As a first step, we further developed the programs needed to carry out such analysis. We then applied them to the MDI data set and explored the trade-off between various averaging and filtering approaches - steps required to improve the signal-to-noise ratio of correlation maps - and the resulting stability and precision of the fitted propagation times. While excessive averaging (whether over space, propagation distance, or time) reduces the diagnostic potential of the method, insufficient averaging leads to unstable fits, or uncertainties so large as to hide the information we seek. In a second phase, we developed the analysis methodology required to infer local properties from perturbations in propagation time. Namely, we developed time-distance inversion techniques, with an emphasis on inferences of velocity flows from time anomalies. Note also that during the period covered by this grant, all the investigators on this proposal (i.e., Drs. Baudin, Eff-Darwich, Korzennik, and Noyes) took part in the organization of the SOHO 6/GONG 98 Workshop, Structure and Dynamics of the Interior of the Sun and Sunlike Stars, held June 1-4, 1998, at the Boston Park Plaza Hotel in Boston, Massachusetts, USA. It was very well attended, with more than 160 participants from 26 countries. The proceedings were published in two volumes as ESA SP-418, with Sessions I-III in Volume 1 and Sessions IV-VI in Volume 2 (1,000 pages in total). The complete contents are also included in digital form on a CD-ROM included with Volume 1, along with additional multimedia material that complements some of the contributions.
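
    The core of time-distance analysis is estimating a propagation time from the cross-correlation of oscillation signals observed at two surface locations. A toy version of that step, with a synthetic wave packet and an invented cadence and delay (not SoHO data):

```python
import numpy as np

def travel_time(sig_a, sig_b, dt):
    """Travel-time estimate from the lag of the cross-correlation peak
    between two surface signals sampled with time step dt [s]."""
    n = len(sig_a)
    cc = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
    lags = (np.arange(2 * n - 1) - (n - 1)) * dt
    return lags[np.argmax(cc)]

# Synthetic ~3 mHz wave packet seen at point A and, tau seconds later, at point B
dt, tau = 1.0, 120.0                        # 1 s cadence, 120 s travel time
t = np.arange(0, 4096) * dt
packet = np.exp(-((t - 1000) / 150) ** 2) * np.sin(2 * np.pi * 0.003 * t)
noise = np.random.default_rng(1).normal
sig_a = packet + 0.05 * noise(size=t.size)
sig_b = np.interp(t - tau, t, packet) + 0.05 * noise(size=t.size)
print(travel_time(sig_a, sig_b, dt))        # ≈ 120 s
```

    In the real analysis these correlations are averaged over annuli and filtered before the peak (or a wavelet fit to it) yields the travel time, which is then fed to the inversion.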

  6. Diversity of fish sound types in the Pearl River Estuary, China

    PubMed Central

    Wang, Zhi-Tao; Nowacek, Douglas P.; Akamatsu, Tomonari; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang

    2017-01-01

    Background Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to the big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger’s croaker (J. belangerii). Discussion Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed. PMID:29085746
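
    The IPPI statistic used to separate call types can be computed directly from pulse peak times. A small sketch follows; the pulse times are invented, and only the 9 ms vs. 10 ms class boundary follows the abstract.

```python
import numpy as np

def median_ippi(peak_times_ms):
    """Median inter-pulse peak interval (IPPI) of one call, in ms."""
    return float(np.median(np.diff(np.sort(peak_times_ms))))

def group_by_ippi(calls, bins=(8.5, 9.5, 10.5)):
    """Assign calls to coarse IPPI classes (e.g. the 9 ms vs 10 ms types)."""
    edges = np.asarray(bins)
    return [int(np.digitize(median_ippi(c), edges)) for c in calls]

# Toy pulse trains: one '9 ms type' call and one '10 ms type' call
call_a = np.cumsum([0, 9.1, 8.9, 9.0, 9.2])   # pulse peak times in ms
call_b = np.cumsum([0, 10.1, 9.9, 10.0])
print([median_ippi(c) for c in (call_a, call_b)])   # ≈ [9.05, 10.0]
print(group_by_ippi([call_a, call_b]))              # → [1, 2]
```

    Using the median interval makes the statistic robust to an occasional missed or spurious pulse detection within a train.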

  7. Diversity of fish sound types in the Pearl River Estuary, China.

    PubMed

    Wang, Zhi-Tao; Nowacek, Douglas P; Akamatsu, Tomonari; Wang, Ke-Xiong; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang; Wang, Ding

    2017-01-01

    Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to the big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed.

  8. The Importance of "What": Infants Use Featural Information to Index Events

    ERIC Educational Resources Information Center

    Kirkham, Natasha Z.; Richardson, Daniel C.; Wu, Rachel; Johnson, Scott P.

    2012-01-01

    Dynamic spatial indexing is the ability to encode, remember, and track the location of complex events. For example, in a previous study, 6-month-old infants were familiarized to a toy making a particular sound in a particular location, and later they fixated that empty location when they heard the sound presented alone ("Journal of Experimental…

  9. 77 FR 37600 - Safety Zone; Arctic Drilling and Support Vessels, Puget Sound, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... made local inquiries and chartered a vessel to observe the mobile offshore drilling unit (MODU) KULLUK... 1625-AA00 Safety Zone; Arctic Drilling and Support Vessels, Puget Sound, WA AGENCY: Coast Guard, DHS... nineteen vessels associated with Arctic drilling as well as their lead towing vessels while those vessels...

  10. 76 FR 42542 - Special Local Regulations for Marine Events, Bogue Sound; Morehead City, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    .... The likely combination of large numbers of recreational vessels, powerboats traveling at high speeds... Bogue Sound, adjacent to Morehead City from the southern tip of Sugar Loaf Island approximate position...'' N, longitude 076[deg]42'12'' W, thence westerly to the southern tip of Sugar Loaf Island the point...

  11. Complex auditory behaviour emerges from simple reactive steering

    NASA Astrophysics Data System (ADS)

    Hedwig, Berthold; Poulet, James F. A.

    2004-08-01

    The recognition and localization of sound signals is fundamental to acoustic communication. Complex neural mechanisms are thought to underlie the processing of species-specific sound patterns even in animals with simple auditory pathways. In female crickets, which orient towards the male's calling song, current models propose pattern recognition mechanisms based on the temporal structure of the song. Furthermore, it is thought that localization is achieved by comparing the output of the left and right recognition networks, which then directs the female to the pattern that most closely resembles the species-specific song. Here we show, using a highly sensitive method for measuring the movements of female crickets, that when walking and flying, each sound pulse of the communication signal releases a rapid steering response. Thus auditory orientation emerges from reactive motor responses to individual sound pulses. Although the reactive motor responses are not based on the song structure, a pattern recognition process may modulate the gain of the responses on a longer timescale. These findings are relevant to concepts of insect auditory behaviour and to the development of biologically inspired robots performing cricket-like auditory orientation.

  12. Integrating sensorimotor systems in a robot model of cricket behavior

    NASA Astrophysics Data System (ADS)

    Webb, Barbara H.; Harrison, Reid R.

    2000-10-01

    The mechanisms by which animals manage sensorimotor integration and coordination of different behaviors can be investigated in robot models. In previous work, the first author built a robot that localizes sound based on close modeling of the auditory and neural system of the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities such as an optomotor reflex and reactions to mechanical stimulation of the antennae and cerci. Behavioral evidence suggests some ways these behaviors may be integrated. We have tested the addition of an optomotor response, using an analog VLSI circuit developed by the second author, to the sound localizing behavior and have shown that it can, as in the cricket, improve the directness of the robot's path to sound. In particular, it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain functions in controlling complex combinations of behavior, with the hope that this will also suggest novel mechanisms for sensory integration on robots.

  13. The natural history of sound localization in mammals--a story of neuronal inhibition.

    PubMed

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  14. Depth dependence of wind-driven, broadband ambient noise in the Philippine Sea.

    PubMed

    Barclay, David R; Buckingham, Michael J

    2013-01-01

    In 2009, as part of PhilSea09, the instrument platform known as Deep Sound was deployed in the Philippine Sea, descending under gravity to a depth of 6000 m, where it released a drop weight, allowing buoyancy to return it to the surface. On the descent and ascent, at a speed of 0.6 m/s, Deep Sound continuously recorded broadband ambient noise on two vertically aligned hydrophones separated by 0.5 m. For frequencies between 1 and 10 kHz, essentially all the noise was found to be downward traveling, exhibiting a depth-independent directional density function having the simple form cos θ, where θ ≤ 90° is the polar angle measured from the zenith. The spatial coherence and cross-spectral density of the noise show no change in character in the vicinity of the critical depth, consistent with a local, wind-driven surface-source distribution. The coherence function accurately matches that predicted by a simple model of deep-water, wind-generated noise, provided that the theoretical coherence is evaluated using the local sound speed. A straightforward inversion procedure is introduced for recovering the sound speed profile from the cross-correlation function of the noise, returning sound speeds with a root-mean-square error relative to an independently measured profile of 8.2 m/s.
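
    For downward-travelling noise with a cos θ directional density, the vertical coherence between two hydrophones follows from integrating plane-wave arrivals over angle. A numerical sketch of that curve; the 0.5 m spacing matches the instrument described above, while the sound speed value is illustrative.

```python
import numpy as np

def vertical_coherence(f, d, c):
    """Normalized coherence of downward-travelling surface-generated noise with a
    cos(theta) directional density (theta from the zenith), between two vertically
    separated hydrophones a distance d [m] apart in water of sound speed c [m/s]."""
    k = 2 * np.pi * f / c
    u = np.linspace(0.0, 1.0, 2001)                 # u = cos(theta)
    integrand = u * np.exp(1j * k * d * u)
    # trapezoidal rule; the normalizing integral of u over [0, 1] equals 1/2
    num = np.sum((integrand[1:] + integrand[:-1]) * np.diff(u)) / 2.0
    return 2.0 * num

for f in (1e3, 5e3, 10e3):
    g = vertical_coherence(f, d=0.5, c=1530.0)
    print(f"{f / 1e3:.0f} kHz: |coherence| = {abs(g):.3f}")
```

    Measured coherence falling on this curve once the local sound speed is plugged in is what supports the wind-driven surface-source interpretation; inverting the fit for c at each depth is the profile-recovery idea the abstract describes.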

  15. The natural history of sound localization in mammals – a story of neuronal inhibition

    PubMed Central

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds. PMID:25324726

  16. The contribution of two ears to the perception of vertical angle in sagittal planes.

    PubMed

    Morimoto, M

    2001-04-01

    Because the input signals to the left and right ears are not identical, it is important to clarify the role of these signals in the perception of the vertical angle of a sound source at any position in the upper hemisphere. To obtain basic findings on upper hemisphere localization, this paper investigates the contribution of each pinna to the perception of vertical angle. Tests measured localization of the vertical angle in five planes parallel to the median plane. In the localization tests, the pinna cavities of one or both ears were occluded. Results showed that pinna cavities of both the near and far ears play a role in determining the perceived vertical angle of a sound source in any plane, including the median plane. As a sound source shifts laterally away from the median plane, the contribution of the near ear increases and, conversely, that of the far ear decreases. For sagittal planes at azimuths greater than 60 degrees from midline, the far ear no longer contributes measurably to the determination of vertical angle.

  17. Photoacoustic emission from Au nanoparticles arrayed on thermal insulation layer.

    PubMed

    Namura, Kyoko; Suzuki, Motofumi; Nakajima, Kaoru; Kimura, Kenji

    2013-04-08

    Efficient photoacoustic emission from Au nanoparticles on a porous SiO2 layer was investigated experimentally and theoretically. The Au nanoparticle arrays/porous SiO2/SiO2/Ag mirror sandwiches, namely, local plasmon resonators, were prepared by dynamic oblique deposition (DOD). Photoacoustic measurements were performed on the local plasmon resonators, whose optical absorption was varied from 0.03 (3%) to 0.95 by varying the thickness of the dielectric SiO2 layer. The sample with high absorption (0.95) emitted a sound that was eight times stronger than that emitted by graphite (0.94) and three times stronger than that emitted by the sample without the porous SiO2 layer (0.93). The contribution of the porous SiO2 layer to the efficient photoacoustic emission was analyzed by means of a numerical method based on a one-dimensional heat transfer model. The result suggested that the low thermal conductivity of the underlying porous layer reduces the amount of heat escaping from the substrate and contributes to the efficient photoacoustic emission from Au nanoparticle arrays. Because both the thermal conductivity and the spatial distribution of the heat generation can be controlled by DOD, the local plasmon resonators produced by DOD are suitable for the spatio-temporal modulation of the local temperature.

  18. Perception of touch quality in piano tones.

    PubMed

    Goebl, Werner; Bresin, Roberto; Fujinaga, Ichiro

    2014-11-01

    Both timbre and dynamics of isolated piano tones are determined exclusively by the speed with which the hammer hits the strings. This physical view has been challenged by pianists who emphasize the importance of the way the keyboard is touched. This article presents empirical evidence from two perception experiments showing that touch-dependent sound components make sounds with identical hammer velocities but produced with different touch forms clearly distinguishable. The first experiment focused on finger-key sounds: musicians could identify pressed and struck touches. When the finger-key sounds were removed from the sounds, the effect vanished, suggesting that these sounds were the primary identification cue. The second experiment looked at key-keyframe sounds that occur when the key reaches key-bottom. Key-bottom impact was identified from key motion measured by a computer-controlled piano. Musicians were able to discriminate between piano tones that contain a key-bottom sound from those that do not. However, this effect might be attributable to sounds associated with the mechanical components of the piano action. In addition to the demonstrated acoustical effects of different touch forms, visual and tactile modalities may play important roles during piano performance that influence the production and perception of musical expression on the piano.

  19. The influence of bat echolocation call duration and timing on auditory encoding of predator distance in noctuoid moths.

    PubMed

    Gordon, Shira D; Ter Hofstede, Hannah M

    2018-03-22

    Animals co-occur with multiple predators, making sensory systems that can encode information about diverse predators advantageous. Moths in the families Noctuidae and Erebidae have ears with two auditory receptor cells (A1 and A2) used to detect the echolocation calls of predatory bats. Bat communities contain species that vary in echolocation call duration, and the dynamic range of A1 is limited by the duration of sound, suggesting that A1 provides less information about bats with shorter echolocation calls. To test this hypothesis, we obtained intensity-response functions for both receptor cells across many moth species for sound pulse durations representing the range of echolocation call durations produced by bat species in northeastern North America. We found that the threshold and dynamic range of both cells varied with sound pulse duration. The number of A1 action potentials per sound pulse increases linearly with increasing amplitude for long-duration pulses, saturating near the A2 threshold. For short sound pulses, however, A1 saturates with only a few action potentials per pulse at amplitudes far lower than the A2 threshold for both single sound pulses and pulse sequences typical of searching or approaching bats. Neural adaptation was only evident in response to approaching bat sequences at high amplitudes, not search-phase sequences. These results show that, for short echolocation calls, a large range of sound levels cannot be coded by moth auditory receptor activity, resulting in no information about the distance of a bat, although differences in activity between ears might provide information about direction.

  20. Slow-wave metamaterial open panels for efficient reduction of low-frequency sound transmission

    NASA Astrophysics Data System (ADS)

    Yang, Jieun; Lee, Joong Seok; Lee, Hyeong Rae; Kang, Yeon June; Kim, Yoon Young

    2018-02-01

    Sound transmission reduction is typically governed by the mass law, requiring thicker panels to handle lower frequencies. When open holes must be inserted in panels for heat transfer, ventilation, or other purposes, the efficient reduction of sound transmission through holey panels becomes difficult, especially in the low-frequency ranges. Here, we propose slow-wave metamaterial open panels that can dramatically lower the working frequencies of sound transmission loss. Global resonances originating from slow waves realized by multiply inserted, elaborately designed subwavelength rigid partitions between two thin holey plates contribute to sound transmission reductions at lower frequencies. Owing to the dispersive characteristics of the present metamaterial panels, local resonances that trap sound in the partitions also occur at higher frequencies, exhibiting negative effective bulk moduli and zero effective velocities. As a result, low-frequency broadened sound transmission reduction is realized efficiently in the present metamaterial panels. The theoretical model of the proposed metamaterial open panels is derived using an effective medium approach and verified by numerical and experimental investigations.
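
    The mass law the abstract refers to can be stated compactly: normal-incidence transmission loss of a thin panel grows with the product of frequency and surface density, roughly 6 dB per doubling of either. A sketch with invented panel parameters:

```python
import numpy as np

def mass_law_tl(f, surface_density, rho_c=415.0):
    """Normal-incidence mass-law transmission loss [dB] of a thin limp panel.
    surface_density: mass per unit area [kg/m^2]; rho_c: impedance of air."""
    x = np.pi * f * surface_density / rho_c
    return 10.0 * np.log10(1.0 + x ** 2)

# Doubling frequency (or panel mass) adds ~6 dB once x >> 1
for f in (125.0, 250.0, 500.0):
    print(f"{f:.0f} Hz: {mass_law_tl(f, surface_density=10.0):.1f} dB")
```

    This is exactly why low frequencies demand heavy panels under the mass law, and why the metamaterial panel's resonance-based mechanism, which decouples transmission loss from added mass, is attractive.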

  1. Salient sounds activate human visual cortex automatically.

    PubMed

    McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A

    2013-05-22

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.

  2. A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)

    1996-01-01

    The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source, while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic-analogy-based calculation which regards the surface shear as an acoustically compact dipole source of sound.

  3. Drift and geodesic effects on the ion sound eigenmode in tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elfimov, A. G., E-mail: elfimov@if.usp.br; Smolyakov, A. I., E-mail: andrei.smolyakov@usask.ca; Melnikov, A. V.

    A kinetic treatment of geodesic acoustic modes (GAMs), taking into account ion parallel dynamics, drift and the second poloidal harmonic effects is presented. It is shown that first and second harmonics of the ion sound modes, which have respectively positive and negative radial dispersion, can be coupled due to the geodesic and drift effects. This coupling results in the drift geodesic ion sound eigenmode with a frequency below the standard GAM continuum frequency. Such eigenmode may be able to explain the split modes observed in some experiments.

  4. Sound produced by an oscillating arc in a high-pressure gas

    NASA Astrophysics Data System (ADS)

    Popov, Fedor K.; Shneider, Mikhail N.

    2017-08-01

    We suggest a simple theory to describe the sound generated by small periodic perturbations of a cylindrical arc in a dense gas. Theoretical analysis was done within the framework of the non-self-consistent channel arc model and supplemented with time-dependent gas dynamic equations. It is shown that an arc with power amplitude oscillations on the order of several percent is a source of sound whose intensity is comparable with external ultrasound sources used in experiments to increase the yield of nanoparticles in the high pressure arc systems for nanoparticle synthesis.

  5. The laminar and temporal structure of stimulus information in the phase of field potentials of auditory cortex.

    PubMed

    Szymanski, Francois D; Rabinowitz, Neil C; Magri, Cesare; Panzeri, Stefano; Schnupp, Jan W H

    2011-11-02

    Recent studies have shown that the phase of low-frequency local field potentials (LFPs) in sensory cortices carries a significant amount of information about complex naturalistic stimuli, yet the laminar circuit mechanisms and the aspects of stimulus dynamics responsible for generating this phase information remain essentially unknown. Here we investigated these issues by means of an information theoretic analysis of LFPs and current source densities (CSDs) recorded with laminar multi-electrode arrays in the primary auditory area of anesthetized rats during complex acoustic stimulation (music and broadband 1/f stimuli). We found that most LFP phase information originated from discrete "CSD events" consisting of granular-superficial layer dipoles of short duration and large amplitude, which we hypothesize to be triggered by transient thalamocortical activation. These CSD events occurred at rates of 2-4 Hz during both stimulation with complex sounds and silence. During stimulation with complex sounds, these events reliably reset the LFP phases at specific times during the stimulation history. These facts suggest that the informativeness of LFP phase in rat auditory cortex is the result of transient, large-amplitude events, of the "evoked" or "driving" type, reflecting strong depolarization in thalamo-recipient layers of cortex. Finally, the CSD events were characterized by a small number of discrete types of infragranular activation. The extent to which infragranular regions were activated was stimulus dependent. These patterns of infragranular activations may reflect a categorical evaluation of stimulus episodes by the local circuit to determine whether to pass on stimulus information through the output layers.

  6. Influence of airfoil thickness on convected gust interaction noise

    NASA Technical Reports Server (NTRS)

    Kerschen, E. J.; Tsai, C. T.

    1989-01-01

    The case of a symmetric airfoil at zero angle of attack is considered in order to determine the influence of airfoil thickness on sound generated by interaction with convected gusts. The analysis is based on a linearization of the Euler equations about the subsonic mean flow past the airfoil. Primary sound generation is found to occur in a local region surrounding the leading edge, with the size of the local region scaling on the gust wavelength. For a parabolic leading edge, moderate leading edge thickness is shown to decrease the noise level in the low Mach number limit.

  7. Impedance measurement of non-locally reactive samples and the influence of the assumption of local reaction.

    PubMed

    Brandão, Eric; Mareze, Paulo; Lenzi, Arcanjo; da Silva, Andrey R

    2013-05-01

    In this paper, the measurement of the absorption coefficient of non-locally reactive sample layers of thickness d1 backed by a rigid wall is investigated. The investigation is carried out with the aid of real and theoretical experiments, which assume a monopole sound source radiating sound above an infinite non-locally reactive layer. A literature search revealed that the number of papers devoted to this matter is rather limited in comparison to those which address the measurement of locally reactive samples. Furthermore, the majority of papers published describe the use of two or more microphones whereas this paper focuses on the measurement with the pressure-particle velocity sensor (PU technique). For these reasons, the assumption that the sample is locally reactive is initially explored, so that the associated measurement errors can be quantified. Measurements in the impedance tube and in a semi-anechoic room are presented to validate the theoretical experiment. For samples with a high non-local reaction behavior, for which the measurement errors tend to be high, two different algorithms are proposed in order to minimize the associated errors.

  8. Sound synchronization of bubble trains in a viscous fluid: experiment and modeling.

    PubMed

    Pereira, Felipe Augusto Cardoso; Baptista, Murilo da Silva; Sartorelli, José Carlos

    2014-10-01

    We investigate the dynamics of formation of air bubbles expelled from a nozzle immersed in a viscous fluid under the influence of sound waves. We obtained bifurcation diagrams by measuring the time between successive bubbles, with the air flow (Q) as a control parameter, for many values of the sound wave amplitude (A), the height (H) of the solution above the top of the nozzle, and three values of the sound frequency (fs). Our parameter spaces (Q, A) revealed a scenario for the onset of synchronization dominated by Arnold tongues (frequency locking), which gives way to chaotic phase synchronization for sufficiently large A. The experimental results were accurately reproduced by numerical simulations of a model combining a simple bubble growth model for the bubble train with a coupling term from the sound wave added to the equilibrium pressure.
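    Arnold-tongue frequency locking of this kind can be illustrated with the textbook sine-circle map, used here as a generic stand-in rather than the authors' bubble-growth model; the parameter values are illustrative.

    ```python
    import math

    def winding_number(omega, k, n_transient=1000, n_iter=20000):
        """Average phase advance per iteration of the sine-circle map
        theta -> theta + omega + (k / 2*pi) * sin(2*pi*theta)."""
        theta = 0.1
        for _ in range(n_transient):
            theta = theta + omega + (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        start = theta
        for _ in range(n_iter):
            theta = theta + omega + (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        return (theta - start) / n_iter

    # Without coupling the winding number tracks the drive; inside a tongue it
    # stays pinned at a rational ratio over a finite parameter interval.
    print(winding_number(0.3, 0.0))         # uncoupled: winding number equals omega
    print(winding_number(0.5, 1.0))         # ~0.5: 1:2 frequency locking
    ```

    Sweeping `omega` at fixed `k` and plotting the winding number reproduces the devil's-staircase structure whose plateaus are the Arnold tongues seen in the experimental (Q, A) parameter spaces.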

  9. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  10. Assessment and improvement of sound quality in cochlear implant users

    PubMed Central

    Caldwell, Meredith T.; Jiam, Nicole T.

    2017-01-01

    Objectives Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users, summarizing novel findings and crucial information about how CI users experience complex sounds. Data Sources Here we review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies for improving sound quality in the CI population. Results Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant-mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI-MUSHRA has been used as a means of evaluating sound quality. There exist a number of promising strategies to improve sound quality perception in the CI population, including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. Conclusions In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and designing therapies to mitigate poor sound quality perception in CI users. Level of Evidence NA PMID:28894831

  11. A Framework for Speech Activity Detection Using Adaptive Auditory Receptive Fields.

    PubMed

    Carlin, Michael A; Elhilali, Mounya

    2015-12-01

    One of the hallmarks of sound processing in the brain is the ability of the nervous system to adapt to changing behavioral demands and surrounding soundscapes. It can dynamically shift sensory and cognitive resources to focus on relevant sounds. Neurophysiological studies indicate that this ability is supported by adaptively retuning the shapes of cortical spectro-temporal receptive fields (STRFs) to enhance features of target sounds while suppressing those of task-irrelevant distractors. Because an important component of human communication is the ability of a listener to dynamically track speech in noisy environments, the solution obtained by auditory neurophysiology implies a useful adaptation strategy for speech activity detection (SAD). SAD is an important first step in a number of automated speech processing systems, and performance is often reduced in highly noisy environments. In this paper, we describe how task-driven adaptation is induced in an ensemble of neurophysiological STRFs, and show how speech-adapted STRFs reorient themselves to enhance spectro-temporal modulations of speech while suppressing those associated with a variety of nonspeech sounds. We then show how an adapted ensemble of STRFs can better detect speech in unseen noisy environments compared to an unadapted ensemble and a noise-robust baseline. Finally, we use a stimulus reconstruction task to demonstrate how the adapted STRF ensemble better captures the spectrotemporal modulations of attended speech in clean and noisy conditions. Our results suggest that a biologically plausible adaptation framework can be applied to speech processing systems to dynamically adapt feature representations for improving noise robustness.

  12. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  13. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  14. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  15. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  16. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  17. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  18. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  19. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  20. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  1. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  2. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  3. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  4. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound.

    PubMed

    Menze, Sebastian; Zitterbart, Daniel P; van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

    This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton.

  5. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound

    NASA Astrophysics Data System (ADS)

    Menze, Sebastian; Zitterbart, Daniel P.; van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

    This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton.

  6. Computational performance of Free Mesh Method applied to continuum mechanics problems

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a meshless method intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or, equivalently, a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions in fluid and solid mechanics obtained with FMM and with the Enriched Free Mesh Method (EFMM), a new version of FMM. Applications to fluid mechanics include compressible flow and the sounding mechanism in air-reed instruments; applications to solid mechanics include automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigenfrequency analysis of an engine block. PMID:21558753

  7. Development of Time-Distance Helioseismology Data Analysis Pipeline for SDO/HMI

    NASA Technical Reports Server (NTRS)

    DuVall, T. L., Jr.; Zhao, J.; Couvidat, S.; Parchevsky, K. V.; Beck, J.; Kosovichev, A. G.; Scherrer, P. H.

    2008-01-01

    The Helioseismic and Magnetic Imager (HMI) of SDO will provide uninterrupted 4k x 4k-pixel Doppler-shift images of the Sun with an approximately 40 sec cadence. These data will have a unique potential for advancing local helioseismic diagnostics of the Sun's interior structure and dynamics. They will help to understand the basic mechanisms of solar activity and to develop predictive capabilities for NASA's Living with a Star program. Because of the tremendous amount of data, the HMI team is developing a data analysis pipeline that will provide maps of subsurface flows and sound-speed distributions inferred from the Doppler data by the time-distance technique. We discuss the development plan, methods, and algorithms, and present the status of the pipeline, testing results, and examples of the data products.

  8. Listening to sound patterns as a dynamic activity

    NASA Astrophysics Data System (ADS)

    Jones, Mari Riess

    2003-04-01

    The act of listening to a series of sounds created by some natural event is described as involving an entrainment-like process that transpires in real time. Some aspects of this dynamic process are suggested. In particular, real-time attending is described in terms of an adaptive synchronization activity that permits a listener to target attending energy to forthcoming elements within an acoustical pattern (e.g., music, speech). Also described are several experiments that illustrate features of this approach as it applies to attending to music-like patterns. These involve listeners' responses to changes in the timing, the pitch structure, or both, of various acoustical sequences.
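    The adaptive synchronization described above can be sketched as a minimal oscillator whose internal period relaxes toward the observed inter-onset intervals via an error-correction update; the learning rate and the stimulus sequence below are hypothetical, not taken from the experiments.

    ```python
    def entrain(onsets, period0, eta=0.3):
        """Adapt an internal period toward the inter-onset intervals of a sequence.

        eta is a hypothetical adaptation rate in (0, 1]; each observed interval
        pulls the internal period a fraction eta of the way toward itself.
        """
        period = period0
        history = [period]
        for prev, nxt in zip(onsets, onsets[1:]):
            interval = nxt - prev
            period += eta * (interval - period)  # error-correction update
            history.append(period)
        return history

    # Hypothetical isochronous sequence at 600 ms; internal period starts at 500 ms
    onsets = [i * 0.6 for i in range(20)]
    trace = entrain(onsets, period0=0.5)
    print(round(trace[-1], 3))  # 0.6: the period has converged to the stimulus rate
    ```

    The residual error shrinks geometrically by a factor (1 - eta) per onset, which is one simple way to model how a listener comes to target attending energy at expected event times.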

  9. An ultrasound look at Korotkoff sounds: the role of pulse wave velocity and flow turbulence.

    PubMed

    Benmira, Amir; Perez-Martin, Antonia; Schuster, Iris; Veye, Florent; Triboulet, Jean; Berron, Nicolas; Aichoun, Isabelle; Coudray, Sarah; Laurent, Jérémy; Bereksi-Reguig, Fethi; Dauzat, Michel

    2017-04-01

    The aim of this study was to analyze the temporal relationships between pressure, flow, and Korotkoff sounds, providing clues for their comprehensive interpretation. When measuring blood pressure in a group of 23 volunteers, we used duplex Doppler ultrasonography to assess, under the arm-cuff, the brachial artery flow, diameter changes, and local pulse wave velocity (PWV), while recording Korotkoff sounds 10 cm downstream together with cuff pressure and ECG. The systolic (SBP) and diastolic (DBP) blood pressures were 118.8±17.7 and 65.4±10.4 mmHg, respectively (n=23). The brachial artery lumen started opening when cuff pressure decreased below the SBP and opened for an increasing length of time until cuff pressure reached the DBP, and then remained open but pulsatile. A high-energy low-frequency Doppler signal, starting a few milliseconds before flow, appeared and disappeared together with Korotkoff sounds at the SBP and DBP, respectively. Its median duration was 42.7 versus 41.1 ms for Korotkoff sounds (P=0.54; n=17). There was a 2.20±1.54 ms/mmHg decrement in the time delay between the ECG R-wave and the Korotkoff sounds during cuff deflation (n=18). The PWV was 10±4.48 m/s at null cuff pressure and showed a 0.62% decrement per mmHg when cuff pressure increased (n=13). Korotkoff sounds are associated with a high-energy low-frequency Doppler signal of identical duration, typically resulting from wall vibrations, followed by flow turbulence. Local arterial PWV decreases when cuff pressure increases. Exploiting these changes may help improve SBP assessment, which remains a challenge for oscillometric techniques.

  10. Indexical Relations and Sound Motion Pictures in L2 Curricula: The Dynamic Role of the Teacher

    ERIC Educational Resources Information Center

    Chen, Liang; Oller, John W., Jr.

    2005-01-01

    Well-chosen sound motion pictures (SMPs) can be excellent language teaching tools for presenting facts and providing comprehensible input in the target language. They give access to content and authentic surface forms in the target language as well as to the associations between them. SMPs also allow repeated exposures, but they are rarely…

  11. TABLE D - WMO AND LOCAL (NCEP) DESCRIPTORS AS WELL AS THOSE AWAITING

    Science.gov Websites

    [Fragment of a BUFR Table D category listing; recoverable entries: sequences common to satellite observations; 3 05 Meteorological or hydrological sequences; Vertical sounding sequences (conventional data); 3 10 Vertical sounding sequences (satellite data); 3 13 Sequences common to image data; 3 14 Reserved; 3 15 Oceanographic report]

  12. A Place for Sound: Raising Children's Awareness of Their Sonic Environment

    ERIC Educational Resources Information Center

    Deans, Jan; Brown, Robert; Dilkes, Helen

    2005-01-01

    This paper reports on an experiential project that involved a group of children aged four to five years and their teachers in an investigation of sounds in their local environment. It describes the key elements of an eight-week teaching and learning program that encouraged children to experience and re-experience their surrounding sound…

  13. Recovery monitoring of pigeon guillemot populations in Prince William Sound, Alaska. Restoration project 94173. Exxon Valdez oil spill restoration project final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes, D.L.

    1995-05-01

    The population of pigeon guillemots in Prince William Sound decreased from about 15,000 (1970s) to about 5,000 (present). Some local populations were affected by the T/V Exxon Valdez oil spill in 1989, but there is evidence suggesting the Sound-wide population was already declining. Predation was the cause of numerous nesting failures. Abandonment of eggs was also high. Changes in the relative proportions of benthic and schooling fish in the diet of guillemot chicks might represent a key change in the ecosystem that is affecting other species of marine birds and mammals in the Sound.

  14. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

    The paper presents a combined approach to noise mapping and visualization of industrial facilities' sound pollution using the forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for computing the extent of sanitary zones based on a ray-tracing algorithm. Sound pressure levels within the clustered zones are computed by two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
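    Thin-plate spline interpolation of scattered level measurements can be sketched in pure Python; the measurement points and dB levels are hypothetical, and the generic dense Gaussian-elimination solver is an illustrative choice, not the authors' implementation.

    ```python
    import math

    def tps_kernel(r):
        # Thin-plate spline radial basis phi(r) = r^2 * ln(r), with phi(0) = 0
        return 0.0 if r == 0.0 else r * r * math.log(r)

    def solve(a, b):
        """Gaussian elimination with partial pivoting on a dense linear system."""
        n = len(a)
        m = [row[:] + [b[i]] for i, row in enumerate(a)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(col + 1, n):
                f = m[r][col] / m[col][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
        return x

    def tps_fit(points, values):
        """Fit f(x,y) = a0 + a1*x + a2*y + sum_i w_i * phi(|p - p_i|)."""
        n = len(points)
        size = n + 3
        a = [[0.0] * size for _ in range(size)]
        b = [0.0] * size
        for i, (xi, yi) in enumerate(points):
            for j, (xj, yj) in enumerate(points):
                a[i][j] = tps_kernel(math.hypot(xi - xj, yi - yj))
            a[i][n:n + 3] = [1.0, xi, yi]          # affine part
            a[n][i], a[n + 1][i], a[n + 2][i] = 1.0, xi, yi  # orthogonality constraints
            b[i] = values[i]
        coeffs = solve(a, b)
        w, (a0, a1, a2) = coeffs[:n], coeffs[n:]
        def f(x, y):
            s = a0 + a1 * x + a2 * y
            for (xi, yi), wi in zip(points, w):
                s += wi * tps_kernel(math.hypot(x - xi, y - yi))
            return s
        return f

    # Hypothetical perimeter and interior measurement points (metres) and dB levels
    pts = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]
    lvls = [62.0, 65.0, 70.0, 63.0, 80.0]
    f = tps_fit(pts, lvls)
    print(round(f(5, 5), 1))  # 80.0: the interpolant reproduces each measured level
    ```

    Evaluating `f` on a regular grid inside a clustered zone yields the smooth noise map; in practice a library routine such as SciPy's `RBFInterpolator` with the thin-plate spline kernel would replace this hand-rolled solver.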

  15. Stability and nonlinear adjustment of vortices in Keplerian flows

    NASA Astrophysics Data System (ADS)

    Bodo, G.; Tevzadze, A.; Chagelishvili, G.; Mignone, A.; Rossi, P.; Ferrari, A.

    2007-11-01

    Aims: We investigate the stability, nonlinear development and equilibrium structure of vortices in a background shearing Keplerian flow. Methods: We make use of high-resolution global two-dimensional compressible hydrodynamic simulations. We introduce the concept of nonlinear adjustment to describe the transition of unbalanced vortical fields to a long-lived configuration. Results: We discuss the conditions under which vortical perturbations evolve into long-lived persistent structures and we describe the properties of these equilibrium vortices. Their properties appear to be independent of the initial conditions and depend only on the local disk parameters. In particular, we find that the ratio of the vortex size to the local disk scale height increases as the sound speed decreases, reaching values well above unity. The process of spiral density wave generation by the vortex, discussed in our previous work, appears to maintain its efficiency at nonlinear amplitudes, and we observe the formation of spiral shocks attached to the vortex. The shocks may have important consequences for the long-term vortex evolution and possibly for the global disk dynamics. Conclusions: Our study strengthens the arguments in favor of anticyclonic vortices as candidates for the promotion of planetary formation. Hydrodynamic shocks, an intrinsic property of persistent vortices in compressible Keplerian flows, are an important contributor to the overall balance: they support vortices against viscous dissipation by generating local potential vorticity and should determine the eventual fate of the persistent anticyclonic vortices. Numerical codes must be able to resolve these shock waves to describe the vortex dynamics correctly.

  16. Lightweight fiber optic microphones and accelerometers

    NASA Astrophysics Data System (ADS)

    Bucaro, J. A.; Lagakos, N.

    2001-06-01

    We have designed, fabricated, and tested two lightweight fiber optic sensors for the dynamic measurement of acoustic pressure and acceleration. These sensors, one a microphone and the other an accelerometer, are required for the active blanket sound control technology under development in our laboratory. The sensors were designed to meet specifications dictated by our active sound control application and to do so without exhibiting sensitivity to the high electrical voltages expected to be present. Furthermore, the devices had to be small (volumes less than 1.5 cm³) and light (less than 2 g). To achieve these design criteria, we modified and extended fiber optic reflection microphone and fiber microbend displacement device designs reported in the literature. After fabrication, the performance of each sensor type was determined from measurements made in a dynamic pressure calibrator and on a shaker table. The fiber optic microbend accelerometer, which weighs less than 1.8 g, was found to meet all performance goals including 1% linearity, 90 dB dynamic range, and a minimum detectable acceleration of 0.2 mg/√Hz. The fiber optic microphone, which weighs less than 1.3 g, also met all goals including 1% linearity, 85 dB dynamic range, and a minimum detectable acoustic pressure level of 0.016 Pa/√Hz. In addition to our specific use in active sound control, these sensors appear to have application in a variety of other areas.
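    Assuming the quoted dynamic ranges are defined as 20·log10(max/floor) relative to the stated noise floors (an assumption; the abstract does not give the definition), the implied maximum resolvable levels work out as follows.

    ```python
    def max_from_floor(floor, dynamic_range_db):
        """Largest signal a sensor resolves given its noise floor and dynamic range,
        assuming dynamic_range_db = 20 * log10(max / floor)."""
        return floor * 10 ** (dynamic_range_db / 20)

    # Accelerometer: 0.2 mg/sqrt(Hz) floor, 90 dB dynamic range
    print(round(max_from_floor(0.2e-3, 90), 2))   # ~6.32 g
    # Microphone: 0.016 Pa/sqrt(Hz) floor, 85 dB dynamic range
    print(round(max_from_floor(0.016, 85), 1))    # ~284.5 Pa
    ```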

  17. Diurnal Cycle of ITCZ Convection during the MJO Suppressed Phase in DYNAMO

    NASA Astrophysics Data System (ADS)

    Ciesielski, P. E.; Johnson, R. H.; Schubert, W. H.

    2017-12-01

    During the special observing period of the Dynamics of the MJO (DYNAMO) experiment, conducted over the Indian Ocean from 1 October to 30 November 2011, two sounding arrays - one north and one south of the equator, referred to here as the NSA and SSA, respectively - took 4-8 soundings/day. We augment this 3-h dataset with observations of radiation and rainfall to investigate the diurnal cycle of convection during the suppressed phase of the October MJO. During this 14-day period, when convection was suppressed over the NSA but prominent over the SSA, the circulation over the sounding arrays could be characterized as a local Hadley cell embedded within a monsoonal flow. Strong rising motion was present within the ITCZ, with compensating subsidence over the NSA. A prominent diurnal pulsing of this cell was observed, impacting conditions on both sides of the equator, with the cell running strongest in the early morning hours (05-08 LT) and notably weakening later in the day (17-20 LT). The reduction in evening subsidence over the NSA may have assisted the moistening of the low to mid-troposphere there during the pre-onset stage of the MJO. Apparent heating Q1 within the ITCZ exhibits a diurnal evolution from bottom-heavy profiles in the early morning to weaker top-heavy profiles during the day. Making use of the weak temperature gradient approximation, the results suggest that direct radiative effects played a dominant role in controlling diurnal variations of vertical motion and convection within the ITCZ, while non-radiative processes were more prominent over the NSA.

  18. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli, conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis, were presented sequentially. In the primary task, participants had to evaluate, in a space bisection task, the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task, they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition, participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional condition with respect to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
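    For independent Gaussian cues, the MLE model reduces to inverse-variance weighting: the fused estimate leans toward the more reliable cue, and the fused variance never exceeds either single-cue variance. A minimal sketch with hypothetical audio and tactile position estimates:

    ```python
    def mle_combine(x_a, var_a, x_t, var_t):
        """Optimal (maximum-likelihood) fusion of two independent Gaussian cues."""
        w_a = var_t / (var_a + var_t)   # inverse-variance weight for the audio cue
        w_t = var_a / (var_a + var_t)   # inverse-variance weight for the tactile cue
        x = w_a * x_a + w_t * x_t
        var = var_a * var_t / (var_a + var_t)
        return x, var

    # Hypothetical estimates (cm) and variances: tactile is four times more reliable
    x, var = mle_combine(x_a=2.0, var_a=4.0, x_t=0.0, var_t=1.0)
    print(x, var)  # 0.4 0.8 - the fused estimate sits near the reliable tactile cue
    ```

    The study's manipulation amounts to testing whether attending to sound lowers the effective auditory variance, which through these weights shifts the fused percept toward the auditory cue.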

  19. Hydrodynamic phonon drift and second sound in a (20,20) single-wall carbon nanotube

    NASA Astrophysics Data System (ADS)

    Lee, Sangyeop; Lindsay, Lucas

    2017-05-01

    Two hydrodynamic features of phonon transport, phonon drift and second sound, in a (20,20) single-wall carbon nanotube (SWCNT) are discussed using lattice dynamics calculations employing an optimized Tersoff potential for atomic interactions. We formally derive a formula for the contribution of the drift motion of phonons to the total heat flux at steady state. It is found that the drift motion of phonons carries more than 70% and 90% of the heat at 300 and 100 K, respectively, indicating that phonon flow can be reasonably approximated as hydrodynamic if the SWCNT is long enough to avoid ballistic phonon transport. The dispersion relation of second sound is derived from the Peierls-Boltzmann transport equation with Callaway's scattering model, quantifying the speed of second sound and its relaxation. The speed of second sound is around 4000 m/s in a (20,20) SWCNT, and second sound can propagate more than 10 µm in an isotopically pure (20,20) SWCNT for frequencies around 1 GHz at 100 K.

  20. Spatiotemporally Resolved Acoustics in a Photoelastic Granular Material

    NASA Astrophysics Data System (ADS)

    Owens, Eli; Daniels, Karen

    2010-03-01

    In granular materials, stress transmission is manifested as force chains that propagate through the material in a branching structure. We send acoustic pulses into a two-dimensional photoelastic granular material in which force chains are visible and investigate how the force chains influence the amplitude, speed, and dispersion of the sound waves. We observe particle-scale dynamics using two methods: movies, which provide spatiotemporally resolved measurements, and accelerometers embedded within individual grains. The movies allow us to visualize the sound's path through the material, revealing that the sound travels primarily along the force chains. Using the brightness of the photoelastic particles as a measure of force chain strength, we observe that the sound travels both faster and at higher amplitude along the strong force chains. An exception to this trend is seen in transient force chains that exist only while the sound is closing particle contacts. We also measure the frequency dependence of the amplitude, speed, and dispersion of the sound wave.

  1. Relative size of auditory pathways in symmetrically and asymmetrically eared owls.

    PubMed

    Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R

    2011-01-01

    Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most distinctive anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and other auditory nuclei, not directly involved in binaural comparisons, are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded the expansion of the hearing range, and that evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.

  2. Acoustic properties of a short-finned pilot whale head with insight into temperature influence on tissues' sound velocity.

    PubMed

    Dong, Jianchen; Song, Zhongchang; Li, Songhai; Gong, Zining; Li, Kuan; Zhang, Peijun; Zhang, Yu; Zhang, Meng

    2017-10-01

    Acoustic properties of odontocete head tissues, including sound velocity, density, and acoustic impedance, are important parameters for understanding the dynamics of echolocation. In this paper, the acoustic properties of head tissues from a freshly dead short-finned pilot whale (Globicephala macrorhynchus) were reconstructed using computed tomography (CT) and ultrasound. The animal's forehead soft tissues were cut into 188 ordered samples. The sound velocity, density, and acoustic impedance of each sample were either directly measured or calculated by formula, and Hounsfield Unit values (HUs) were obtained from CT scanning. From the relationships between HUs and sound velocity, HUs and density, and HUs and acoustic impedance, the distributions of acoustic properties in the head were reconstructed. The inner core of the melon, with low sound velocity and low density, is evidence for its potential function of sound focusing. The increase in acoustic impedance of forehead tissues from the inner core to the outer layer may be important for acoustic impedance matching between the outer-layer tissue and seawater. In addition, the temperature dependence of sound velocity in soft tissues was also examined. The results provide a guide for simulating the sound emission of the short-finned pilot whale.
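
    The "calculated by formula" step for acoustic impedance is the standard relation Z = ρc, and impedance matching can be quantified by the boundary reflection coefficient. A minimal sketch with textbook values for seawater and a hypothetical tissue, not measurements from this study:

```python
# Sketch of characteristic acoustic impedance, Z = rho * c, and the
# normal-incidence pressure reflection coefficient at a boundary, which
# shows why matching the outer-layer tissue to seawater matters. Values
# are textbook numbers, not measurements from the study.
def acoustic_impedance(density, sound_velocity):
    """Z = rho * c, in rayl (kg m^-2 s^-1)."""
    return density * sound_velocity

def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient for normal incidence."""
    return (z2 - z1) / (z2 + z1)

z_seawater = acoustic_impedance(1025.0, 1500.0)   # ~1.54e6 rayl
z_tissue = acoustic_impedance(1000.0, 1480.0)     # ~1.48e6 rayl
r = reflection_coefficient(z_seawater, z_tissue)  # near zero: good matching
```

    A near-zero reflection coefficient means most acoustic energy crosses the tissue-seawater boundary rather than being reflected back into the head.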

  3. How learning to abstract shapes neural sound representations

    PubMed Central

    Ley, Anke; Vroomen, Jean; Formisano, Elia

    2014-01-01

    The transformation of acoustic signals into abstract perceptual representations is the essence of the efficient and goal-directed neural processing of sounds in complex natural environments. While the human and animal auditory system is perfectly equipped to process the spectrotemporal sound features, adequate sound identification and categorization require neural sound representations that are invariant to irrelevant stimulus parameters. Crucially, what is relevant and irrelevant is not necessarily intrinsic to the physical stimulus structure but needs to be learned over time, often through integration of information from other senses. This review discusses the main principles underlying categorical sound perception with a special focus on the role of learning and neural plasticity. We examine the role of different neural structures along the auditory processing pathway in the formation of abstract sound representations with respect to hierarchical as well as dynamic and distributed processing models. Whereas most fMRI studies on categorical sound processing have employed speech sounds, the emphasis of the current review lies on the contribution of empirical studies using natural or artificial sounds that enable separating acoustic and perceptual processing levels and avoid interference with existing category representations. Finally, we discuss the opportunities offered by modern analysis techniques such as multivariate pattern analysis (MVPA) in studying categorical sound representations. With their increased sensitivity to distributed activation changes - even in the absence of changes in overall signal level - these analysis techniques provide a promising tool to reveal the neural underpinnings of perceptually invariant sound representations. PMID:24917783

  4. Mesoscale temperature and moisture fields from satellite infrared soundings

    NASA Technical Reports Server (NTRS)

    Hillger, D. W.; Vonderhaar, T. H.

    1976-01-01

    The combined use of radiosonde and satellite infrared soundings can provide mesoscale temperature and moisture fields at the time of satellite coverage. Radiance data from the vertical temperature profile radiometer (VTPR) on NOAA polar-orbiting satellites can be used along with a radiosonde sounding as an initial guess in an iterative retrieval algorithm. The mesoscale temperature and moisture fields at 9-10 a.m. local time, produced by retrieving temperature profiles at each VTPR scan spot (every 70 km), can be used for analysis or as a forecasting tool for subsequent weather events during the day. The better horizontal resolution of satellite soundings can thus be coupled with the radiosonde temperature and moisture profile, which serves both as a best initial guess profile and as a means of mitigating problems due to the limited vertical resolution of satellite soundings.
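
    An iterative retrieval of this kind can be illustrated with a toy Chahine-style relaxation, in which each channel's radiance is most sensitive to one atmospheric level and the first-guess profile is nudged by the observed/computed radiance ratio at that level. The weighting matrix and profiles below are illustrative assumptions, not VTPR data:

```python
import numpy as np

# Toy sketch of a Chahine-style relaxation retrieval. Each channel's
# weighting function (a row of W) peaks at one level; the guess profile
# is updated multiplicatively by the observed/computed radiance ratio.
W = np.array([[0.7, 0.2, 0.1],     # channel weighting functions; each row
              [0.2, 0.6, 0.2],     # sums to 1 and peaks at one level
              [0.1, 0.2, 0.7]])
t_true = np.array([220.0, 250.0, 280.0])   # "true" layer temperatures (K)
r_obs = W @ t_true                          # simulated observed radiances

t = np.full(3, 240.0)                       # radiosonde-style first guess
for _ in range(50):
    t = t * (r_obs / (W @ t))               # relax each level by its channel's ratio
```

    With diagonally dominant weighting functions the iteration converges to the profile that reproduces the observed radiances; a good radiosonde first guess shortens this convergence and anchors the vertical structure the satellite channels cannot resolve.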

  5. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    NASA Astrophysics Data System (ADS)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual production. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads and is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. Experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
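
    The sparseness-based time-frequency analysis can be illustrated with a stripped-down sketch: a single microphone pair instead of the tetrahedral array, with per-bin inter-channel phase differences converted to direction estimates. All signal parameters below are illustrative assumptions:

```python
import numpy as np

# Simplified sketch of sparsity-based time-frequency DOA estimation,
# reduced to one microphone pair (the paper uses a tetrahedral array).
fs, c, d = 16000, 343.0, 0.05               # sample rate, speed of sound, mic spacing (m)
t = np.arange(2048) / fs
delay = d * np.sin(np.deg2rad(30)) / c      # true source azimuth: 30 degrees
x1 = np.sin(2 * np.pi * 800 * t)            # mic 1
x2 = np.sin(2 * np.pi * 800 * (t - delay))  # mic 2 receives a delayed copy

# Short windowed FFTs; under source sparseness each time-frequency bin is
# dominated by one source, so its inter-channel phase yields a DOA estimate.
win = 512
angles = []
for start in range(0, len(t) - win, win):
    X1 = np.fft.rfft(x1[start:start + win] * np.hanning(win))
    X2 = np.fft.rfft(x2[start:start + win] * np.hanning(win))
    k = np.argmax(np.abs(X1))               # dominant bin in this frame
    f = k * fs / win
    phase_diff = np.angle(X1[k] * np.conj(X2[k]))
    tau = phase_diff / (2 * np.pi * f)      # inter-channel delay at this bin
    s = np.clip(tau * c / d, -1.0, 1.0)
    angles.append(np.degrees(np.arcsin(s)))

azimuth = np.median(angles)                 # pool per-bin estimates
```

    In the full framework such per-bin directions drive the selection of head-related filters for binaural resynthesis; pooling across bins (here a median) is what makes the sparseness assumption workable for speech-like signals.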

  6. ESCOMPTE 2001: multi-scale modelling and experimental validation

    NASA Astrophysics Data System (ADS)

    Cousin, F.; Tulet, P.; Rosset, R.

    2003-04-01

    ESCOMPTE was a European pollution field experiment conducted in the Marseille/Fos-Berre area in summer 2001. This Mediterranean area, with frequent pollution peaks, is characterized by complex topography subject to sea breeze regimes, together with intense localized urban, industrial, and biogenic sources. Four intensive observation periods (POIs) were selected, the most significant being POI2a/b, a 6-day pollution episode extensively documented for dynamics, radiation, gas phase, and aerosols, with surface measurements (including measurements at sea in the Gulf of Genoa and on board instrumented ferries between Marseille and Corsica), 7 aircraft, lidar, radar, and constant-level balloon soundings. The two-way nested mesoscale model MESO-NH-C (MNH-C), with horizontal resolutions of 9 and 3 km and high vertical resolution (up to 40 levels in the first 2 km), embedded in the global CTM MOCAGE, has been run for all POIs, with a focus here on POI2b (June 24-27, 2001), a typical high-pollution episode. The multi-scale modelling system MNH-C + MOCAGE makes it possible to simulate local and regional pollution originating from emission sources in the Marseille/Fos-Berre area as well as from remote sources (e.g., the Po Valley and/or western Mediterranean sources) and their associated transboundary pollution fluxes. Detailed dynamical, chemical, and aerosol simulations (both modal and sectional spectra with organics and inorganics) generally compare favorably with surface (continental and shipborne), lidar, and along-flight aircraft measurements.

  7. An observation of LHR noise with banded structure by the sounding rocket S29 Barium-GEOS

    NASA Technical Reports Server (NTRS)

    Koskinen, H. E. J.; Holmgren, G.; Kintner, P. M.

    1982-01-01

    Measurements of electrostatic, evidently locally produced noise near the lower hybrid frequency made by the sounding rocket S29 Barium-GEOS are reported. The noise is strongly related to the spin of the rocket and extends well below the local lower hybrid resonance frequency. Above an altitude of 300 km, the noise shows a banded structure roughly organized by the hydrogen cyclotron frequency. Simultaneously with the banded structure, a signal near the hydrogen cyclotron frequency is detected; this signal is also spin related. The characteristics of the noise suggest that it is locally generated by the rocket payload disturbing the plasma. If this interpretation is correct, we expect plasma wave experiments on other spacecraft, e.g., the space shuttle, to observe similar phenomena.

  8. Effect of Dual Sensory Loss on Auditory Localization: Implications for Intervention

    PubMed Central

    Simon, Helen J.; Levitt, Harry

    2007-01-01

    Our sensory systems are remarkable in several respects. They are extremely sensitive, they each perform more than one function, and they interact in a complementary way, thereby providing a high degree of redundancy that is particularly helpful should one or more sensory systems be impaired. In this article, the problem of dual hearing and vision loss is addressed. A brief description is provided of the use of auditory cues in vision loss, the use of visual cues in hearing loss, and the additional difficulties encountered when both sensory systems are impaired. A major focus of this article is the use of sound localization by normal-hearing, hearing-impaired, and blind individuals, and the special problem of sound localization in people with dual sensory loss. PMID:18003869

  9. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.

    PubMed

    Shiell, Martha M; Hausfeld, Lars; Formisano, Elia

    2018-05-23

    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from - that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams.
Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors 0270-6474/18/384977-08$15.00/0.
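
    The decoding approach can be sketched on synthetic data; a least-squares linear decoder stands in for the paper's linear support vector machine, and everything below is simulated rather than taken from the study:

```python
import numpy as np

# Toy sketch of decoding spatial conditions from multivoxel patterns:
# synthetic "activity patterns" for two spatial-separation conditions are
# decoded with a least-squares linear classifier standing in for the
# paper's linear SVM. All data are simulated.
rng = np.random.default_rng(0)
n_voxels, n_trials = 30, 50                 # trials per condition
pattern = rng.normal(size=n_voxels)         # condition-specific pattern
X = np.vstack([
    rng.normal(size=(n_trials, n_voxels)) + pattern,   # "wide separation"
    rng.normal(size=(n_trials, n_voxels)) - pattern,   # "narrow separation"
])
y = np.array([1.0] * n_trials + [-1.0] * n_trials)

# Train on half of the trials from each condition, test on the held-out half.
train = np.r_[0:25, 50:75]
test = np.r_[25:50, 75:100]
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
accuracy = np.mean(np.sign(X[test] @ w) == y[test])
```

    Above-chance held-out accuracy is the evidence that the condition is represented in the voxel pattern; the real analysis additionally controls for confounds such as overall signal level.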

  10. The effect of local circulations on the variation of atmospheric pollutants in the northwestern Taiwan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pay-Liam Lin; Hsin-Chih Lai

    1996-12-31

    A field experiment was held in the northwestern Taiwan as a part of a long-term research program for studying Taiwan`s local circulation. The program has been named as Taiwan Regional-circulation Experiment (TREX). The particular goal of this research is to investigate characteristics of boundary layer and local Circulation and their impact on the distribution and Variation of pollutants in the northwestern Taiwan during Mei-Yu season. It has been known for quite sometime that land-sea breeze is very pronounced under hot and humid conditions. Extensive network includes 11 pilot ballon stations, 3 acoustic sounding sites, and 14 surface stations in aboutmore » 20 km by 20 km area centered at National Central University, Chung-Li. In addition, there are ground temperature measurements at 3 sites, Integrated Sounding System (ISS) at NCU, air plane observation, tracer experiment with 10 collecting stations, 3 background upper-air sounding stations, 2 towers etc. NOAA and GMS satellite data, sea surface temperature radar, and precipitation data are collected. The local circulations such as land/sea breezes and mountain/valley winds, induced by thermal and topographical effects often play an important role in transporting, redistributing and transforming atmospheric pollutants. This study documents the effects of the development of local circulations and the accompanying evolution of boundary layer on the distribution and the variation of the atmospheric pollutants in the north western Taiwan during Mei-Yu season.« less

  11. A spatially collocated sound thrusts a flash into awareness

    PubMed Central

    Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta

    2015-01-01

    To interact effectively with the environment, the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals, pointing toward low-level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126

  12. Ultra-thin smart acoustic metasurface for low-frequency sound insulation

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Xiao, Yong; Wen, Jihong; Yu, Dianlong; Wen, Xisen

    2016-04-01

    Insulating low-frequency sound is a conventional challenge due to the high areal mass required by the mass law. In this letter, we propose a smart acoustic metasurface consisting of an ultra-thin aluminum foil bonded with piezoelectric resonators. Numerical and experimental results show that the metasurface can break the conventional mass law of sound insulation by 30 dB in the low-frequency regime (<1000 Hz), with an ultra-light areal mass density (<1.6 kg/m2) and an ultra-thin thickness (1000 times smaller than the operating wavelength). The underlying physical mechanism of this extraordinary sound insulation performance is attributed to the infinite effective dynamic mass density produced by the smart resonators. It is also demonstrated that the excellent sound insulation property can be conveniently tuned by simply adjusting the external circuits instead of modifying the structure of the metasurface.
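
    The mass law the metasurface is compared against can be sketched numerically. A minimal sketch of the normal-incidence form with the textbook impedance of air; these are standard constants, not parameters from the paper:

```python
import numpy as np

# Sketch of the normal-incidence acoustic mass law for a limp panel:
# TL = 20*log10(pi * f * m / (rho0 * c0)). RHO0_C0 is the textbook
# characteristic impedance of air; values are illustrative.
RHO0_C0 = 413.0  # kg m^-2 s^-1

def mass_law_tl(frequency_hz, areal_mass_kg_m2):
    """Normal-incidence transmission loss (dB) predicted by the mass law."""
    return 20 * np.log10(np.pi * frequency_hz * areal_mass_kg_m2 / RHO0_C0)

# At 500 Hz a 1.6 kg/m^2 panel gives only ~16 dB by the mass law, which is
# why beating the law by 30 dB at such a low areal mass is remarkable.
tl = mass_law_tl(500.0, 1.6)
```

    The formula also shows why low frequencies are hard: transmission loss falls by 6 dB per halving of frequency, so passive mass alone becomes impractical below a few hundred hertz.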

  13. Modulation of electrocortical brain activity by attention in individuals with and without tinnitus.

    PubMed

    Paul, Brandon T; Bruce, Ian C; Bosnyak, Daniel J; Thompson, David C; Roberts, Larry E

    2014-01-01

    Age and hearing-level matched tinnitus and control groups were presented with a 40 Hz AM sound using a carrier frequency of either 5 kHz (in the tinnitus frequency region of the tinnitus subjects) or 500 Hz (below this region). On attended blocks subjects pressed a button after each sound indicating whether a single 40 Hz AM pulse of variable increased amplitude (target, probability 0.67) had or had not occurred. On passive blocks subjects rested and ignored the sounds. The amplitude of the 40 Hz auditory steady-state response (ASSR) localizing to primary auditory cortex (A1) increased with attention in control groups probed at 500 Hz and 5 kHz and in the tinnitus group probed at 500 Hz, but not in the tinnitus group probed at 5 kHz (128 channel EEG). N1 amplitude (this response localizing to nonprimary cortex, A2) increased with attention at both sound frequencies in controls but at neither frequency in tinnitus. We suggest that tinnitus-related neural activity occurring in the 5 kHz but not the 500 Hz region of tonotopic A1 disrupted attentional modulation of the 5 kHz ASSR in tinnitus subjects, while tinnitus-related activity in A1 distributing nontonotopically in A2 impaired modulation of N1 at both sound frequencies.

  14. Modulation of Electrocortical Brain Activity by Attention in Individuals with and without Tinnitus

    PubMed Central

    Paul, Brandon T.; Bruce, Ian C.; Bosnyak, Daniel J.; Thompson, David C.; Roberts, Larry E.

    2014-01-01

    Age and hearing-level matched tinnitus and control groups were presented with a 40 Hz AM sound using a carrier frequency of either 5 kHz (in the tinnitus frequency region of the tinnitus subjects) or 500 Hz (below this region). On attended blocks subjects pressed a button after each sound indicating whether a single 40 Hz AM pulse of variable increased amplitude (target, probability 0.67) had or had not occurred. On passive blocks subjects rested and ignored the sounds. The amplitude of the 40 Hz auditory steady-state response (ASSR) localizing to primary auditory cortex (A1) increased with attention in control groups probed at 500 Hz and 5 kHz and in the tinnitus group probed at 500 Hz, but not in the tinnitus group probed at 5 kHz (128 channel EEG). N1 amplitude (this response localizing to nonprimary cortex, A2) increased with attention at both sound frequencies in controls but at neither frequency in tinnitus. We suggest that tinnitus-related neural activity occurring in the 5 kHz but not the 500 Hz region of tonotopic A1 disrupted attentional modulation of the 5 kHz ASSR in tinnitus subjects, while tinnitus-related activity in A1 distributing nontonotopically in A2 impaired modulation of N1 at both sound frequencies. PMID:25024849

  15. Visualization of the hot chocolate sound effect by spectrograms

    NASA Astrophysics Data System (ADS)

    Trávníček, Z.; Fedorchenko, A. I.; Pavelka, M.; Hrubý, J.

    2012-12-01

    We present an experimental and a theoretical analysis of the hot chocolate effect. The sound effect is evaluated using time-frequency signal processing, resulting in a quantitative visualization by spectrograms. This method allows us to capture the whole phenomenon, namely to quantify the dynamics of the rising pitch. A general form of the time dependence of the bubble volume fraction is proposed. We show that the effect arises from the nonlinear dependence of the speed of sound in the gas/liquid mixture on the bubble volume fraction, together with the nonlinear time dependence of that volume fraction.
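
    The nonlinear dependence of mixture sound speed on bubble volume fraction is conventionally modeled by Wood's equation. A minimal sketch with textbook air and water properties, not data from the paper:

```python
import numpy as np

# Sketch of Wood's equation for sound speed in a bubbly liquid: the mixture
# density is a volume average, the compressibility adds in parallel, and
# c = 1 / sqrt(rho_mix * compressibility_mix). Property values are textbook
# numbers for air and water.
def wood_sound_speed(phi, rho_g=1.2, c_g=340.0, rho_l=1000.0, c_l=1480.0):
    """Effective sound speed for gas volume fraction phi (0..1)."""
    rho_mix = phi * rho_g + (1 - phi) * rho_l
    compressibility = phi / (rho_g * c_g**2) + (1 - phi) / (rho_l * c_l**2)
    return 1.0 / np.sqrt(rho_mix * compressibility)

# Even a 1% gas fraction collapses the sound speed far below that of either
# pure phase, so the pitch rises sharply as the bubbles escape.
c_bubbly = wood_sound_speed(0.01)
```

    This collapse of the sound speed at small gas fractions is what makes the resonance frequency of the liquid column, and hence the perceived pitch, so sensitive to the dissolving bubbles.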

  16. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System

    PubMed Central

    Fischer, Brian J.; Peña, Jose L.

    2016-01-01

    Optimal use of sensory information requires that the brain estimate the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of a spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. SIGNIFICANCE STATEMENT In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. PMID:26888922

  17. Bubble dynamics in drinks

    NASA Astrophysics Data System (ADS)

    Broučková, Zuzana; Trávníček, Zdeněk; Šafařík, Pavel

    2014-03-01

    This study introduces two physical effects known from beverages: the effect of sinking bubbles and the hot chocolate sound effect. The paper presents two simple "kitchen" experiments. The first and second effects are demonstrated by means of flow visualization and microphone measurements, respectively. To quantify the second (acoustic) effect, sound records are analyzed using time-frequency signal processing, and the resulting power spectra and spectrograms are discussed.

  18. Beyond harmonic sounds in a simple model for birdsong production.

    PubMed

    Amador, Ana; Mindlin, Gabriel B

    2008-12-01

    In this work we present an analysis of the dynamics displayed by a simple two-dimensional model of labial oscillations during birdsong production. We show that the same model capable of generating tonal sounds can present, for a wide range of parameters, solutions that are spectrally rich. The role of physiologically meaningful parameters is discussed for each oscillatory regime, allowing us to interpret previously reported data.

  19. In situ analysis of measurements of auroral dynamics and structure

    NASA Astrophysics Data System (ADS)

    Mella, Meghan R.

    Two auroral sounding rocket case studies, one on the dayside and one on the nightside, explore aspects of poleward boundary aurora. The nightside sounding rocket, Cascades-2, was launched on 20 March 2009 at 11:04:00 UT from the Poker Flat Research Range in Alaska and flew across a series of poleward boundary intensifications (PBIs). Each of the crossings has fundamentally different in situ electron energy and pitch angle structure, and different ground optics images of visible aurora. The different particle distributions show signatures of both a quasistatic acceleration mechanism and an Alfvenic acceleration mechanism, as well as combinations of the two. The Cascades-2 experiment is the first sounding rocket observation of a PBI sequence, enabling a detailed investigation of the electron signatures and optical aurora associated with various stages of a PBI sequence as it evolves from an Alfvenic to a more quasistatic structure. The dayside sounding rocket, Scifer-2, was launched on 18 January 2008 at 7:30 UT from the Andoya Rocket Range in Andenes, Norway. It flew northward through the cleft region during a Poleward Moving Auroral Form (PMAF) event. Both the dayside and nightside flights observed dispersed, precipitating ions, each of a different nature. The dispersion signatures depend on, among other things, the MLT sector, altitude, source region, and precipitation mechanism. It is found that small changes in the shape of the dispersion have a large influence on whether the precipitation was localized or extended over a range of altitudes. It is also found that a single Maxwellian source will not replicate the data; rather, a sum of Maxwellians of different temperatures, similar to a kappa distribution, most closely reproduces the data. The various particle signatures are used to argue that both events have similar magnetospheric drivers, that is, Bursty Bulk Flows in the magnetotail.
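
    The connection between a sum of Maxwellians and a kappa distribution can be illustrated numerically: a kappa distribution has the enhanced high-energy tail such a sum mimics, and it reduces to a Maxwellian as kappa grows. The shapes below are unnormalized and purely illustrative:

```python
import numpy as np

# Sketch comparing distribution shapes relative to their v = 0 value.
# A kappa distribution, (1 + v^2/(kappa*theta^2))^-(kappa+1), tends to the
# Maxwellian exp(-(v/theta)^2) as kappa -> infinity, while small kappa
# gives a heavier high-velocity tail. Normalizations are omitted.
def kappa_shape(v, theta, kappa):
    """Unnormalized kappa distribution, f(v)/f(0)."""
    return (1 + v**2 / (kappa * theta**2)) ** (-(kappa + 1))

def maxwellian_shape(v, theta):
    """Unnormalized Maxwellian, f(v)/f(0)."""
    return np.exp(-((v / theta) ** 2))

v = np.linspace(0.0, 3.0, 50)
# Large kappa converges to the Maxwellian; kappa ~ 3 keeps a heavier tail.
gap = np.max(np.abs(kappa_shape(v, 1.0, 1e4) - maxwellian_shape(v, 1.0)))
```

    The heavier tail at low kappa is why a single Maxwellian underfits the observed precipitation while a sum of Maxwellians of different temperatures, which builds a similar tail, reproduces it.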

  20. Disruption of Spelling-to-Sound Correspondence Mapping during Single-Word Reading in Patients with Temporal Lobe Epilepsy

    ERIC Educational Resources Information Center

    Ledoux, Kerry; Gordon, Barry

    2011-01-01

    Processing and/or hemispheric differences in the neural bases of word recognition were examined in patients with long-standing, medically-intractable epilepsy localized to the left (N = 18) or right (N = 7) temporal lobe. Participants were asked to read words that varied in the frequency of their spelling-to-sound correspondences. For the right…

  1. 76 FR 36438 - Special Local Regulations; Safety and Security Zones; Recurring Events in Captain of the Port...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-22

    ... Events in Captain of the Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Notice of proposed... security zone in the Coast Guard Sector Long Island Sound Captain of the Port (COTP) Zone. When these..., call or e-mail Petty Officer Joseph Graun, Waterways Management Division at Coast Guard Sector Long...

  2. Communication Sciences Laboratory Quarterly Progress Report, Volume 9, Number 3: Research Programs of Some of the Newer Members of CSL.

    ERIC Educational Resources Information Center

    Feinstein, Stephen H.; And Others

    The research reported in these papers covers a variety of communication problems. The first paper covers research on sound navigation by the blind and involves echo perception research and relevant aspects of underwater sound localization. The second paper describes a research program in acoustic phonetics and concerns such related issues as…

  3. A Direct Experimental Evidence For the New Thermodynamic Boundary in the Supercritical State: Implications for Earth and Planetary Sciences.

    NASA Astrophysics Data System (ADS)

    Bolmatov, D.

    2015-12-01

    While scientists have a good theoretical understanding of the heat capacity of both solids and gases, a general theory of the heat capacity of liquids has remained elusive. Apart from being an awkward hole in our knowledge, heat capacity - the amount of heat needed to change a substance's temperature by a certain amount - is a relevant quantity that it would be nice to be able to predict. I will introduce a phonon-based approach to liquids and supercritical fluids to describe their thermodynamics in terms of sound propagation. I will show that the internal liquid energy has transverse sound propagation gaps and explain their evolution with temperature variations on the P-T diagram. I will explain how this theoretical framework covers the Debye theory of solids, the phonon theory of liquids, and thermodynamic limits such as the Dulong-Petit and ideal gas limits. As a result, experimental evidence for the new thermodynamic boundary in the supercritical state (the Frenkel line) on the P-T phase diagram will be demonstrated. Then, I will report on inelastic X-ray scattering experiments combined with molecular dynamics simulations on deeply supercritical Ar. The presented results unveil the mechanism and regimes of sound propagation in liquid matter and provide compelling evidence for the adiabatic-to-isothermal longitudinal sound propagation transition. As a result, a universal link will be demonstrated between the positive sound dispersion (PSD) phenomenon and the origin of transverse sound propagation, revealing the viscous-to-elastic crossover in compressed liquids. Both can be considered a universal fingerprint of the dynamic response of a liquid and can then be used for signal detection and analysis of the dynamic response in deep water and other fluids, which is relevant for describing the thermodynamics of gas giants. The consequences of this finding will be discussed, including a physically justified way to demarcate the interior and the atmosphere in gas giants such as Jupiter and Saturn.

  4. Observations of shallow water marine ambient sound: the low frequency underwater soundscape of the central Oregon coast.

    PubMed

    Haxel, Joseph H; Dziak, Robert P; Matsumoto, Haru

    2013-05-01

    A year-long experiment (March 2010 to April 2011) measuring ambient sound at a shallow water site (50 m) on the central Oregon coast near the Port of Newport provides important baseline information for comparisons with future measurements associated with resource development along the inner continental shelf of the Pacific Northwest. Ambient levels in frequencies affected by surf-generated noise (f < 100 Hz) characterize the site as a high-energy end member within the spectrum of shallow water coastal areas influenced by breaking waves. Dominant sound sources include locally generated ship noise (66% of total hours contain local ship noise), breaking surf, wind-induced wave breaking and baleen whale vocalizations. Additionally, an increase in spectral levels for frequencies ranging from 35 to 100 Hz is attributed to noise radiated from distant commercial shipping. One-second root mean square (rms) sound pressure level (SPLrms) estimates calculated across the 10-840 Hz frequency band for the entire year-long deployment show minimum, mean, and maximum values of 84 dB, 101 dB, and 152 dB re 1 μPa.
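
    The broadband SPLrms figures quoted above follow directly from the definition of rms sound pressure level. As a minimal illustration (not the authors' processing chain), the sketch below computes such a level from a calibrated pressure time series in pascals, using the 1 µPa underwater reference:

```python
import numpy as np

def spl_rms(pressure_pa, ref_pa=1e-6):
    """Broadband rms sound pressure level in dB re 1 uPa (underwater reference)."""
    rms = np.sqrt(np.mean(np.square(pressure_pa)))
    return 20.0 * np.log10(rms / ref_pa)

# Example: a 1 Pa-amplitude sinusoid has rms 1/sqrt(2) Pa, i.e. about 117 dB re 1 uPa
fs = 1000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 50 * t)
level = spl_rms(tone)  # ~117.0 dB
```

    In practice the raw hydrophone voltage must first be converted to pascals via the hydrophone sensitivity, and one-second windows are taken before the rms step.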

  5. Localizing nearby sound sources in a classroom: Binaural room impulse responses

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara G.; Kopco, Norbert; Martin, Tara J.

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.

  6. Localizing nearby sound sources in a classroom: binaural room impulse responses.

    PubMed

    Shinn-Cunningham, Barbara G; Kopco, Norbert; Martin, Tara J

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
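
    The observation that the direct-sound interaural time difference can be recovered from a BRIR pair can be illustrated with a cross-correlation estimate. The sketch below is a toy version (idealized single-impulse responses, not the authors' analysis); a real BRIR would first be windowed to isolate the direct sound:

```python
import numpy as np

def itd_from_brir(h_left, h_right, fs):
    """Estimate the interaural time difference (s) as the lag of the peak
    cross-correlation between left- and right-ear impulse responses.
    The returned lag is (left arrival - right arrival): negative values
    mean the sound reached the left ear first."""
    corr = np.correlate(h_left, h_right, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(h_right) - 1)
    return lag / fs

# Toy BRIRs: identical impulses, right-ear arrival delayed by 5 samples
fs = 44100
h_l = np.zeros(64); h_l[10] = 1.0
h_r = np.zeros(64); h_r[15] = 1.0
itd = itd_from_brir(h_l, h_r, fs)  # ~ -113 microseconds: left ear leads
```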

  7. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. Together, these two algorithms compose the robust sound source localization approach. More accurate steering vectors can also be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
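
    W2D-MUSIC itself is broadband, near-field, and calibrated against model errors, which is beyond a short sketch. For orientation only, here is the narrowband far-field MUSIC core that such algorithms build on, written for a uniform linear array with illustrative parameters:

```python
import numpy as np

def music_doa(snapshots, n_sources, d_over_lambda=0.5):
    """Narrowband far-field MUSIC for a uniform linear array.
    snapshots: (n_mics, n_snapshots) complex array outputs."""
    m, n = snapshots.shape
    R = snapshots @ snapshots.conj().T / n       # sample covariance
    _, vecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = vecs[:, : m - n_sources]                # noise subspace
    k = np.arange(m)
    grid = np.arange(-90.0, 90.5, 0.5)
    pseudo = np.empty_like(grid)
    for i, theta in enumerate(grid):
        a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.radians(theta)))
        # MUSIC pseudospectrum: large where the steering vector is
        # nearly orthogonal to the noise subspace
        pseudo[i] = 1.0 / (np.linalg.norm(En.conj().T @ a) ** 2)
    return grid[int(np.argmax(pseudo))]

# Simulate one source at +20 degrees on an 8-mic, half-wavelength-spaced array
rng = np.random.default_rng(0)
m, n, theta_true = 8, 200, 20.0
a = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(np.radians(theta_true)))
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
noise = 0.05 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
x = np.outer(a, s) + noise
theta_hat = music_doa(x, 1)  # close to 20.0
```

    The paper's contribution is precisely that the steering vector `a` above is replaced by one built from estimated (rather than nominal) gains, phases, and element positions.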

  8. Virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as potentiators of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in real time using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the two ear canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems which make use of transforms derived from normative manikins and simulations of room acoustics. Such an interface also requires careful psychophysical evaluation of listeners' ability to accurately localize the virtual or synthetic sound sources. From an applied standpoint, measurement of each potential listener's HRTFs may not be possible in practice. For experienced listeners, localization performance was only slightly degraded compared to a subject's inherent ability. 
Alternatively, even inexperienced listeners may be able to adapt to a particular set of HRTF's as long as they provide adequate cues for localization. In general, these data suggest that most listeners can obtain useful directional information from an auditory display without requiring the use of individually-tailored HRTF's.
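
    HRTF-based synthesis convolves each ear's signal with a measured, direction-dependent transfer function. Lacking measured HRTF data, the sketch below spatializes a mono signal from interaural time and level differences alone, using the Woodworth ITD approximation; the 6 dB broadband ILD is an assumed placeholder, far cruder than a true HRTF:

```python
import numpy as np

def spatialize(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Crude binaural rendering from ITD and ILD only (no spectral cues).
    A real virtual acoustic display convolves with measured HRTFs."""
    az = np.radians(azimuth_deg)
    itd = (head_radius / c) * (az + np.sin(az))      # Woodworth ITD model, seconds
    delay = int(round(abs(itd) * fs))
    ild_gain = 10 ** (-6.0 * abs(np.sin(az)) / 20)   # assumed ~6 dB max broadband ILD
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * ild_gain
    # positive azimuth = source to the right: right ear is the near ear
    return (far, near) if azimuth_deg > 0 else (near, far)

fs = 8000
sig = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
left, right = spatialize(sig, fs, 90.0)  # source hard right: left ear delayed, attenuated
```

    ITD/ILD alone place sources on the horizontal plane but cannot resolve front-back or elevation ambiguities, which is exactly why the individualized spectral cues in HRTFs matter.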

  9. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  10. NPSNET: Aural cues for virtual world immersion

    NASA Astrophysics Data System (ADS)

    Dahl, Leif A.

    1992-09-01

    NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 protocol communication line to a serial-to-Musical Instrument Digital Interface (MIDI) converter. The MIDI converter, in turn, relays the sound data to a sampler, an electronic recording and playback device. The sampler maps the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by involving multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues, and the enhancement they provide in the virtual simulation environment of NPSNET.

  11. Principal cells of the brainstem's interaural sound level detector are temporal differentiators rather than integrators.

    PubMed

    Franken, Tom P; Joris, Philip X; Smith, Philip H

    2018-06-14

    The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.

  12. Protons at the speed of sound: Predicting specific biological signaling from physics.

    PubMed

    Fichtl, Bernhard; Shrivastava, Shamit; Schneider, Matthias F

    2016-05-24

    Local changes in pH are known to significantly alter the state and activity of proteins and enzymes. pH variations induced by pulses propagating along soft interfaces (e.g. membranes) would therefore constitute an important pillar of a physical mechanism of biological signaling. Here we investigate the pH-induced physical perturbation of a lipid interface and the physicochemical nature of the subsequent acoustic propagation. Pulses are stimulated by local acidification and propagate - in analogy to sound - at velocities controlled by the interface's compressibility. With transient local pH changes of 0.6 units directly observed at the interface and velocities up to 1.4 m/s, this represents the fastest protonic communication observed to date. Furthermore, simultaneously propagating mechanical and electrical changes in the lipid interface are detected, exposing the thermodynamic nature of these pulses. Finally, these pulses are excitable only beyond a threshold for protonation, determined by the pKa of the lipid head groups. This protonation transition, together with the existence of an enzymatic pH optimum, offers a physical basis for intra- and intercellular signaling via sound waves at interfaces, where not molecular structure and mechano-enzymatic couplings, but interface thermodynamics and thermodynamic transitions are the origin of the observations.
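
    The statement that pulse velocity is controlled by the interface's compressibility follows the usual relation for compression waves, c = 1/sqrt(kappa * rho). As a one-line illustration (bulk-water values shown only for orientation; the interfacial compressibility measured in the study is vastly larger, which is why the pulses travel at only ~1 m/s):

```python
import math

def compression_wave_speed(compressibility, density):
    """Speed of a compression pulse: c = 1 / sqrt(kappa * rho),
    with kappa the (adiabatic) compressibility in 1/Pa and rho in kg/m^3."""
    return 1.0 / math.sqrt(compressibility * density)

# Bulk water for orientation: kappa ~ 4.6e-10 1/Pa, rho ~ 1000 kg/m^3
c_water = compression_wave_speed(4.6e-10, 1000.0)  # ~1474 m/s
```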

  13. Sensitivity analysis of Repast computational ecology models with R/Repast.

    PubMed

    Prestes García, Antonio; Rodríguez-Patón, Alfonso

    2016-12-01

    Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics as well as the nonlinearities arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights on the local mechanisms which generate some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be an integral part of every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples of how to perform global sensitivity analysis and how to interpret the results.
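
    R/Repast performs global sensitivity analysis in R against Repast Simphony models. As a language-neutral illustration of the underlying idea, the sketch below implements the simplest (local, one-at-a-time) variant, with a toy growth model standing in for a simulation; global methods such as Morris screening or Sobol indices instead sample the whole input space:

```python
import numpy as np

def oat_sensitivity(model, baseline, deltas):
    """One-at-a-time sensitivity: perturb each input from its baseline and
    record the normalized change in model output (a local measure only)."""
    y0 = model(baseline)
    effects = {}
    for name, delta in deltas.items():
        point = dict(baseline)
        point[name] += delta
        effects[name] = (model(point) - y0) / delta
    return effects

# Toy population model: output depends strongly on growth rate r,
# linearly on initial size n0, and weakly on a noise floor
def toy_model(p):
    return p["n0"] * np.exp(p["r"] * 10) + 0.1 * p["noise"]

base = {"n0": 100.0, "r": 0.05, "noise": 1.0}
eff = oat_sensitivity(toy_model, base, {"n0": 1.0, "r": 0.005, "noise": 0.5})
# eff ranks r far above n0 and noise, matching the analytic derivatives
```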

  14. New Observations of the Martian Ionosphere and its Variability - An Overview

    NASA Astrophysics Data System (ADS)

    Kopf, Andrew J.

    2017-04-01

    The Martian ionosphere is a highly variable system, owing to the strong influence of the Sun on its properties and behavior, particularly at higher altitudes. Recent measurements from the MAVEN and Mars Express spacecraft have allowed for a more complete understanding of the ionosphere and its variability from two different perspectives. Due to the low-altitude periapsis of its orbit, MAVEN has allowed for the first in-situ ionospheric studies since Viking, yielding detailed direct measurements of the ionosphere's structure, composition, and dynamics, as well as its rate of loss to space. Mars Express has over a decade of continuous ionospheric observation of the red planet, with the unique ability to remotely sound the ionosphere. These features enable Mars Express to make long-period ionospheric measurements on each orbit, at all local times and solar zenith angles. Utilized together, these two spacecraft form a powerful observational suite that has provided new insights into this dynamic environment. This talk will highlight several important recent results in the study of the Martian ionosphere and its variability.

  15. Assessment of Closed-Loop Control Using Multi-Mode Sensor Fusion For a High Reynolds Number Transonic Jet

    NASA Astrophysics Data System (ADS)

    Low, Kerwin; Elhadidi, Basman; Glauser, Mark

    2009-11-01

    Understanding the different noise production mechanisms caused by the free shear flows in a turbulent jet provides insight to improve ``intelligent'' feedback mechanisms to control the noise. Toward this effort, a control scheme is based on feedback of azimuthal pressure measurements in the near field of the jet at two streamwise locations. Previous studies suggested that noise reduction can be achieved by azimuthal actuators perturbing the shear layer at the jet lip. The closed-loop actuation is based on a low-dimensional Fourier representation of the hydrodynamic pressure measurements. Preliminary results show that control authority and a reduction in the overall sound pressure level were possible. These results provide motivation to move forward with the overall vision of developing innovative multi-mode sensing methods to improve state estimation and derive dynamical systems. It is envisioned that, by estimating velocity-field and dynamic pressure information at various locations in both the local and far-field regions, sensor fusion techniques can be utilized to attain greater overall control authority.

  16. A New Mechanism of Sound Generation in Songbirds

    NASA Astrophysics Data System (ADS)

    Goller, Franz; Larsen, Ole N.

    1997-12-01

    Our current understanding of the sound-generating mechanism in the songbird vocal organ, the syrinx, is based on indirect evidence and theoretical treatments. The classical avian model of sound production postulates that the medial tympaniform membranes (MTM) are the principal sound generators. We tested the role of the MTM in sound generation and studied the songbird syrinx more directly by filming it endoscopically. After we surgically incapacitated the MTM as a vibratory source, zebra finches and cardinals were not only able to vocalize, but sang nearly normal song. This result shows clearly that the MTM are not the principal sound source. The endoscopic images of the intact songbird syrinx during spontaneous and brain stimulation-induced vocalizations illustrate the dynamics of syringeal reconfiguration before phonation and suggest a different model for sound production. Phonation is initiated by rostrad movement and stretching of the syrinx. At the same time, the syrinx is closed through movement of two soft tissue masses, the medial and lateral labia, into the bronchial lumen. Sound production is always accompanied by vibratory motions of both labia, indicating that these vibrations may be the sound source. However, because of the low temporal resolution of the imaging system, the frequency and phase of labial vibrations could not be assessed in relation to that of the generated sound. Nevertheless, in contrast to the previous model, these observations show that both labia contribute to aperture control and strongly suggest that they play an important role as principal sound generators.

  17. The Sound Generated by Mid-Ocean Ridge Black Smoker Hydrothermal Vents

    PubMed Central

    Crone, Timothy J.; Wilcock, William S.D.; Barclay, Andrew H.; Parsons, Jeffrey D.

    2006-01-01

    Hydrothermal flow through seafloor black smoker vents is typically turbulent and vigorous, with speeds often exceeding 1 m/s. Although theory predicts that these flows will generate sound, the prevailing view has been that black smokers are essentially silent. Here we present the first unambiguous field recordings showing that these vents radiate significant acoustic energy. The sounds contain a broadband component and narrowband tones which are indicative of resonance. The amplitude of the broadband component shows tidal modulation which is indicative of discharge rate variations related to the mechanics of tidal loading. Vent sounds will provide researchers with new ways to study flow through sulfide structures, and may provide some local organisms with behavioral or navigational cues. PMID:17205137

  18. Analysis of sound absorption performance of an electroacoustic absorber using a vented enclosure

    NASA Astrophysics Data System (ADS)

    Cho, Youngeun; Wang, Semyung; Hyun, Jaeyub; Oh, Seungjae; Goo, Seongyeol

    2018-03-01

    The sound absorption performance of an electroacoustic absorber (EA) is primarily influenced by the dynamic characteristics of the loudspeaker that acts as the actuator of the EA system. The sound absorption performance of the EA therefore peaks at the resonance frequency of the loudspeaker and tends to degrade in the low- and high-frequency bands on either side of this resonance. In this study, to improve the sound absorption performance of the EA system in the low-frequency band of approximately 20-80 Hz, an EA system using a vented enclosure, a design previously used to enhance the radiated sound pressure of a loudspeaker in the low-frequency band, is proposed. To verify the usefulness of the proposed system, two acoustic environments are considered. In the first acoustic environment, the vent of the vented enclosure is connected to an external sound field distinct from the sound field coupled to the EA. In this case, the acoustic effect of the vented enclosure on the performance of the EA is analyzed through an analytical approach using dynamic equations and an impedance-based equivalent circuit, and then verified through numerical and experimental approaches. In the second acoustic environment, the vent is connected to the same external sound field as the EA. In this case, the effect of the vented enclosure on the EA is investigated analytically and verified numerically. The results confirm that the sound absorption performance of the proposed EA system differs between the two acoustic environments in the low-frequency band of approximately 20-80 Hz. Furthermore, several case studies examine how the performance of the EA varies with the critical design factors and the number of vents of the vented enclosure. 
    In the future, even if the proposed EA system using a vented enclosure is extended to the large arrays required for 3D sound field control, it is expected to be an attractive solution that can contribute to improved low-frequency noise reduction without incurring cost or system complexity problems.

  19. Simulated seal scarer sounds scare porpoises, but not seals: species-specific responses to 12 kHz deterrence sounds

    PubMed Central

    Hermannsen, Line; Beedholm, Kristian

    2017-01-01

    Acoustic harassment devices (AHDs) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating an AHD (Lofitech), but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted with porpoises and 13 with seals. Porpoises exhibited avoidance reactions out to ranges of 525 m from the sound source. In contrast, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for the application of AHDs in multi-species habitats, as the sound levels required to deter less sensitive species (seals) can lead to excessively large and unwanted deterrence ranges for more sensitive species (porpoises). PMID:28791155
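
    The reported source level and avoidance range can be tied together with a standard transmission-loss estimate. The sketch below assumes simple spherical spreading plus linear absorption; the 1 dB/km coefficient is illustrative, and real shallow-water propagation requires site-specific models:

```python
import math

def received_level(source_level_db, range_m, alpha_db_per_km=1.0):
    """Received level under spherical spreading plus linear absorption:
    RL = SL - 20*log10(r) - alpha*r.  A textbook simplification only."""
    return source_level_db - 20.0 * math.log10(range_m) - alpha_db_per_km * range_m / 1000.0

# Level of the 165 dB (pk-pk) scarer signal at the 525 m porpoise avoidance range
rl = received_level(165.0, 525.0)  # ~110 dB under these assumptions
```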

  20. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound

    PubMed Central

    van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

    This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton. PMID:28280544
