Sample records for first sound

  1. First and second sound in a strongly interacting Fermi gas

    NASA Astrophysics Data System (ADS)

    Taylor, E.; Hu, H.; Liu, X.-J.; Pitaevskii, L. P.; Griffin, A.; Stringari, S.

    2009-11-01

    Using a variational approach, we solve the equations of two-fluid hydrodynamics for a uniform and trapped Fermi gas at unitarity. In the uniform case, we find that the first and second sound modes are remarkably similar to those in superfluid helium, a consequence of strong interactions. In the presence of harmonic trapping, first and second sound become degenerate at certain temperatures. At these points, second sound hybridizes with first sound and is strongly coupled with density fluctuations, giving a promising way of observing second sound. We also discuss the possibility of exciting second sound by generating local heat perturbations.
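
    The two-fluid equations referred to above are Landau's linearized hydrodynamic equations; in standard textbook form (not transcribed from the paper) they read

```latex
\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!\mathbf{j} = 0, \qquad
\frac{\partial \mathbf{j}}{\partial t} = -\nabla P, \qquad
\frac{\partial s}{\partial t} + \nabla\!\cdot\!\left(s\,\mathbf{v}_n\right) = 0, \qquad
m\,\frac{\partial \mathbf{v}_s}{\partial t} = -\nabla \mu,
```

    where j = ρ_s v_s + ρ_n v_n is the total mass current, s the entropy density, and μ the chemical potential. Linearizing about equilibrium yields two sound branches: first sound (superfluid and normal components in phase) and second sound (out of phase).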

  2. Development of an ICT-Based Air Column Resonance Learning Media

    NASA Astrophysics Data System (ADS)

    Purjiyanta, Eka; Handayani, Langlang; Marwoto, Putut

    2016-08-01

    Commonly, the sound source used in the air column resonance experiment is a tuning fork, which has the disadvantage of suboptimal resonance results because the sound it produces steadily weakens. In this study we made tones with varying frequency using the Audacity software, which were then stored in a mobile phone as a source of sound. One advantage of this sound source is the stability of the resulting sound, enabling it to produce the same powerful sound repeatedly. The movement of water in a glass tube mounted on the resonance apparatus and the tone that came out of the mobile phone were recorded using a video camera. The sound resonances recorded were the first, second, and third resonance for each tone frequency mentioned. The resulting sound lasts longer, so it can be used for the first, second, third, and subsequent resonance experiments. This study aimed to (1) explain how to create tones that can substitute for the tuning fork sound used in air column resonance experiments, (2) illustrate the sound wave that occurred in the first, second, and third resonance in the experiment, and (3) determine the speed of sound in air. This study used an experimental method. It was concluded that: (1) substitute tones for a tuning fork sound can be made by using the Audacity software; (2) the form of the sound waves that occurred in the first, second, and third resonance in the air column can be drawn based on the video recording of the air column resonance; and (3) based on the experiment result, the speed of sound in air is 346.5 m/s, while based on the chart analysis with the Logger Pro software, it is 343.9 ± 0.3171 m/s.
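
    The speed-of-sound calculation in such an experiment follows from the closed-tube resonance condition; a minimal sketch, with illustrative measurements rather than the study's raw data:

```python
# Estimate the speed of sound from air-column resonance positions.
# For a tube closed at one end, resonances occur at L_n = (2n - 1) * wavelength / 4,
# so adjacent resonance lengths are spaced wavelength / 2 apart; using the
# spacing avoids needing the tube's end correction.

def speed_of_sound(freq_hz, resonance_lengths_m):
    """Speed of sound from the resonance lengths of one tone frequency."""
    spacings = [b - a for a, b in zip(resonance_lengths_m, resonance_lengths_m[1:])]
    wavelength = 2 * sum(spacings) / len(spacings)
    return freq_hz * wavelength

# Illustrative first, second, third resonance lengths (m) for a 500 Hz tone,
# chosen to be consistent with v ≈ 346.5 m/s as reported above.
lengths = [0.168, 0.5145, 0.861]
print(round(speed_of_sound(500, lengths), 1))  # → 346.5
```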

  3. Detection and generation of first sound in 4He by vibrating superleak transducers

    NASA Astrophysics Data System (ADS)

    Giordano, N.; Edison, N.

    1986-07-01

    Measurement is made of the first-sound generation and detection efficiencies of vibrating superleak transducers (VSTs) operated in superfluid 4He. This is accomplished by using an ordinary pressure transducer to generate first sound with a VST as the detector, and by using a pressure transducer to detect the sound generated by a VST. The results are in reasonably good agreement with the current theory of VST operation.

  4. Detection and generation of first sound in 4He by vibrating superleak transducers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giordano, N.; Edison, N.

    Measurement is made of the first-sound generation and detection efficiencies of vibrating superleak transducers (VSTs) operated in superfluid 4He. This is accomplished by using an ordinary pressure transducer to generate first sound with a VST as the detector, and by using a pressure transducer to detect the sound generated by a VST. The results are in reasonably good agreement with the current theory of VST operation.

  5. First and second sound in a two-dimensional harmonically trapped Bose gas across the Berezinskii–Kosterlitz–Thouless transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xia-Ji, E-mail: xiajiliu@swin.edu.au; Hu, Hui, E-mail: hhu@swin.edu.au

    2014-12-15

    We theoretically investigate first and second sound of a two-dimensional (2D) atomic Bose gas in harmonic traps by solving Landau’s two-fluid hydrodynamic equations. For an isotropic trap, we find that first and second sound modes become degenerate at certain temperatures and exhibit typical avoided crossings in mode frequencies. At these temperatures, second sound has significant density fluctuation due to its hybridization with first sound and has a divergent mode frequency towards the Berezinskii–Kosterlitz–Thouless (BKT) transition. For a highly anisotropic trap, we derive the simplified one-dimensional hydrodynamic equations and discuss the sound-wave propagation along the weakly confined direction. Due to the universal jump of the superfluid density inherent to the BKT transition, we show that the first sound velocity exhibits a kink across the transition. These predictions might be readily examined in current experimental setups for 2D dilute Bose gases with a sufficiently large number of atoms, where the finite-size effect due to harmonic traps is relatively weak.

  6. Method for chemically analyzing a solution by acoustic means

    DOEpatents

    Beller, Laurence S.

    1997-01-01

    A method and apparatus for determining the type of a solution and its concentration by acoustic means. Generally stated, the method consists of: immersing a sound focusing transducer within a first liquid-filled container; locating a separately contained specimen solution at a sound focal point within the first container; locating a sound probe adjacent to the specimen; generating a variable-intensity sound signal from the transducer; measuring fundamental and multiple harmonic sound signal amplitudes; and then comparing a plot of the specimen sound response with a known solution sound response, thereby determining the solution type and concentration.
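
    The "measuring fundamental and multiple harmonic sound signal amplitudes" step can be sketched as reading amplitudes off an FFT magnitude spectrum. This is an illustrative sketch of that kind of measurement, not the patented apparatus; the function name and parameters are hypothetical:

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f0, n_harmonics=5):
    """Amplitudes of the fundamental f0 and its first few harmonics,
    read from the single-sided FFT magnitude spectrum of `signal`
    sampled at `fs` Hz."""
    # scale so a unit-amplitude sinusoid on an exact bin reads as 1.0
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    amps = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))   # nearest bin to k * f0
        amps.append(spectrum[idx])
    return amps
```

    A specimen's response curve (fundamental and harmonic amplitudes versus drive level) could then be compared against curves for known solutions, as the comparison step describes.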

  7. Frequency Dynamics of the First Heart Sound

    NASA Astrophysics Data System (ADS)

    Wood, John Charles

    Cardiac auscultation is a fundamental clinical tool, but the origins and significance of the first heart sound remain controversial. Previous clinical studies have implicated resonant vibrations of both the myocardium and the valves. Accordingly, the goals of this thesis were threefold: (1) to characterize the frequency dynamics of the first heart sound, (2) to determine the relative contribution of the myocardium and the valves in determining first heart sound frequency, and (3) to develop new tools for non-stationary signal analysis. A resonant origin for first heart sound generation was tested through two studies in an open-chest canine preparation. Heart sounds were recorded using ultralight acceleration transducers cemented directly to the epicardium. The first heart sound was observed to be non-stationary and multicomponent. The most dominant feature was a powerful, rapidly-rising frequency component that preceded mitral valve closure. Two broadband components were observed; the first coincided with mitral valve closure while the second significantly preceded aortic valve opening. The spatial frequency of left ventricular vibrations was both high and non-stationary, which indicated that the left ventricle was not vibrating passively in response to intracardiac pressure fluctuations but suggested instead that the first heart sound is a propagating transient. In the second study, regional myocardial ischemia was induced by left coronary circumflex arterial occlusion. Acceleration transducers were placed on the ischemic and non-ischemic myocardium to determine whether ischemia produced local or global changes in first heart sound amplitude and frequency. The two zones exhibited disparate amplitude and frequency behavior, indicating that the first heart sound is not a resonant phenomenon.
To objectively quantify the presence and orientation of signal components, Radon transformation of the time-frequency plane was performed and found to have considerable potential for pattern classification. Radon transformation of the Wigner spectrum (the Radon-Wigner transform) was shown to be equivalent to dechirping in the time and frequency domains. Based upon this representation, an analogy between time-frequency estimation and computed tomography was drawn. Cohen's class of time-frequency representations was subsequently shown to result from simple changes in reconstruction filtering parameters. Time-varying filtering, adaptive time-frequency transformation and linear signal synthesis were also performed from the Radon-Wigner representation.
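
The Wigner spectrum underlying the Radon-Wigner transform can be computed directly from the signal. A minimal discrete Wigner-Ville sketch (a generic textbook formulation, not the thesis code):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal.
    Row n is the FFT, over the lag variable, of the instantaneous
    autocorrelation x[n + tau] * conj(x[n - tau]) using the symmetric
    lags available at time n."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)            # largest symmetric lag at time n
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.real(np.fft.fft(kernel))
    return W
```

Because the lag variable is doubled in the kernel, a pure exponential at bin f0 produces a ridge at FFT bin 2*f0; a rising-frequency component such as the one described above appears as a tilted ridge, which is exactly what a Radon projection of the plane picks out.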

  8. Gender Gaps in Letter-Sound Knowledge Persist Across the First School Year

    PubMed Central

    Sigmundsson, Hermundur; Dybfest Eriksen, Adrian; Ofteland, Greta S.; Haga, Monika

    2018-01-01

    Literacy is the cornerstone of a primary school education and enables the intellectual and social development of young children. Letter-sound knowledge has been identified as critical for developing proficiency in reading. This study explored the development of letter-sound knowledge in relation to gender during the first year of primary school. 485 Norwegian children aged 5–6 years completed assessment of letter-sound knowledge, i.e., uppercase letter-name, uppercase letter-sound, lowercase letter-name, and lowercase letter-sound. The children were tested in the beginning, middle, and end of their first school year. The results revealed a clear gender difference in all four variables in favor of the girls, which was relatively constant over time. Implications for understanding the role of gender and letter-sound knowledge for later reading performance are discussed. PMID:29662461

  9. Elementary Yoruba: Sound Drills and Greetings. Occasional Publication No. 18.

    ERIC Educational Resources Information Center

    Armstrong, Robert G.; Awujoola, Robert L.

    This introduction to elementary Yoruba is divided into two parts. The first section is on sound drills, and the second section concerns Yoruba greetings. The first part includes exercises to enable the student to master the Yoruba sound system. Emphasis is on pronunciation and recognition of the sounds and tones, but not memorization. A tape is…

  10. 37 CFR 201.22 - Advance notices of potential infringement of works consisting of sounds, images, or both.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... infringement of works consisting of sounds, images, or both. 201.22 Section 201.22 Patents, Trademarks, and... Advance notices of potential infringement of works consisting of sounds, images, or both. (a) Definitions... after the first fixation of a work consisting of sounds, images, or both that is first fixed...

  11. 37 CFR 201.22 - Advance notices of potential infringement of works consisting of sounds, images, or both.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... infringement of works consisting of sounds, images, or both. 201.22 Section 201.22 Patents, Trademarks, and... Advance notices of potential infringement of works consisting of sounds, images, or both. (a) Definitions... after the first fixation of a work consisting of sounds, images, or both that is first fixed...

  12. 37 CFR 201.22 - Advance notices of potential infringement of works consisting of sounds, images, or both.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... infringement of works consisting of sounds, images, or both. 201.22 Section 201.22 Patents, Trademarks, and... Advance notices of potential infringement of works consisting of sounds, images, or both. (a) Definitions... after the first fixation of a work consisting of sounds, images, or both that is first fixed...

  13. Method for chemically analyzing a solution by acoustic means

    DOEpatents

    Beller, L.S.

    1997-04-22

    A method and apparatus are disclosed for determining a type of solution and the concentration of that solution by acoustic means. Generally stated, the method consists of: immersing a sound focusing transducer within a first liquid filled container; locating a separately contained specimen solution at a sound focal point within the first container; locating a sound probe adjacent to the specimen, generating a variable intensity sound signal from the transducer; measuring fundamental and multiple harmonic sound signal amplitudes; and then comparing a plot of a specimen sound response with a known solution sound response, thereby determining the solution type and concentration. 10 figs.

  14. Encoding of sound envelope transients in the auditory cortex of juvenile rats and adult rats.

    PubMed

    Lu, Qi; Jiang, Cuiping; Zhang, Jiping

    2016-02-01

    Accurate neural processing of time-varying sound amplitude and spectral information is vital for species-specific communication. During postnatal development, cortical processing of sound frequency undergoes progressive refinement; however, it is not clear whether cortical processing of sound envelope transients also undergoes age-related changes. We determined the dependence of neural response strength and first-spike latency on sound rise-fall time across sound levels in the primary auditory cortex (A1) of juvenile (P20-P30) rats and adult (8-10 weeks) rats. A1 neurons were categorized as "all-pass", "short-pass", or "mixed" ("all-pass" at high sound levels to "short-pass" at lower sound levels) based on the normalized response strength vs. rise-fall time functions across sound levels. The proportions of A1 neurons within each of the three categories in juvenile rats were similar to those in adult rats. In general, with increasing rise-fall time, the average response strength decreased and the average first-spike latency increased in A1 neurons of both groups. At a given sound level and rise-fall time, the average normalized neural response strength did not differ significantly between the two age groups. However, the A1 neurons in juvenile rats showed greater absolute response strength and longer first-spike latency than those in adult rats. In addition, at a constant sound level, the average first-spike latency of juvenile A1 neurons was more sensitive to changes in rise-fall time. Our results demonstrate the dependence of the responses of rat A1 neurons on sound rise-fall time, and suggest that the response latency exhibits some age-related changes in cortical representation of sound envelope rise time. Copyright © 2015 Elsevier Ltd. All rights reserved.
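
    Stimuli with a controlled rise-fall time are conventionally pure tones gated by cosine-squared onset/offset ramps; a minimal sketch of such stimulus generation (parameter values here are illustrative, not taken from the study):

```python
import numpy as np

def gated_tone(freq, duration, rise_fall, fs=44100):
    """Pure tone of `duration` s at `freq` Hz with cosine-squared
    onset and offset ramps of `rise_fall` s each -- the standard way
    to vary envelope rise-fall time in auditory experiments."""
    t = np.arange(int(duration * fs)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    n_ramp = int(rise_fall * fs)
    ramp = np.sin(np.linspace(0, np.pi / 2, n_ramp)) ** 2  # 0 -> 1
    env = np.ones_like(tone)
    env[:n_ramp] = ramp          # onset ramp
    env[-n_ramp:] = ramp[::-1]   # offset ramp
    return tone * env
```

    Sweeping `rise_fall` while holding frequency and level constant isolates the envelope-transient manipulation the study describes.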

  15. [Application of the computer-based respiratory sound analysis system based on Mel-frequency cepstral coefficient and dynamic time warping in healthy children].

    PubMed

    Yan, W Y; Li, L; Yang, Y G; Lin, X L; Wu, J Z

    2016-08-01

    We designed a computer-based respiratory sound analysis system to identify normal pediatric lung sounds, and we sought to verify its validity. First, we downloaded the standard lung sounds from the network database (website: http://www.easyauscultation.com/lung-sounds-reference-guide) and recorded 3 samples of abnormal lung sounds (rhonchi, wheeze, and crackles) from three patients of the Department of Pediatrics, the First Affiliated Hospital of Xiamen University. We regarded these lung sounds as "reference lung sounds". The "test lung sounds" were recorded from 29 children from the Kindergarten of Xiamen University. We recorded lung sounds with a portable electronic stethoscope, and valid lung sounds were selected by manual identification. We introduced Mel-frequency cepstral coefficients (MFCC) to extract lung sound features and dynamic time warping (DTW) for signal classification. We had 39 standard lung sounds and recorded 58 test lung sounds. The system performed 58 lung sound recognitions, with 52 correct identifications and 6 errors, for an accuracy of 89.7%. Based on MFCC and DTW, our computer-based respiratory sound analysis system can effectively identify healthy lung sounds of children (accuracy of 89.7%), demonstrating the reliability of the lung sound analysis system.
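
    The DTW matching step can be sketched as the classic dynamic-programming recursion over per-frame feature vectors (MFCC extraction itself, usually done with a signal-processing library, is omitted; the function names and the nearest-reference classification rule are illustrative assumptions):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences,
    e.g. lists of per-frame MFCC vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            # extend the cheapest of the three allowed warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(test_seq, references):
    """Label of the reference sequence with the smallest DTW distance."""
    return min(references, key=lambda label: dtw_distance(test_seq, references[label]))
```

    A test recording is then labeled with whichever reference (normal, rhonchi, wheeze, crackles) it warps onto most cheaply.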

  16. Second sound and the density response function in uniform superfluid atomic gases

    NASA Astrophysics Data System (ADS)

    Hu, H.; Taylor, E.; Liu, X.-J.; Stringari, S.; Griffin, A.

    2010-04-01

    Recently, there has been renewed interest in second sound in superfluid Bose and Fermi gases. By using two-fluid hydrodynamic theory, we review the density response χnn(q, ω) of these systems as a tool to identify second sound in experiments based on density probes. Our work generalizes the well-known studies of the dynamic structure factor S(q, ω) in superfluid 4He in the critical region. We show that, in the unitary limit of uniform superfluid Fermi gases, the relative weight of second versus first sound in the compressibility sum rule is given by the Landau-Placzek ratio ε_LP ≡ (c̄_p − c̄_v)/c̄_v for all temperatures below Tc. In contrast to superfluid 4He, ε_LP is much larger in strongly interacting Fermi gases, being already of order unity for T ≈ 0.8Tc, thereby providing promising opportunities to excite second sound with density probes. The relative weights of first and second sound are quite different in S(q, ω) (measured in pulse propagation studies) as compared with Imχnn(q, ω) (measured in two-photon Bragg scattering). We show that first and second sound in S(q, ω) in a strongly interacting Bose-condensed gas are similar to those in a Fermi gas at unitarity. However, in a weakly interacting Bose gas, first and second sound are mainly uncoupled oscillations of the thermal cloud and condensate, respectively, and second sound has most of the spectral weight in S(q, ω). We also discuss the behaviour of the superfluid and normal fluid velocity fields involved in first and second sound.
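
    For reference, the standard two-fluid sound speeds alongside the Landau-Placzek ratio quoted above read (textbook expressions, not transcribed from the paper):

```latex
c_1^2 \simeq \left(\frac{\partial P}{\partial \rho}\right)_{\bar s}, \qquad
c_2^2 \simeq \frac{\rho_s}{\rho_n}\,\frac{T \bar s^{\,2}}{\bar c_v}, \qquad
\epsilon_{\mathrm{LP}} \equiv \frac{\bar c_p - \bar c_v}{\bar c_v},
```

    with s̄, c̄_v, c̄_p the entropy and specific heats per unit mass. ε_LP controls how much of the compressibility sum rule second sound carries, which is why a large ε_LP makes second sound visible to density probes.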

  17. The Advanced Technology Microwave Sounder (ATMS): A New Operational Sensor Series

    NASA Technical Reports Server (NTRS)

    Kim, Edward; Lyu, Cheng-H Joseph; Leslie, R. Vince; Baker, Neal; Mo, Tsan; Sun, Ninghai; Bi, Li; Anderson, Mike; Landrum, Mike; DeAmici, Giovanni

    2012-01-01

    ATMS is a new satellite microwave sounding sensor designed to provide operational weather agencies with atmospheric temperature and moisture profile information for global weather forecasting and climate applications. ATMS will continue the microwave sounding capabilities first provided by its predecessors, the Microwave Sounding Unit (MSU) and the Advanced Microwave Sounding Unit (AMSU). The first ATMS was launched October 28, 2011 on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite. Microwave soundings are by themselves the highest-impact input data used by Numerical Weather Prediction (NWP) models, and ATMS, when combined with the Cross-track Infrared Sounder (CrIS), forms the Cross-track Infrared and Microwave Sounding Suite (CrIMSS). The microwave soundings help meet NWP sounding requirements under cloudy-sky conditions and provide key profile information near the surface.

  18. Method and apparatus for inspecting conduits

    DOEpatents

    Spisak, Michael J.; Nance, Roy A.

    1997-01-01

    An apparatus and method for ultrasonic inspection of a conduit are provided. The method involves directing a first ultrasonic pulse at a particular area of the conduit at a first angle, receiving the reflected sound from the first ultrasonic pulse, substantially simultaneously or subsequently in very close time proximity directing a second ultrasonic pulse at said area of the conduit from a substantially different angle than said first angle, receiving the reflected sound from the second ultrasonic pulse, and comparing the received sounds to determine if there is a defect in that area of the conduit. The apparatus of the invention is suitable for carrying out the above-described method. The method and apparatus of the present invention provide the ability to distinguish between sounds reflected by defects in a conduit and sounds reflected by harmless deposits associated with the conduit.

  19. A novel method for pediatric heart sound segmentation without using the ECG.

    PubMed

    Sepehri, Amir A; Gharehbaghi, Arash; Dutoit, Thierry; Kocharian, Armen; Kiani, A

    2010-07-01

    In this paper, we propose a novel method for pediatric heart sound segmentation by paying special attention to the physiological effects of respiration on pediatric heart sounds. The segmentation is accomplished in three steps. First, the envelope of a heart sound signal is obtained with emphasis on the first heart sound (S(1)) and the second heart sound (S(2)) by using short-time spectral energy and autoregressive (AR) parameters of the signal. Then, the basic heart sounds are extracted taking into account the repetitive and spectral characteristics of S(1) and S(2) sounds by using a Multi-Layer Perceptron (MLP) neural network classifier. In the final step, by considering the diastolic and systolic interval variations due to the effect of a child's respiration, a complete and precise heart sound end-pointing and segmentation is achieved. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
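
    The first step, envelope extraction emphasizing S1 and S2, is commonly done with a frame-wise Shannon energy. This is a generic sketch in that spirit (the paper combines short-time spectral energy with AR parameters; only the energy part is shown here, and the frame sizes are illustrative):

```python
import numpy as np

def shannon_energy_envelope(x, frame=200, hop=100):
    """Frame-wise Shannon energy of a signal, a common heart-sound
    envelope estimator: it emphasizes medium-intensity components
    such as S1 and S2 over both low-level noise and sharp spikes."""
    x = np.asarray(x, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)          # normalize to [-1, 1]
    env = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = x[start:start + frame] ** 2
        # Shannon energy: -mean(E * log E), small epsilon avoids log(0)
        env.append(-np.mean(seg * np.log(seg + 1e-12)))
    return np.array(env)
```

    Peaks of this envelope give candidate S1/S2 locations, which a classifier can then label using the timing and spectral cues described above.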

  20. Continuous robust sound event classification using time-frequency features and deep learning

    PubMed Central

    Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478

  1. Continuous robust sound event classification using time-frequency features and deep learning.

    PubMed

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
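
    An energy-based event detection front end of the kind mentioned above can be sketched as simple RMS thresholding with segment merging. This is a generic sketch in the spirit of such a front end, not the paper's algorithm; the frame length and threshold are illustrative:

```python
import numpy as np

def detect_events(x, fs, frame_ms=20, threshold_db=-30):
    """Mark frames whose RMS energy exceeds `threshold_db` relative to
    the loudest frame, then merge runs of active frames into
    (start_s, end_s) segments."""
    frame = max(1, int(fs * frame_ms / 1000))
    n = len(x) // frame
    rms = np.array([np.sqrt(np.mean(x[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n)])
    ref = rms.max() + 1e-12
    active = 20 * np.log10(rms / ref + 1e-12) > threshold_db
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                                  # event onset
        elif not a and start is not None:
            events.append((start * frame / fs, i * frame / fs))
            start = None
    if start is not None:                              # event runs to the end
        events.append((start * frame / fs, n * frame / fs))
    return events
```

    Each detected segment is then handed to an isolated-sound classifier, which is how the paper adapts isolated classifiers to continuous data.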

  2. Effect of additional warning sounds on pedestrians' detection of electric vehicles: An ecological approach.

    PubMed

    Fleury, Sylvain; Jamet, Éric; Roussarie, Vincent; Bosc, Laure; Chamard, Jean-Christophe

    2016-12-01

    Virtually silent electric vehicles (EVs) may pose a risk for pedestrians. This paper describes two studies that were conducted to assess the influence of different types of external sounds on EV detectability. In the first study, blindfolded participants had to detect an approaching EV with either no warning sounds at all or one of three types of sound we tested. In the second study, designed to replicate the results of the first one in an ecological setting, the EV was driven along a road and the experimenters counted the number of people who turned their heads in its direction. Results of the first study showed that adding external sounds improves EV detection, and modulating the frequency and increasing the pitch of these sounds make them more effective. This improvement was confirmed in the ecological context. Consequently, pitch variation and frequency modulation should both be taken into account in future AVAS design. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Using Incremental Rehearsal to Teach Letter Sounds to English Language Learners

    ERIC Educational Resources Information Center

    Rahn, Naomi L.; Wilson, Jennifer; Egan, Andrea; Brandes, Dana; Kunkel, Amy; Peterson, Meredith; McComas, Jennifer

    2015-01-01

    This study examined the effects of incremental rehearsal (IR) on letter sound expression for one kindergarten and one first grade English learner who were below district benchmark for letter sound fluency. A single-subject multiple-baseline design across sets of unknown letter sounds was used to evaluate the effect of IR on letter-sound expression…

  4. Acoustic development of a neonatal beluga whale (Delphinapterus leucas) at the John G. Shedd Aquarium

    NASA Astrophysics Data System (ADS)

    Carneiro, Brooke Elizabeth

    Beluga whales (Delphinapterus leucas) were one of the first marine mammals to be kept in captivity and, currently, nine zoological institutions in North America house belugas (Robeck et al., 2005). Despite their accessibility within these facilities, very little research has been done on the beluga whale that is related to their acoustic development or communication sounds. A male beluga calf named "Nunavik" was born at the John G. Shedd Aquarium on 14 December 2009, which provided an opportunity to examine the ontogeny of underwater sounds by a neonatal beluga from birth throughout the first year of life. The objectives of the study were to: 1) collect underwater sound recordings of the beluga pod prior to the birth of the calf, 2) collect underwater sound recordings of the neonate during the first year of life, 3) document when and what types of sounds were produced by the calf, 4) compare sounds produced by the calf during agonistic and non-agonistic interactions, and 5) compare the acoustic features of sounds produced by the calf to sounds from the mother, a male beluga calf born at the Vancouver Aquarium in 2002, and other belugas at the John G. Shedd Aquarium. The first recordings of the beluga calf took place six hours following the birth for a two-hour period. Subsequent recordings were made daily for one hour for the first two weeks of the calf's life and then twice per week until the calf was about six months of age. Later recordings were done less frequently, about once every other week, with no recordings during a 2-month period due to equipment failure. In total, sixty hours of underwater recordings of the belugas were collected from 26 September 2009 to 27 December 2010.
Sounds were audibly and visually examined using Raven Pro version 1.4, a real-time sound analysis software application (Cornell Laboratory of Ornithology), and categorized into three categories (tones, noise, and noise with tones) based on the characteristics of underwater sounds from the same adult beluga whales recorded by Melissa Kelly (2009) at the John G. Shedd Aquarium in 2008. The first recorded sound produced by the calf was a low frequency, pulsed signal which was extremely weak in amplitude and almost seven times lower in frequency compared to similar sounds from a male beluga calf born at the Vancouver Aquarium in 2002. As he grew, the calf steadily increased the complexity and adult-like characteristics in all sound types. He decreased the peak frequency of tones, but increased the peak frequency of noise and noise with tones sounds. Using analysis of variance, sounds produced during the younger age class (0 to 6 months) were significantly longer in duration than during the older age class (6 to 12 months). There was no statistical difference in peak frequency of tones or tones with noise between the two age groups. The peak frequencies of both tones and tones with noise were significantly higher during agonistic contexts compared to non-agonistic contexts. Finally, the age at which the calf was first recorded using echolocation was at about five months. Future studies on the underwater acoustic behavior of beluga whale calves are necessary to identify developmental milestones in their repertoire.

  5. Sexual dimorphism of sonic apparatus and extreme intersexual variation of sounds in Ophidion rochei (Ophidiidae): first evidence of a tight relationship between morphology and sound characteristics in Ophidiidae

    PubMed Central

    2012-01-01

    Background: Many Ophidiidae are active in dark environments and display complex sonic apparatus morphologies. However, sound recordings are scarce and little is known about acoustic communication in this family. This paper focuses on Ophidion rochei which is known to display an important sexual dimorphism in swimbladder and anterior skeleton. The aims of this study were to compare the sound producing morphology, and the resulting sounds in juveniles, females and males of O. rochei. Results: Males, females, and juveniles possessed different morphotypes. Females and juveniles contrasted with males because they possessed dramatic differences in morphology of their sonic muscles, swimbladder, supraoccipital crest, and first vertebrae and associated ribs. Further, they lacked the ‘rocker bone’ typically found in males. Sounds from each morphotype were highly divergent. Males generally produced non harmonic, multiple-pulsed sounds that lasted for several seconds (3.5 ± 1.3 s) with a pulse period of ca. 100 ms. Juvenile and female sounds were recorded for the first time in ophidiids. Female sounds were harmonic, had shorter pulse period (±3.7 ms), and never exceeded a few dozen milliseconds (18 ± 11 ms). Moreover, unlike male sounds, female sounds did not have alternating long and short pulse periods. Juvenile sounds were weaker but appear to be similar to female sounds. Conclusions: Although it is not possible to distinguish externally male from female in O. rochei, they show a sonic apparatus and sounds that are dramatically different. This difference is likely due to their nocturnal habits that may have favored the evolution of internal secondary sexual characters that help to distinguish males from females and that could facilitate mate choice by females. Moreover, the comparison of different morphotypes in this study shows that these morphological differences result from a peramorphosis that takes place during the development of the gonads. PMID:23217241

  6. Effect of ultrasonic cavitation on measurement of sound pressure using hydrophone

    NASA Astrophysics Data System (ADS)

    Thanh Nguyen, Tam; Asakura, Yoshiyuki; Okada, Nagaya; Koda, Shinobu; Yasuda, Keiji

    2017-07-01

    Effect of ultrasonic cavitation on sound pressure at the fundamental, second harmonic, and first ultraharmonic frequencies was investigated from low to high ultrasonic intensities. The driving frequencies were 22, 304, and 488 kHz. Sound pressure was measured using a needle-type hydrophone and ultrasonic cavitation was estimated from the broadband integrated pressure (BIP). With increasing square root of electric power applied to a transducer, the sound pressure at the fundamental frequency linearly increased initially, dropped at approximately the electric power of cavitation inception, and afterward increased again. The sound pressure at the second harmonic frequency was detected just below the electric power of cavitation inception. The first ultraharmonic component appeared at around the electric power of cavitation inception at 304 and 488 kHz. However, at 22 kHz, the first ultraharmonic component appeared at a higher electric power than that of cavitation inception.
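The broadband integrated pressure (BIP) used above as a cavitation estimate can be sketched as an integration of the hydrophone spectrum that excludes narrow bands around the fundamental, its harmonics, and the ultraharmonics ((n + 1/2)·f0). This is a minimal Python sketch under assumed notch bandwidths; the study's exact integration limits and calibration are not reproduced here.

```python
import numpy as np

def broadband_integrated_pressure(signal, fs, f0, n_harmonics=5, notch_hw=2e3):
    """Integrate the hydrophone amplitude spectrum while excluding narrow
    bands (half-width notch_hw, an illustrative choice) around the
    fundamental f0, its harmonics k*f0, and ultraharmonics (k + 1/2)*f0."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = np.ones_like(freqs, dtype=bool)
    # step 0.5 covers both harmonics (integer k) and ultraharmonics (half-integer k)
    for k in np.arange(0.5, n_harmonics + 0.5, 0.5):
        keep &= np.abs(freqs - k * f0) > notch_hw
    return np.sqrt(np.sum(spectrum[keep] ** 2))
```

A pure driving tone then yields a small BIP, while cavitation-like broadband noise raises it, mirroring the paper's use of BIP as a cavitation-inception indicator.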

  7. Acoustic transducer in system for gas temperature measurement in gas turbine engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSilva, Upul P.; Claussen, Heiko

    An apparatus for controlling operation of a gas turbine engine including at least one acoustic transmitter/receiver device located on a flow path boundary structure. The acoustic transmitter/receiver device includes an elongated sound passage defined by a surface of revolution having opposing first and second ends and a central axis extending between the first and second ends, an acoustic sound source located at the first end, and an acoustic receiver located within the sound passage between the first and second ends. The boundary structure includes an opening extending from outside the boundary structure to the flow path, and the second end of the surface of revolution is affixed to the boundary structure at the opening for passage of acoustic signals between the sound passage and the flow path.

  8. A Lexical Analysis of Environmental Sound Categories

    ERIC Educational Resources Information Center

    Houix, Olivier; Lemaitre, Guillaume; Misdariis, Nicolas; Susini, Patrick; Urdapilleta, Isabel

    2012-01-01

    In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second…

  9. The Early Years: Becoming Attuned to Sound

    ERIC Educational Resources Information Center

    Ashbrook, Peggy

    2014-01-01

    Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…

  10. Maternal sounds elicit lower heart rate in preterm newborns in the first month of life

    PubMed Central

    Rand, Katherine; Lahav, Amir

    2015-01-01

    Background The preferential response to mother’s voice in the fetus and term newborn is well documented. However, the response of preterm neonates is not well understood and more difficult to interpret due to the intensive clinical care and range of medical complications. Aim This study examined the physiological response to maternal sounds and its sustainability in the first month of life in infants born very prematurely. Methods Heart rate changes were monitored in 20 hospitalized preterm infants born between 25 and 32 weeks of gestation during 30-minute exposure vs. non-exposure periods of recorded maternal sounds played inside the neonatal incubator. A total of 13,680 min of heart rate data was sampled throughout the first month of life during gavage feeds, with and without exposure to maternal sounds. Results During exposure periods, infants had significantly lower heart rate compared to matched periods of care without exposure on the same day (p < .0001). This effect was observed in all infants, across the first month of life, irrespective of day of life, gestational age at birth, birth weight, age at testing, Apgar score, caffeine therapy, and requirement for respiratory support. No adverse effects were observed. Conclusion Preterm newborns responded to maternal sounds with decreased heart rate throughout the first month of life. It is possible that maternal sounds improve autonomic stability and provide a more relaxing environment for this population of newborns. Further studies are needed to determine the therapeutic implications of maternal sound exposure for optimizing care practices and developmental outcomes. PMID:25194837

  11. Use of signal analysis of heart sounds and murmurs to assess severity of mitral valve regurgitation attributable to myxomatous mitral valve disease in dogs.

    PubMed

    Ljungvall, Ingrid; Ahlstrom, Christer; Höglund, Katja; Hult, Peter; Kvart, Clarence; Borgarelli, Michele; Ask, Per; Häggström, Jens

    2009-05-01

    To investigate use of signal analysis of heart sounds and murmurs in assessing severity of mitral valve regurgitation (mitral regurgitation [MR]) in dogs with myxomatous mitral valve disease (MMVD). 77 client-owned dogs. Cardiac sounds were recorded from dogs evaluated by use of auscultatory and echocardiographic classification systems. Signal analysis techniques were developed to extract 7 sound variables (first frequency peak, murmur energy ratio, murmur duration > 200 Hz, sample entropy and first minimum of the auto mutual information function of the murmurs, and energy ratios of the first heart sound [S1] and second heart sound [S2]). Significant associations were detected between severity of MR and all sound variables, except the energy ratio of S1. An increase in severity of MR resulted in greater contribution of higher frequencies, increased signal irregularity, and decreased energy ratio of S2. The optimal combination of variables for distinguishing dogs with high-intensity murmurs from other dogs was energy ratio of S2 and murmur duration > 200 Hz (sensitivity, 79%; specificity, 71%) by use of the auscultatory classification. By use of the echocardiographic classification, corresponding variables were auto mutual information, first frequency peak, and energy ratio of S2 (sensitivity, 88%; specificity, 82%). Most of the investigated sound variables were significantly associated with severity of MR, which indicated a powerful diagnostic potential for monitoring MMVD. Signal analysis techniques could be valuable for clinicians when performing risk assessment or determining whether special care and more extensive examinations are required.
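Several of the sound variables above are spectral energy measures. A minimal sketch of one such measure, the fraction of a heart-sound segment's energy above 200 Hz (the cutoff the study associates with higher-frequency murmur content), assuming a simple FFT-based definition rather than the authors' exact formulation:

```python
import numpy as np

def energy_ratio(segment, fs, cutoff=200.0):
    """Fraction of spectral energy above `cutoff` Hz in a heart-sound
    segment (e.g. S1, S2, or a murmur window).  Illustrative definition;
    the study's precise energy-ratio computation is not reproduced."""
    spec = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    return spec[freqs > cutoff].sum() / spec.sum()
```

Under this definition, increasing severity of regurgitation (greater high-frequency content, per the abstract) would push the ratio toward 1.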

  12. Presystolic tricuspid valve closure: an alternative mechanism of diastolic sound genesis.

    PubMed

    Lee, C H; Xiao, H B; Gibson, D G

    1990-01-01

    We describe a previously unrecognised cause of an added diastolic heart sound. The patient had first-degree heart block and diastolic tricuspid regurgitation, leading to presystolic closure of the tricuspid valve and the production of a loud diastolic sound. Unlike previously described mechanisms for diastolic sounds, this sound was generated by the sudden acceleration of retrograde AV flow in late diastole.

  13. The Advanced Technology Microwave Sounder (ATMS): The First 10 Months On-Orbit

    NASA Technical Reports Server (NTRS)

    Kim, Edward; Lyu, C-H Joseph; Blackwell, William; Leslie, R. Vince; Baker, Neal; Mo, Tsan; Sun, Ninghai; Bi, Li; Anderson, Kent; Landrum, Mike

    2012-01-01

    The Advanced Technology Microwave Sounder (ATMS) is a new satellite microwave sounding sensor designed to provide operational weather agencies with atmospheric temperature and moisture profile information for global weather forecasting and climate applications. ATMS will continue the microwave sounding capabilities first provided by its predecessors, the Microwave Sounding Unit (MSU) and Advanced Microwave Sounding Unit (AMSU). The first ATMS was launched October 28, 2011 on board the NPOESS Preparatory Project (NPP) satellite. Microwave soundings by themselves are the highest-impact input data used by Numerical Weather Prediction (NWP) models, especially under cloudy sky conditions. ATMS has 22 channels spanning 23-183 GHz, closely following the channel set of the MSU, AMSU-A1/2, AMSU-B, Microwave Humidity Sounder (MHS), and Humidity Sounder for Brazil (HSB). All this is accomplished with approximately 1/4 the volume, 1/2 the mass, and 1/2 the power of the three AMSUs. A description of ATMS cal/val activities will be presented, followed by examples of its performance after its first 10 months on orbit.

  14. On Sound Reflection in Superfluid

    NASA Astrophysics Data System (ADS)

    Melnikovsky, L. A.

    2008-02-01

    We consider reflection of first and second sound waves by a rigid flat wall in superfluid. A nontrivial dependence of the reflection coefficients on the angle of incidence is obtained. Sound conversion is predicted at slanted incidence.

  15. Airborne sound transmission loss characteristics of woodframe construction

    Treesearch

    Fred F. Rudder

    1985-01-01

    This report summarizes the available data on the airborne sound transmission loss properties of wood-frame construction and evaluates the methods for predicting the airborne sound transmission loss. The first part of the report comprises a summary of sound transmission loss data for wood-frame interior walls and floor-ceiling construction. Data bases describing the...

  16. Just How Does Sound Wave?

    ERIC Educational Resources Information Center

    Shipman, Bob

    2006-01-01

    When children first hear the term "sound wave," they might associate it with the way a hand waves, or with the squiggly line image on a television monitor when sound recordings are being made. Research suggests that children tend to think sound somehow travels as a discrete package, a fast-moving invisible thing, and not something that…

  17. Sound Explorations from the Ages of 10 to 37 Months: The Ontogenesis of Musical Conducts

    ERIC Educational Resources Information Center

    Delalande, Francois; Cornara, Silvia

    2010-01-01

    One of the forms of first musical conduct is the exploration of sound sources. When young children produce sounds with any object, these sounds may surprise them and so they make the sounds again--not exactly the same, but introducing some variation. A process of repetition with slight changes is set in motion which can be analysed, as did Piaget,…

  18. Apparatus and method for processing Korotkov sounds. [for blood pressure measurement

    NASA Technical Reports Server (NTRS)

    Golden, D. P., Jr.; Hoffler, G. W.; Wolthuis, R. A. (Inventor)

    1974-01-01

    A Korotkov sound processor, used in a noninvasive automatic blood pressure measuring system where the brachial artery is occluded by an inflatable cuff, is disclosed. The Korotkov sound associated with the systolic event is determined when the ratio of the absolute value of a voltage signal, representing Korotkov sounds in the range of 18 to 26 Hz, to the maximum absolute peak value of the unfiltered signal first equals or exceeds a value of 0.45. The Korotkov sound associated with the diastolic event is determined when the ratio of the voltage signal of the Korotkov sounds in the range of 40 to 60 Hz to the absolute peak value of such signals within a single measurement cycle first falls below a value of 0.17. The processor signals the occurrence of the systolic and diastolic events, and these signals can be used to control a recorder to record pressure values for these events.
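The two threshold rules above (systolic when the 18-26 Hz ratio first reaches 0.45, diastolic when the 40-60 Hz ratio first falls below 0.17) can be sketched directly. The per-beat peak lists and function shape below are illustrative assumptions, not the patent's circuit:

```python
def detect_events(low_band_peaks, high_band_peaks, unfiltered_peak):
    """Sketch of the Korotkov-sound threshold logic described above.
    low_band_peaks:  per-beat peak amplitudes of the 18-26 Hz filtered signal
    high_band_peaks: per-beat peak amplitudes of the 40-60 Hz filtered signal
    unfiltered_peak: maximum absolute peak of the unfiltered signal
    Returns (systolic_beat_index, diastolic_beat_index)."""
    # Systolic event: first beat whose low-band ratio reaches 0.45.
    systolic = next(i for i, p in enumerate(low_band_peaks)
                    if p / unfiltered_peak >= 0.45)
    # Diastolic event: first later beat whose high-band ratio falls below 0.17,
    # referenced to the peak high-band value within the measurement cycle.
    high_max = max(high_band_peaks)
    diastolic = next(i for i, p in enumerate(high_band_peaks)
                     if i > systolic and p / high_max < 0.17)
    return systolic, diastolic
```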

  19. First AFSWC Javelin Sounding Rocket On Launcher at Wallops Island.

    NASA Image and Video Library

    1959-07-07

    Air Force Javelin Rocket on Launcher (USAF JV-1) Wallops Model D4-78 L59-5144 First AFSWC Javelin sounding rocket ready for flight test, July 7, 1959. Photograph published in A New Dimension Wallops Island Flight Test Range: The First Fifteen Years by Joseph Shortal. A NASA publication. Page 704.

  20. The Advanced Technology Microwave Sounder (ATMS): First Year On-Orbit

    NASA Technical Reports Server (NTRS)

    Kim, Edward J.

    2012-01-01

    The Advanced Technology Microwave Sounder (ATMS) is a new satellite microwave sounding sensor designed to provide operational weather agencies with atmospheric temperature and moisture profile information for global weather forecasting and climate applications. ATMS will continue the microwave sounding capabilities first provided by its predecessors, the Microwave Sounding Unit (MSU) and Advanced Microwave Sounding Unit (AMSU). The first flight unit was launched a year ago in October 2011 aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite, part of the new Joint Polar Satellite System (JPSS). Microwave soundings by themselves are the highest-impact input data used by Numerical Weather Prediction models; and ATMS, when combined with the Cross-track Infrared Sounder (CrIS), forms the Cross-track Infrared and Microwave Sounding Suite (CrIMSS). The microwave soundings help meet sounding requirements under cloudy sky conditions and provide key profile information near the surface. ATMS was designed and built by Aerojet Corporation in Azusa, California (now Northrop Grumman Electronic Systems). It has 22 channels spanning 23-183 GHz, closely following the channel set of the MSU, AMSU-A1/2, AMSU-B, Microwave Humidity Sounder (MHS), and Humidity Sounder for Brazil (HSB). It continues their cross-track scanning geometry, but for the first time provides Nyquist sample spacing. All this is accomplished with approximately 1/4 the volume, 1/2 the mass, and 1/2 the power of the three AMSUs. A description will be given of its performance from its first year of operation as determined by post-launch calibration activities. These activities include radiometric calibration using the on-board warm targets and cold space views, and geolocation determination. Example imagery and zooms of specific weather events will be shown.
The second ATMS flight model is currently under construction and planned for launch on the "J1" satellite of the JPSS program in approximately 2016. Additional units are expected on the J2 and J3 satellites, as well as potentially on future European METOP satellites.

  1. a New Approach to Physiologic Triggering in Medical Imaging Using Multiple Heart Sounds Alone.

    NASA Astrophysics Data System (ADS)

    Groch, Mark Walter

    A new method for physiological synchronization of medical image acquisition using both the first and second heart sound has been developed. Heart sounds gating (HSG) circuitry has been developed which identifies, individually, both the first (S1) and second (S2) heart sounds from their timing relationship alone, and provides two synchronization points during the cardiac cycle. Identification of first and second heart sounds from their timing relationship alone and application to medical imaging has, heretofore, not been performed in radiology or nuclear medicine. The heart sounds are obtained as conditioned analog signals from a piezoelectric transducer microphone placed on the patient's chest. The timing relationships between the S1 to S2 pulses and the S2 to S1 pulses are determined using a logic scheme capable of distinguishing the S1 and S2 pulses from the heart sounds themselves, using their timing relationships, and the assumption that initially the S1-S2 interval will be shorter than the S2-S1 interval. Digital logic circuitry is utilized to continually track the timing intervals and extend the S1/S2 identification to heart rates up to 200 beats per minute (where the S1-S2 interval is not shorter than the S2-S1 interval). Clinically, first heart sound gating may be performed to assess the systolic ejection portion of the cardiac cycle, with S2 gating utilized for reproduction of the diastolic filling portion of the cycle. One application of HSG used for physiologic synchronization is in multigated blood pool (MGBP) imaging in nuclear medicine. Heart sounds gating has been applied to twenty patients who underwent analysis of ventricular function in Nuclear Medicine, and compared to conventional ECG gated MGBP. Left ventricular ejection fractions calculated from MGBP studies using a S1 and a S2 heart sound trigger correlated well with conventional ECG gated acquisitions in patients adequately gated by HSG and ECG. 
Heart sounds gating provided superior definition of the diastolic filling phase of the cardiac cycle by qualitative assessment of the left ventricular volume time-activity curves. Heart sounds physiological synchronization has potential to be used in other imaging modalities, such as magnetic resonance imaging, where the ECG is distorted due to the electromagnetic environment within the imager.
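The timing rule above — at normal heart rates the systolic S1-S2 interval is shorter than the diastolic S2-S1 interval — is enough to label a train of heart-sound pulses. A minimal Python sketch of that logic (the dissertation's rate-tracking extension for rates up to 200 bpm is omitted):

```python
def label_heart_sounds(pulse_times):
    """Label heart-sound pulse times as S1 or S2 using only their timing,
    under the initial assumption that the S1-S2 (systolic) interval is
    shorter than the S2-S1 (diastolic) interval."""
    intervals = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
    # If the first interval is the shorter of the first two, the train starts on S1.
    first_is_s1 = intervals[0] < intervals[1]
    # Sounds then alternate strictly: S1, S2, S1, S2, ...
    return ['S1' if (i % 2 == 0) == first_is_s1 else 'S2'
            for i in range(len(pulse_times))]
```

With two synchronization points per cycle, S1 labels can trigger systolic-phase acquisition and S2 labels diastolic-phase acquisition, as in the record above.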

  2. Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.

    PubMed

    Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J

    2016-03-01

    Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. 
The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  4. 33 CFR 165.154 - Safety and Security Zones: Long Island Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Island Sound Marine Inspection Zone and Captain of the Port Zone. 165.154 Section 165.154 Navigation and... Areas First Coast Guard District § 165.154 Safety and Security Zones: Long Island Sound Marine... this zone is prohibited unless authorized by the Captain of the Port Long Island Sound. (3) All...

  5. Linking the shapes of alphabet letters to their sounds: the case of Hebrew

    PubMed Central

    Levin, Iris; Kessler, Brett

    2011-01-01

    Learning the sounds of letters is an important part of learning a writing system. Most previous studies of this process have examined English, focusing on variations in the phonetic iconicity of letter names as a reason why some letter sounds (such as that of b, where the sound is at the beginning of the letter’s name) are easier to learn than others (such as that of w, where the sound is not in the name). The present study examined Hebrew, where variations in the phonetic iconicity of letter names are minimal. In a study of 391 Israeli children with a mean age of 5 years, 10 months, we used multilevel models to examine the factors that are associated with knowledge of letter sounds. One set of factors involved letter names: Children sometimes attributed to a letter a consonant–vowel sound consisting of the first phonemes of the letter’s name. A second set of factors involved contrast: Children had difficulty when there was relatively little contrast in shape between one letter and others. Frequency was also important, encompassing both child-specific effects, such as a benefit for the first letter of a child’s forename, and effects that held true across children, such as a benefit for the first letters of the alphabet. These factors reflect general properties of human learning. PMID:22345901

  6. Cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events: an event-related potential study.

    PubMed

    Liu, B; Wang, Z; Wu, G; Meng, X

    2011-04-28

    In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous semantically consistent or inconsistent natural sound or speech were used as stimuli in order to compare the difference and similarity between multisensory integrations of videos with asynchronous natural sound and speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in videos. Videos with inconsistent natural sound could elicit N400-P600 effects compared to videos with consistent natural sound, which was similar to the results from unisensory visual studies. Videos with semantically consistent or inconsistent speech could both elicit N1 components. Meanwhile, videos with inconsistent speech would elicit N400-LPN effects in comparison with videos with consistent speech, which showed that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech could be roughly divided into two stages. For the videos with natural sound, the first stage might reflect the connection between the received information and the stored information in memory; and the second one might stand for the evaluation process of inconsistent semantic information. For the videos with speech, the first stage was similar to the first stage of videos with natural sound; while the second one might be related to recognition memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  7. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    PubMed

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  8. Brain responses to sound intensity changes dissociate depressed participants and healthy controls.

    PubMed

    Ruohonen, Elisa M; Astikainen, Piia

    2017-07-01

    Depression is associated with bias in emotional information processing, but less is known about the processing of neutral sensory stimuli. Of particular interest is the processing of sound intensity, which is suggested to indicate central serotonergic function. We tested whether event-related brain potentials (ERPs) to occasional changes in sound intensity can dissociate first-episode depressed, recurrent depressed, and healthy control participants. The first-episode depressed showed larger N1 amplitude to deviant sounds compared to the recurrent depression group and control participants. In addition, both depression groups, but not the control group, showed larger N1 amplitude to deviant than to standard sounds. Whether these manifestations of sensory over-excitability in depression are directly related to serotonergic neurotransmission requires further research. The method, based on ERPs to sound intensity change, is a fast and low-cost way to objectively measure brain activation and holds promise as a future diagnostic tool. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Interior and exterior sound field control using general two-dimensional first-order sources.

    PubMed

    Poletti, M A; Abhayapala, T D

    2011-01-01

    Reproduction of a given sound field interior to a circular loudspeaker array without producing an undesirable exterior sound field is an unsolved problem over a broadband of frequencies. At low frequencies, by implementing the Kirchhoff-Helmholtz integral using a circular discrete array of line-source loudspeakers, a sound field can be recreated within the array and produce no exterior sound field, provided that the loudspeakers have azimuthal polar responses with variable first-order responses which are a combination of a two-dimensional (2D) monopole and a radially oriented 2D dipole. This paper examines the performance of circular discrete arrays of line-source loudspeakers which also include a tangential dipole, providing general variable-directivity responses in azimuth. It is shown that at low frequencies, the tangential dipoles are not required, but that near and above the Nyquist frequency, the tangential dipoles can both improve the interior accuracy and reduce the exterior sound field. The additional dipoles extend the useful range of the array by around an octave.
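The general variable-directivity response described above is a weighted sum of a 2D monopole, a radially oriented dipole (cosine) term, and a tangentially oriented dipole (sine) term. A sketch with illustrative weights; the paper's actual driving functions, derived from the Kirchhoff-Helmholtz integral, are not reproduced here:

```python
import numpy as np

def first_order_response(phi, monopole, radial_dipole, tangential_dipole=0.0):
    """Azimuthal polar response of a general 2D first-order source at
    angle phi (radians): monopole + cos-dipole + sin-dipole terms.
    With tangential_dipole=0 this reduces to the low-frequency case
    described in the abstract."""
    return (monopole
            + radial_dipole * np.cos(phi)
            + tangential_dipole * np.sin(phi))
```

For example, equal monopole and radial-dipole weights give a cardioid-like pattern with a null at phi = pi, i.e. a loudspeaker radiating into the array's interior but not outward.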

  10. Principal cells of the brainstem's interaural sound level detector are temporal differentiators rather than integrators.

    PubMed

    Franken, Tom P; Joris, Philip X; Smith, Philip H

    2018-06-14

    The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.

  11. Hurricane Gustav (2008) Waves and Storm Surge: Hindcast, Synoptic Analysis, and Validation in Southern Louisiana

    DTIC Science & Technology

    2011-08-01

    weakening to category 3 prior to its first landfall, maintained its intensity through the Breton and Chandeleur Sounds, and tracked near metropolitan New... [remainder of this record is residue from a table of numbered geographic features, including the Mississippi River Gulf Outlet (MRGO), Inner Harbor Navigational Canal (IHNC), the Mississippi River, Chandeleur Sound, Breton Sound, Lake Borgne, the Chandeleur Islands, Grand Isle, the Louisiana–Mississippi Shelf, Biloxi marsh, Caernarvon marsh, and the ‘‘Bird's foot’’ of the...]

  12. Coronal sounding with Ulysses - Preliminary results from the first solar conjunction

    NASA Technical Reports Server (NTRS)

    Paetzold, M.; Bird, M. K.; Volland, H.; Edenhofer, P.; Asmar, S. W.; Brenkle, J. P.

    1992-01-01

    Radio-sounding observations of the solar corona between 4 and 115 solar radii were performed during the first superior solar conjunction phase of the Ulysses spacecraft in August/September 1991. As a first result of this Solar Corona Experiment, the total electron content inferred from dual-frequency ranging observations is presented here as a function of solar distance.
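Dual-frequency ranging yields the total electron content (TEC) because plasma group delay scales as 1/f²: to first order, the differential range between two downlinks is Δρ = 40.3 · TEC · (1/f2² − 1/f1²), with Δρ in meters, frequencies in Hz, and TEC in electrons/m². A sketch of inverting that standard relation (the frequencies in the test are placeholders, not Ulysses' exact downlink pair):

```python
def tec_from_dual_frequency(delta_range_m, f1_hz, f2_hz):
    """Total electron content (electrons/m^2) along the ray path from the
    differential range delta_range_m measured at two frequencies f1 > f2,
    using the standard first-order plasma group-delay relation
    delta_rho = 40.3 * TEC * (1/f2^2 - 1/f1^2)."""
    return delta_range_m / (40.3 * (1.0 / f2_hz**2 - 1.0 / f1_hz**2))
```

The lower frequency is delayed more, which is why a coherent dual-frequency downlink isolates the columnar electron content from the geometric range.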

  14. What's in a Name? Sound Symbolism and Gender in First Names.

    PubMed

    Sidhu, David M; Pexman, Penny M

    2015-01-01

    Although the arbitrariness of language has been considered one of its defining features, studies have demonstrated that certain phonemes tend to be associated with certain kinds of meaning. A well-known example is the Bouba/Kiki effect, in which nonwords like bouba are associated with round shapes while nonwords like kiki are associated with sharp shapes. These sound symbolic associations have thus far been limited to nonwords. Here we tested whether or not the Bouba/Kiki effect extends to existing lexical stimuli; in particular, real first names. We found that the roundness/sharpness of the phonemes in first names impacted whether the names were associated with round or sharp shapes in the form of character silhouettes (Experiments 1a and 1b). We also observed an association between femaleness and round shapes, and maleness and sharp shapes. We next investigated whether this association would extend to the features of language and found the proportion of round-sounding phonemes was related to name gender (Analysis of Category Norms). Finally, we investigated whether sound symbolic associations for first names would be observed for other abstract properties; in particular, personality traits (Experiment 2). We found that adjectives previously judged to be either descriptive of a figuratively 'round' or a 'sharp' personality were associated with names containing either round- or sharp-sounding phonemes, respectively. These results demonstrate that sound symbolic associations extend to existing lexical stimuli, providing a new example of non-arbitrary mappings between form and meaning.

  15. Electronic filters, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor)

    1995-01-01

    An electronic filter for an electroacoustic system. The system has a microphone for generating an electrical output from external sounds and an electrically driven transducer for emitting sound. Some of the sound emitted by the transducer returns to the microphone means to add a feedback contribution to its electrical output. The electronic filter includes a first circuit for electronic processing of the electrical output of the microphone to produce a first signal. An adaptive filter, interconnected with the first circuit, performs electronic processing of the first signal to produce an adaptive output to the first circuit to substantially offset the feedback contribution in the electrical output of the microphone, and the adaptive filter includes means for adapting only in response to polarities of signals supplied to and from the first circuit. Other electronic filters for hearing aids, public address systems and other electroacoustic systems, as well as such systems and methods of operating them are also disclosed.
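    The clause "adapting only in response to polarities of signals" describes a sign-sign LMS update, in which only the signs of the error and the reference signal drive adaptation. A minimal sketch follows; the function names, tap count, and step size are illustrative assumptions, not the patented circuit:

```python
# Sign-sign LMS feedback canceller sketch. The adaptive taps w estimate the
# speaker-to-microphone feedback path; the update uses only the polarities
# (signs) of the error and the reference, as in the patent's description.

def sign(x):
    return (x > 0) - (x < 0)

def sign_sign_lms(mic, speaker, n_taps=4, mu=0.01):
    """Estimate the speaker->mic feedback and subtract it from the mic signal."""
    w = [0.0] * n_taps              # adaptive filter taps
    cleaned = []
    for n in range(len(mic)):
        # recent speaker samples seen by each tap (zero before time 0)
        x = [speaker[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        feedback_est = sum(wk * xk for wk, xk in zip(w, x))
        e = mic[n] - feedback_est   # error = mic minus estimated feedback
        cleaned.append(e)
        # sign-sign update: only polarities drive adaptation
        w = [wk + mu * sign(e) * sign(xk) for wk, xk in zip(w, x)]
    return cleaned, w
```

    Fed a microphone signal that is a scaled, delayed copy of the speaker output, the matching tap converges toward the feedback gain and the residual shrinks.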

  16. Exploratory investigation of sound pressure level in the wake of an oscillating airfoil in the vicinity of stall

    NASA Technical Reports Server (NTRS)

    Gray, R. B.; Pierce, G. A.

    1972-01-01

    Wind tunnel tests were performed on two oscillating two-dimensional lifting surfaces. The first of these models had an NACA 0012 airfoil section, while the second simulated the classical flat plate. Both models had a mean angle of attack of 12 degrees while being oscillated in pitch about their midchord with a double amplitude of 6 degrees. Wake surveys of sound pressure level were made over a frequency range from 16 to 32 Hz and at various free stream velocities up to 100 ft/sec. The sound pressure level spectrum indicated significant peaks in sound intensity at the oscillation frequency and its first harmonic near the wake of both models. From a comparison of these data with those from a sound level meter, it is concluded that most of the sound intensity is contained within these peaks and that no appreciable peaks occur at higher harmonics. Within the wake the sound intensity is largely pseudosound, while at one chord length outside the wake it is largely true vortex sound. For both the airfoil and the flat plate, the peaks appear to depend more strongly on airspeed than on oscillation frequency; reduced frequency therefore does not appear to be a significant parameter in the generation of wake sound intensity.

  17. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    PubMed

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e., Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task-irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1 and the digit preceding the sound in Experiment 2. First, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Second, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were again separated, but participants held the number in short-term memory; in this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Development of Virtual Auditory Interfaces

    DTIC Science & Technology

    2001-03-01

    …reference to compare the sound in the VE with the real-world experience. 4. Lessons from the Entertainment Industry: the entertainment industry has…created a system called "Fantasound," which wrapped the musical compositions and sound…even though we have the technology to create astounding…systems are currently being evaluated. The first system uses a portable Sony TCD-D8 DAT audio…data set…including sound recordings and sound measurements…

  19. First University of Michigan Strongarm sounding rocket on launcher at Wallops for test, November 10, 1959. E5-188 Shop and Launcher Pictures

    NASA Image and Video Library

    1959-11-10

    L59-7932 First University of Michigan Strongarm sounding rocket on launcher at Wallops for test, November 10, 1959. Photograph published in A New Dimension: Wallops Island Flight Test Range, the First Fifteen Years by Joseph Shortal, a NASA publication, page 701. E5-188 Shop and Launcher Pictures

  20. 14 CFR 36.6 - Incorporation by reference.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... No. 179, entitled “Precision Sound Level Meters,” dated 1973. (ii) IEC Publication No. 225, entitled... 1966. (iii) IEC Publication No. 651, entitled “Sound Level Meters,” first edition, dated 1979. (iv) IEC... edition, dated 1976. (v) IEC Publication No. 804, entitled “Integrating-averaging Sound Level Meters...

  1. The Riggs Institute: What We Teach.

    ERIC Educational Resources Information Center

    McCulloch, Myrna

    Phonetic content/handwriting instruction begins by teaching the sounds of, and letter formation for the 70 "Orton" phonograms which are the commonly-used correct spelling patterns for the 45 sounds of English speech. The purpose for teaching the sound/symbol relationship first in isolation, without key words or pictures (explicitly), is to give…

  2. Snoring intensity after a first session of soft palate radiofrequency: predictive value of the final result.

    PubMed

    Blumen, Marc Bernard; Vezina, Jean Philippe; Bequignon, Emilie; Chabolle, Frederic

    2013-06-01

    To determine whether snoring sound intensity measured after a first soft palate radiofrequency (RF) session for simple snoring helps predict the final result of the treatment. Observational retrospective study. We conducted a retrospective review of 105 subjects presenting with simple snoring or mild sleep apnea. All patients underwent two to three sessions of RF-assisted stiffening of the soft palate. In addition, uvulectomy was performed in case of a long uvula, and two paramedian trenches were created in the presence of palatal webbing. Snoring sound intensity was evaluated by the bed partner after each session. Eighty-six men and 19 women were included in the study. Mean age was 51.7 ± 9.8 years, and mean body mass index was 24.7 ± 4.4 kg/m2. The mean apnea/hypopnea index was 6.6 ± 4.2/h. The mean snoring sound intensity, as evaluated on a 10-cm visual analog scale (VAS), decreased from 8.2 ± 1.5 to 3.5 ± 2.2 after all sessions (P < .0001). A score of 3 was identified as the level that satisfied the bed partner. Two groups were formed according to the final snoring sound intensity, using 3 as the threshold. Both groups had similar preoperative characteristics, but snoring sound intensity was significantly lower after the first session in the group with a final score <3 (P = .01). Conversely, a VAS score >7 after the first session was associated with a final score <3 in only 30% of cases. Snoring sound intensity after the first RF session helps predict the final outcome of RF-assisted stiffening of the soft palate for simple snoring. Copyright © 2012 The American Laryngological, Rhinological and Otological Society, Inc.

  3. Sound Shell Model for Acoustic Gravitational Wave Production at a First-Order Phase Transition in the Early Universe.

    PubMed

    Hindmarsh, Mark

    2018-02-16

    A model for the acoustic production of gravitational waves at a first-order phase transition is presented. The source of gravitational radiation is the sound waves generated by the explosive growth of bubbles of the stable phase. The model assumes that the sound waves are linear and that their power spectrum is determined by the characteristic form of the sound shell around the expanding bubble. The predicted power spectrum has two length scales: the average bubble separation and the sound shell width when the bubbles collide. The peak of the power spectrum is at wave numbers set by the sound shell width. At higher wave numbers k, the power spectrum decreases as k^{-3}. At wave numbers below the inverse bubble separation, the power spectrum goes as k^{5}. For bubble wall speeds near the speed of sound, where these two length scales are well separated, there is an intermediate k^{1} power law. The detailed dependence of the power spectrum on the wall speed and the other parameters of the phase transition raises the possibility of their constraint or measurement at a future space-based gravitational wave observatory such as LISA.
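    The three scaling regimes described in the abstract can be summarized in one piecewise law (notation assumed here: R_* for the average bubble separation and ΔR for the sound-shell width at collision):

```latex
\mathcal{P}_{\mathrm{gw}}(k) \;\propto\;
\begin{cases}
k^{5}, & k \lesssim R_*^{-1} \quad \text{(below the inverse bubble separation)} \\
k^{1}, & R_*^{-1} \lesssim k \lesssim \Delta R^{-1} \quad \text{(wall speeds near the sound speed)} \\
k^{-3}, & k \gtrsim \Delta R^{-1} \quad \text{(above the inverse shell width)}
\end{cases}
```

    with the spectral peak set by the sound-shell width, k_peak ~ ΔR^{-1}.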

  5. Heart sounds: are you listening? Part 2.

    PubMed

    Reimer-Kent, Jocelyn

    2013-01-01

    The first of this two-part article on heart sounds was in the Spring 2013 issue of the Canadian Journal of Cardiovascular Nursing (Reimer-Kent, 2013). Part 1 emphasized the importance of all nurses having an understanding of heart sounds and being proficient in cardiac auscultation. The article also focused on an overview of the fundamentals of cardiac auscultation and basic heart sounds. This article provides an overview of the anatomy and pathophysiology related to valvular heart disease and describes the array of heart sounds associated with stenotic or regurgitant aortic and mitral valve conditions.

  6. Leak locating microphone, method and system for locating fluid leaks in pipes

    DOEpatents

    Kupperman, David S.; Spevak, Lev

    1994-01-01

    A leak detecting microphone inserted directly into fluid within a pipe includes a housing having a first end being inserted within the pipe and a second opposed end extending outside the pipe. A diaphragm is mounted within the first housing end and an acoustic transducer is coupled to the diaphragm for converting acoustical signals to electrical signals. A plurality of apertures are provided in the housing first end, the apertures located both above and below the diaphragm, whereby to equalize fluid pressure on either side of the diaphragm. A leak locating system and method are provided for locating fluid leaks within a pipe. A first microphone is installed within fluid in the pipe at a first selected location and sound is detected at the first location. A second microphone is installed within fluid in the pipe at a second selected location and sound is detected at the second location. A cross-correlation is identified between the detected sound at the first and second locations for identifying a leak location.
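    The cross-correlation step in the claimed method can be sketched as follows. This is a toy, stdlib-only illustration; the function names are hypothetical, and the sound speed must be that of the fluid in the pipe:

```python
# Toy sketch of leak location by cross-correlating two in-pipe microphones.
# Leak noise reaches each sensor after a travel time proportional to its
# distance; the lag at the correlation peak gives the arrival-time delay.

def delay_samples(x, y):
    """Lag (in samples) by which signal x trails signal y, via brute-force
    cross-correlation over all lags."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        val = sum(x[i] * y[i - lag]
                  for i in range(max(0, lag), min(n, n + lag)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

def leak_distance(delay_s, spacing_m, sound_speed_mps):
    """Distance of the leak from sensor 1, assuming it lies between the sensors.

    Arrival times are t1 = d/c and t2 = (spacing - d)/c, so the delay of
    sensor 1 relative to sensor 2 is tau = (2*d - spacing)/c.
    """
    return (spacing_m + sound_speed_mps * delay_s) / 2.0
```

    With sensors 100 m apart and an assumed in-fluid sound speed of 1000 m/s, a measured delay of -0.04 s places the leak 30 m from the first sensor.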

  7. 10 Hz Amplitude Modulated Sounds Induce Short-Term Tinnitus Suppression

    PubMed Central

    Neff, Patrick; Michels, Jakob; Meyer, Martin; Schecklmann, Martin; Langguth, Berthold; Schlee, Winfried

    2017-01-01

    Objectives: Acoustic stimulation or sound therapy is proposed as a main treatment option for chronic subjective tinnitus. To further probe the field of acoustic stimulations for tinnitus therapy, this exploratory study compared 10 Hz amplitude modulated (AM) sounds (two pure tones, noise, music, and frequency modulated (FM) sounds) and unmodulated sounds (pure tone, noise) regarding their temporary suppression of tinnitus loudness. First, it was hypothesized that modulated sounds elicit larger temporary loudness suppression (residual inhibition) than unmodulated sounds. Second, with manipulation of stimulus loudness and duration of the modulated sounds weaker or stronger effects of loudness suppression were expected, respectively. Methods: We recruited 29 participants with chronic tonal tinnitus from the multidisciplinary Tinnitus Clinic of the University of Regensburg. Participants underwent audiometric, psychometric and tinnitus pitch matching assessments followed by an acoustic stimulation experiment with a tinnitus loudness growth paradigm. In a first block participants were stimulated with all of the sounds for 3 min each and rated their subjective tinnitus loudness relative to the pre-stimulus loudness every 30 s after stimulus offset. The same procedure was deployed in the second block with the pure tone AM stimuli matched to the tinnitus frequency, manipulated in length (6 min), and loudness (reduced by 30 dB and linear fade out). Repeated measures mixed model analyses of variance (ANOVA) were calculated to assess differences in loudness growth between the stimuli for each block separately. Results: First, we found that all sounds elicit a short-term suppression of tinnitus loudness (seconds to minutes) with strongest suppression right after stimulus offset [F(6, 1331) = 3.74, p < 0.01]. Second, similar to previous findings we found that AM sounds near the tinnitus frequency produce significantly stronger tinnitus loudness suppression than noise [vs. pink noise: t(27) = −4.22, p < 0.0001]. Finally, variants of the AM sound matched to the tinnitus frequency reduced in sound level resulted in less suppression, while there was no significant difference observed for a longer stimulation duration. Moreover, feasibility of the overall procedure could be confirmed, as scores of both tinnitus loudness and questionnaires were lower after the experiment [tinnitus loudness: t(27) = 2.77, p < 0.01; Tinnitus Questionnaire: t(27) = 2.06, p < 0.05; Tinnitus Handicap Inventory: t(27) = 1.92, p = 0.065]. Conclusion: Taken together, these results imply that AM sounds, especially in or around the tinnitus frequency, may induce larger suppression than unmodulated sounds. Future studies should thus evaluate this approach in longitudinal studies and real life settings. Furthermore, the putative neural relation of these sound stimuli with a modulation rate in the EEG α band to the observed tinnitus suppression should be probed with respective neurophysiological methods. PMID:28579955
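    A 10 Hz amplitude-modulated tone of the kind described above is straightforward to synthesize. A hedged sketch follows; the sample rate, modulation depth, and function name are assumptions, not the study's exact stimulus-generation procedure:

```python
import math

def am_tone(carrier_hz, seconds, rate=44100, mod_hz=10.0, depth=1.0):
    """Pure tone at carrier_hz whose envelope is modulated at mod_hz (10 Hz here).

    The envelope (1 + depth*sin)/2 stays within [0, 1] for depth <= 1, so the
    samples are bounded in [-1, 1].
    """
    n = int(seconds * rate)
    return [((1.0 + depth * math.sin(2 * math.pi * mod_hz * t / rate)) / 2.0)
            * math.sin(2 * math.pi * carrier_hz * t / rate)
            for t in range(n)]
```

    In practice the carrier would be set to the participant's matched tinnitus frequency and the level calibrated against the audiogram.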

  8. Standing Sound Waves in Air with DataStudio

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2010-01-01

    Two experiments related to standing sound waves in air are adapted for using the ScienceWorkshop data-acquisition system with the DataStudio software from PASCO scientific. First, the standing waves are created by reflection from a plane reflector. The distribution of the sound pressure along the standing wave is measured. Second, the resonance…

  9. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  10. Early lexical and phonological acquisition and its relationships.

    PubMed

    Wiethan, Fernanda Marafiga; Nóro, Letícia Arruda; Mota, Helena Bolli

    2014-01-01

    This study aimed to verify possible relationships between the lexical and phonological development of children aged between 1 year and 1 year, 11 months and 29 days, enrolled in public kindergarten schools of Santa Maria (RS), Brazil. The sample consisted of 18 children of both genders with typical language development, separated into three age subgroups. Video recordings of each child's spontaneous speech were collected, and a lexical analysis of the spoken lexical items and a phonological assessment were then performed. Acquired and partially acquired sounds were counted together, considering the 19 sounds and two allophones of Brazilian Portuguese. For the statistical analysis, the Kruskal-Wallis and Wilcoxon tests were used, with a significance level of p<0.05. Comparing the mean percentages of acquired sounds, and of acquired plus partially acquired sounds, differences were found between the first and second age subgroups and between the first and third subgroups. Comparing the mean numbers of spoken lexical items among the age subgroups, differences again appeared between the first and second subgroups and between the first and third subgroups. Comparing spoken lexical items with acquired and partially acquired sounds within each age subgroup, a difference appeared only in the subgroup aged 1 year and 8 months to 1 year, 11 months and 29 days, in which the sounds stood out. The phonological and lexical domains develop as a growing process and influence each other, with phonology holding a slight advantage.

  11. Discrimination of brief speech sounds is impaired in rats with auditory cortex lesions

    PubMed Central

    Porter, Benjamin A.; Rosenthal, Tara R.; Ranasinghe, Kamalini G.; Kilgard, Michael P.

    2011-01-01

    Auditory cortex (AC) lesions impair complex sound discrimination. However, a recent study demonstrated spared performance on an acoustic startle response test of speech discrimination following AC lesions (Floody et al., 2010). The current study reports the effects of AC lesions on two operant speech discrimination tasks. AC lesions caused a modest and quickly recovered impairment in the ability of rats to discriminate consonant-vowel-consonant speech sounds. This result seems to suggest that AC does not play a role in speech discrimination. However, the speech sounds used in both studies differed in many acoustic dimensions and an adaptive change in discrimination strategy could allow the rats to use an acoustic difference that does not require an intact AC to discriminate. Based on our earlier observation that the first 40 ms of the spatiotemporal activity patterns elicited by speech sounds best correlate with behavioral discriminations of these sounds (Engineer et al., 2008), we predicted that eliminating additional cues by truncating speech sounds to the first 40 ms would render the stimuli indistinguishable to a rat with AC lesions. Although the initial discrimination of truncated sounds took longer to learn, the final performance paralleled rats using full-length consonant-vowel-consonant sounds. After 20 days of testing, half of the rats using speech onsets received bilateral AC lesions. Lesions severely impaired speech onset discrimination for at least one-month post lesion. These results support the hypothesis that auditory cortex is required to accurately discriminate the subtle differences between similar consonant and vowel sounds. PMID:21167211

  12. Interactive Sound Propagation using Precomputation and Statistical Approximations

    NASA Astrophysics Data System (ADS)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise from repeated interactions between sound waves and objects in the environment, and in interactive applications they must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects; the matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques, Ambient Reverberance and Aural Proxies, to provide approximate sound propagation effects in real time based on only the portion of the environment immediately visible to the listener. These approaches lie at opposite ends of a spectrum of interactive sound propagation techniques: the first emphasizes accuracy by modeling acoustic interactions between all parts of the scene, while the second emphasizes efficiency by taking only the listener's local environment into account. Both methods have been used to efficiently generate acoustic walkthroughs of architectural models; they have also been integrated into a modern game engine and can enable realistic, interactive sound propagation on commodity desktop PCs.
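    The precompute/run-time split described above can be caricatured in a few lines. This is a toy scalar-energy sketch under assumed names (the real method operates on acoustic radiance, not this simplification): offline, multi-bounce transfer between scene objects is folded into a single matrix; at run time only one matrix-vector product per frame remains.

```python
# Toy sketch: offline, sum the geometric series of single-bounce transfers
# into one matrix T = I + B + B^2 + ... + B^n; at run time, propagate a
# source-energy vector with a single matrix-vector multiply.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def precompute_art(single_bounce, n_orders=4):
    """Offline step: accumulate multi-bounce acoustic transfer."""
    n = len(single_bounce)
    total = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(n_orders):
        term = matmul(term, single_bounce)          # next bounce order B^k
        total = [[total[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return total

def runtime_response(T, source_energy):
    """Per-frame work: one matrix-vector product through the precomputed T."""
    return [sum(T[i][j] * source_energy[j] for j in range(len(source_energy)))
            for i in range(len(T))]
```

    The design choice mirrors the abstract's accuracy/efficiency trade-off: all inter-object coupling is paid for once, offline, so the run-time cost is independent of the number of bounces modeled.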

  13. Differences in chewing sounds of dry-crisp snacks by multivariate data analysis

    NASA Astrophysics Data System (ADS)

    De Belie, N.; Sivertsvik, M.; De Baerdemaeker, J.

    2003-09-01

    Chewing sounds of different types of dry-crisp snacks (two types of potato chips, prawn crackers, cornflakes, and low-calorie snacks made from extruded starch) were analysed to assess differences in sound emission patterns. The emitted sounds were recorded by a microphone placed over the ear canal. The first bite and the first subsequent chew were selected from the time signal, and a fast Fourier transform provided the power spectra. Different multivariate analysis techniques were used for classification of the snack groups, including principal component analysis (PCA) and unfold partial least-squares (PLS) algorithms, as well as multi-way techniques such as three-way PLS, three-way PCA (Tucker3), and parallel factor analysis (PARAFAC) on the first bite and subsequent chew. The models were evaluated by calculating the classification errors and the root mean square error of prediction (RMSEP) for independent validation sets. The logarithm of the power spectra obtained from the chewing sounds could be used successfully to distinguish the different snack groups. When different chewers were used, recalibration of the models was necessary. Multi-way models distinguished between the chewing sounds of different snack groups better than PCA applied to the bite or chew separately, and better than unfold PLS. Of all the three-way models applied, N-PLS with three components showed the best classification capabilities, resulting in classification errors of 14-18%. Most of the incorrect classifications were due to one type of potato chips with a very irregular shape, resulting in wide variation in the emitted sounds.
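    The first processing step described above, turning a bite or chew signal into a log power spectrum for the PCA/PLS classifiers, can be sketched with a naive DFT. The function name and frame handling are illustrative assumptions:

```python
# Sketch of the feature-extraction step (assumed details): compute the power
# spectrum of a short bite/chew frame and take its logarithm, yielding the
# feature vector that multivariate classifiers would then consume.
import cmath
import math

def log_power_spectrum(signal):
    """Log10 power at each DFT bin (naive O(n^2) DFT; fine for short frames)."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2):                       # keep non-redundant bins
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        spectrum.append(math.log10(abs(coeff) ** 2 + 1e-12))  # avoid log(0)
    return spectrum
```

    A crisp snack concentrates more energy in the high-frequency bins than a soft one, which is what the downstream PCA and PLS models exploit.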

  14. Inexpensive Instruments for a Sound Unit

    ERIC Educational Resources Information Center

    Brazzle, Bob

    2011-01-01

    My unit on sound and waves is embedded within a long-term project in which my high school students construct a musical instrument out of common materials. The unit culminates with a performance assessment: students play the first four measures of "Somewhere Over the Rainbow"--chosen because of the octave interval of the first two notes--in the key…

  15. Sound Recordings and the Library. Occasional Papers Number 179.

    ERIC Educational Resources Information Center

    Almquist, Sharon G.

    The basic concept that sound waves could be traced or recorded on a solid object was developed separately by Leon Scott, Charles Cros, and Thomas Alva Edison between 1857 and 1877 and, by 1890, the foundation of the present-day commercial record industry was established. Although cylinders were the first sound recordings to be sold commercially,…

  16. Fourth-order acoustic torque in intense sound fields

    NASA Technical Reports Server (NTRS)

    Wang, T. G.; Kanber, H.; Olli, E. E.

    1978-01-01

    The observation of a fourth-order acoustic torque in intense sound fields is reported. The torque was determined by measuring the acoustically induced angular deflection of a polished cylinder suspended by a torsion fiber. This torque was measured in a sound field of amplitude greater than that in which first-order acoustic torque has been observed.

  17. Monkeys Match and Tally Quantities across Senses

    ERIC Educational Resources Information Center

    Jordan, Kerry E.; MacLean, Evan L.; Brannon, Elizabeth M.

    2008-01-01

    We report here that monkeys can actively match the number of sounds they hear to the number of shapes they see and present the first evidence that monkeys sum over sounds and sights. In Experiment 1, two monkeys were trained to choose a simultaneous array of 1-9 squares that numerically matched a sample sequence of shapes or sounds. Monkeys…

  18. 49 CFR 325.59 - Measurement procedure; stationary test.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... made of the sound level generated by a stationary motor vehicle as follows: (a) Park the motor vehicle... open throttle. Return the engine's speed to idle. (e) Observe the maximum reading on the sound level... this section until the first two maximum sound level readings that are within 2 dB(A) of each other are...

  19. Auditory and visual localization accuracy in young children and adults.

    PubMed

    Martin, Karen; Johnstone, Patti; Hedrick, Mark

    2015-06-01

    This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision, in order to determine the extent to which age, type of stimuli, and stimulus order affect sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults, and a second mixed design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound. Localization accuracy was significantly better for light stimuli than sound stimuli for both children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year-olds. Adults performed better on the sound localization task when the light localization task occurred first. Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may reflect differences in sensory modality development and/or central processes in young children, compared to adults. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Developmental Changes in Locating Voice and Sound in Space

    PubMed Central

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  1. Acquisition of Japanese contracted sounds in L1 phonology

    NASA Astrophysics Data System (ADS)

    Tsurutani, Chiharu

    2002-05-01

    Japanese possesses a group of palatalized consonants, known to Japanese scholars as the contracted sounds, [CjV]. English learners of Japanese appear to treat them initially as consonant + glide clusters, where there is an equivalent [Cj] cluster in English, or otherwise tend to insert an epenthetic vowel [CVjV]. The acquisition of the Japanese contracted sounds by first language (L1) learners has not been widely studied compared with the consonant clusters in English with which they bear a close phonetic resemblance but have quite a different phonological status. This is a study to investigate the L1 acquisition process of the Japanese contracted sounds (a) in order to observe how the palatalization gesture is acquired in Japanese and (b) to investigate differences in the sound acquisition processes of first and second language (L2) learners: Japanese children compared with English learners. To do this, the productions of Japanese children ranging in age from 2.5 to 3.5 years were transcribed and the pattern of misproduction was observed.

  2. The Development of Infants’ use of Property-poor Sounds to Individuate Objects

    PubMed Central

    Wilcox, Teresa; Smith, Tracy R.

    2010-01-01

    There is evidence that infants as young as 4.5 months use property-rich but not property-poor sounds as the basis for individuating objects (Wilcox et al., 2006). The current research sought to identify the age at which infants demonstrate the capacity to use property-poor sounds. Using the task of Wilcox et al., infants aged 7 and 9 months were tested. The results revealed that 9- but not 7-month-olds demonstrated sensitivity to property-poor sounds (electronic tones) in an object individuation task. Additional results confirmed that the younger infants were sensitive to property-rich sounds (rattle sounds). These are the first positive results obtained with property-poor sounds in infants and lay the foundation for future research to identify the underlying basis for the developmental hierarchy favoring property-rich over property-poor sounds and possible mechanisms for change. PMID:20701977

  3. Airborne sound transmission loss characteristics of wood-frame construction

    NASA Astrophysics Data System (ADS)

    Rudder, F. F., Jr.

    1985-03-01

    This report summarizes the available data on the airborne sound transmission loss properties of wood-frame construction and evaluates the methods for predicting the airborne sound transmission loss. The first part of the report comprises a summary of sound transmission loss data for wood-frame interior walls and floor-ceiling construction. Data bases describing the sound transmission loss characteristics of other building components, such as windows and doors, are discussed. The second part of the report presents the prediction of the sound transmission loss of wood-frame construction. Appropriate calculation methods are described both for single-panel and for double-panel construction with sound absorption material in the cavity. With available methods, single-panel construction and double-panel construction with the panels connected by studs may be adequately characterized. Technical appendices are included that summarize laboratory measurements, compare measurement with theory, describe details of the prediction methods, and present sound transmission loss data for common building materials.
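
    As a rough illustration of the kind of single-panel prediction such reports evaluate, the classical field-incidence mass law estimates transmission loss from surface density and frequency alone. This is a generic textbook approximation, not this report's specific calculation method, and it ignores stiffness and coincidence effects:

```python
import math

def mass_law_tl(surface_density_kg_m2, freq_hz):
    """Field-incidence mass-law estimate of single-panel transmission
    loss: TL ~ 20*log10(m*f) - 47 dB, with m in kg/m^2 and f in Hz.
    A first approximation only; real panels deviate near the
    coincidence frequency and at low frequencies."""
    return 20.0 * math.log10(surface_density_kg_m2 * freq_hz) - 47.0

# Gypsum board with an assumed surface density of 12 kg/m^2, at 500 Hz:
print(round(mass_law_tl(12.0, 500.0), 1))  # -> 28.6
```

    Note the characteristic behavior: doubling either mass or frequency raises the estimate by about 6 dB.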

  4. NEW THORACIC MURMURS, WITH TWO NEW INSTRUMENTS, THE REFRACTOSCOPE AND THE PARTIAL STETHOSCOPE

    PubMed Central

    Parker, Frederick D.

    1918-01-01

    1. An understanding of the physics of sound is essential for a better comprehension of refined auscultation, tone analysis, and the use of these instruments. 2. The detection of variations of the third heart sound should prove a valuable aid in predicting mitral disease. 3. The variations of the outflow sound should prove a valuable aid in determining early aortic lesions with the type of accompanying intimal changes. 4. The character of chamber timbre as distinct from loudness heard as the first and second heart sounds denotes more often the condition of heart muscle, and must not be confounded with valvular disease. 5. The full significance of sound shadows is uncertain. Cardiac sound shadows appear normally in the right axilla and below the left clavicle. Their mode of production is quite clear. 6. Both the third heart sound and the outflow sound may be heard with the ordinary stethoscope. PMID:19868281

  5. Electronic filters, hearing aids and methods

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor); Zheng, Baohua (Inventor)

    1991-01-01

    An electronic filter for an electroacoustic system. The system has a microphone for generating an electrical output from external sounds and an electrically driven transducer for emitting sound. Some of the sound emitted by the transducer returns to the microphone means to add a feedback contribution to its electrical output. The electronic filter includes a first circuit for electronic processing of the electrical output of the microphone to produce a filtered signal. An adaptive filter, interconnected with the first circuit, performs electronic processing of the filtered signal to produce an adaptive output to the first circuit to substantially offset the feedback contribution in the electrical output of the microphone, and the adaptive filter includes means for adapting only in response to polarities of signals supplied to and from the first circuit. Other electronic filters for hearing aids, public address systems and other electroacoustic systems, as well as such systems, and methods of operating them are also disclosed.

  6. Onomatopeya, Derivacion y el Sufijo -azo. (Onomatopeia, Derivation, and the Suffix -azo).

    ERIC Educational Resources Information Center

    Corro, Raymond L.

    1985-01-01

    The nature and source of onomatopeic words in Spanish are discussed in order of decreasing resemblance to the sound imitated. The first group of onomatopeic words are the interjections, in which sound effects and animal sounds are expressed. Repetition is often used to enhance the effect. The second group includes verbs and nouns derived from the…

  7. Sound. Physical Science in Action[TM]. Schlessinger Science Library. [Videotape].

    ERIC Educational Resources Information Center

    2000

    A door closes. A horn beeps. A crowd roars. Sound waves travel outward in all directions from the source. They can all be heard, but how? Did they travel directly to the ears? Perhaps they bounced off another object first or traveled through a different medium, changing speed along the way. Students learn how sound waves travel and about their…

  8. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location

    PubMed Central

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-01-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location. PMID:18566808

  9. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location.

    PubMed

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-08-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here, we studied, instead, whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array) or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d') for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location.

  10. Acoustical measurements of sound fields between the stage and the orchestra pit inside an historical opera house

    NASA Astrophysics Data System (ADS)

    Sato, Shin-Ichi; Prodi, Nicola; Sakai, Hiroyuki

    2004-05-01

    To clarify the relationship of the sound fields between the stage and the orchestra pit, we conducted acoustical measurements in a typical historical opera house, the Teatro Comunale of Ferrara, Italy. Orthogonal factors based on the theory of subjective preference and other related factors were analyzed. First, the sound fields for a singer on the stage in relation to the musicians in the pit were analyzed. And then, the sound fields for performers in the pit in relation to the singers on the stage were considered. Because physical factors vary depending on the location of the sound source, performers can move on the stage or in the pit to find the preferred sound field.

  11. [Realization of Heart Sound Envelope Extraction Implemented on LabVIEW Based on Hilbert-Huang Transform].

    PubMed

    Tan, Zhixiang; Zhang, Yi; Zeng, Deping; Wang, Hua

    2015-04-01

    We propose a heart sound envelope extraction system in this paper. The system was implemented in LabVIEW based on the Hilbert-Huang transform (HHT). We first used a sound card to collect the heart sound, and then implemented the complete program of signal acquisition, preprocessing, and envelope extraction in LabVIEW based on the theory of HHT. Finally, we used a case study to show that the system could easily collect the heart sound, preprocess it, and extract the envelope. The system retains and displays the characteristics of the heart sound envelope well, and its program and methods are applicable to other research areas, such as vibration and voice analysis.
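
    The Hilbert step of the HHT pipeline can be sketched independently of LabVIEW: the amplitude envelope is the magnitude of the analytic signal. A minimal NumPy version follows; the full method would first decompose the heart sound into intrinsic mode functions via empirical mode decomposition, which is omitted here:

```python
import numpy as np

def envelope(x):
    """Amplitude envelope as the magnitude of the analytic signal,
    computed with an FFT-based Hilbert transform. This is only the
    Hilbert step of the Hilbert-Huang approach; the empirical mode
    decomposition stage is omitted."""
    x = np.asarray(x, dtype=float)
    n = x.size
    spectrum = np.fft.fft(x)
    weights = np.zeros(n)          # zero out negative frequencies,
    weights[0] = 1.0               # double positive ones
    if n % 2 == 0:
        weights[n // 2] = 1.0
        weights[1:n // 2] = 2.0
    else:
        weights[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * weights)
    return np.abs(analytic)

# A 10 Hz tone with a slowly ramping amplitude: the envelope should
# track the ramp away from the window edges.
t = np.linspace(0.0, 2.0, 2000, endpoint=False)
amp = 0.5 + 0.25 * t
sig = amp * np.sin(2 * np.pi * 10.0 * t)
env = envelope(sig)
print(np.max(np.abs(env[200:-200] - amp[200:-200])) < 0.05)
```

    For a heart sound, the same operation applied after EMD yields the smooth envelope whose peaks mark the first and second heart sounds.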

  12. Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.

    PubMed

    Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi

    2015-08-01

    The aim was to investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD). Three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD were studied. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients, and all showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds and environmental sounds and in sound lateralization, as well as strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of the three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia, not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but show no abnormalities in pure tone audiometry and/or ABR, this should not be diagnosed immediately as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  13. First-impression bias effects on mismatch negativity to auditory spatial deviants.

    PubMed

    Fitzgerald, Kaitlin; Provost, Alexander; Todd, Juanita

    2018-04-01

    Internal models of regularities in the world serve to facilitate perception as redundant input can be predicted and neural resources conserved for that which is new or unexpected. In the auditory system, this is reflected in an evoked potential component known as mismatch negativity (MMN). MMN is elicited by the violation of an established regularity to signal the inaccuracy of the current model and direct resources to the unexpected event. Prevailing accounts suggest that MMN amplitude will increase with stability in regularity; however, observations of first-impression bias contradict stability effects. If tones rotate probabilities as a rare deviant (p = .125) and common standard (p = .875), MMN elicited to the initial deviant tone reaches maximal amplitude faster than MMN to the first standard when later encountered as deviant-a differential pattern that persists throughout rotations. Sensory inference is therefore biased by longer-term contextual information beyond local probability statistics. Using the same multicontext sequence structure, we examined whether this bias generalizes to MMN elicited by spatial sound cues using monaural sounds (n = 19, right first deviant and n = 22, left first deviant) and binaural sounds (n = 19, right first deviant). The characteristic differential modulation of MMN to the two tones was observed in two of three groups, providing partial support for the generalization of first-impression bias to spatially deviant sounds. We discuss possible explanations for its absence when the initial deviant was delivered monaurally to the right ear. © 2017 Society for Psychophysiological Research.

  14. Angry Birds in Space

    NASA Astrophysics Data System (ADS)

    Halford, A. J.

    2017-12-01

    When space computers first started listening into space radio, they noticed that there were radio noises that happened on the morning side of the Earth. Because these waves sounded like noises birds make in the morning, we named these waves after them. These bird sounding waves can move around the Earth, flying up and down, and sometimes move into an area where there is more stuff. This area is also much colder than where these bird noises are first made. When the waves move into this cold area where there is more stuff, they start to sound like angry birds instead of happy birds. Both of these waves, the happy and angry bird sounding waves, are very important to our understanding of how the tiny things in space move and change. Sometimes the waves which sound like birds can push these tiniest of things into the sky. The happy bird sounding waves can push the tiniest things quickly while the angry bird sounding waves push the tinest of things more slowly. When the tiny things fall into the sky, they create beautiful space lights and light that burns which can hurt people in up goers and not so up goers as well as our things like phones, and space computers. We study these waves that sound like birds to better understand when and where the tiny things will fall. That way we can be prepared and enjoy watching the pretty space lights at night with no worries.

  15. Toward blind removal of unwanted sound from orchestrated music

    NASA Astrophysics Data System (ADS)

    Chang, Soo-Young; Chun, Joohwan

    2000-11-01

    The problem addressed in this paper is the removal of unwanted sounds from music. The sound to be removed could be a disturbance such as a cough. We present some preliminary results on this problem using statistical properties of the signals. Our approach consists of three steps. We first estimate the fundamental frequencies and partials from the noise-corrupted music; this gives us an autoregressive (AR) model of the music. We then filter the noise-corrupted sound using the AR parameters, and the filtered signal is subtracted from the original noise-corrupted signal to obtain the disturbance. Finally, the obtained disturbance is used as a reference signal to eliminate the disturbance from the noise-corrupted music signal. The above three steps are carried out recursively using a sliding window or an infinitely growing window with an appropriate forgetting factor.
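
    A toy illustration of the AR-modeling idea: fit AR coefficients to the corrupted signal via the Yule-Walker equations, then use the one-step prediction residual to expose the disturbance. This is a simplified sketch under synthetic data, not the authors' recursive sliding-window implementation:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients a[0..order-1] from the sample
    autocorrelation (Yule-Walker equations), so that
    x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order]."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def ar_residual(x, a):
    """One-step AR prediction error; samples the model cannot predict
    (e.g. a cough-like click) show up as large residuals."""
    p = len(a)
    pred = np.zeros_like(x)
    for n in range(p, len(x)):
        pred[n] = np.dot(a, x[n - p:n][::-1])
    return x - pred

# Synthetic 'music' (a sinusoid, well modeled by AR(2)) with a
# click-like disturbance added at sample 300.
t = np.arange(600)
x = np.sin(2 * np.pi * 0.05 * t)
x[300] += 2.0
a = yule_walker(x, order=2)
res = np.abs(ar_residual(x, a))
peak = int(np.argmax(res))
print(300 <= peak <= 302)  # the residual peaks at/just after the click
```

    In the paper's scheme this residual-derived disturbance estimate would then serve as the reference input to an adaptive canceller.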

  16. First Contemporary Case of Human Infection with Cryptococcus gattii in Puget Sound: Evidence for Spread of the Vancouver Island Outbreak▿

    PubMed Central

    Upton, Arlo; Fraser, James A.; Kidd, Sarah E.; Bretz, Camille; Bartlett, Karen H.; Heitman, Joseph; Marr, Kieren A.

    2007-01-01

    We report a case of cryptococcosis due to C. gattii which appears to have been acquired in the Puget Sound region, Washington State. Genotyping confirmed identity to the predominant Vancouver Island genotype. This is the first documented case of human disease by the major Vancouver Island emergence strain acquired within the United States. PMID:17596366

  17. Control of Toxic Chemicals in Puget Sound, Phase 3: Study Of Atmospheric Deposition of Air Toxics to the Surface of Puget Sound

    DTIC Science & Technology

    2007-01-01

    ...deposition directly to Puget Sound was an important source of PAHs, polybrominated diphenyl ethers (PBDEs), and heavy metals. In most cases, atmospheric... versus Atmospheric Fluxes... PAH Source Apportionment... temperature inversions) on air quality during the wet season. A semi-quantitative apportionment study permitted a first-order characterization of source...

  18. 33 CFR 3.05-35 - Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Inspection Zone and Captain of the Port Zone. 3.05-35 Section 3.05-35 Navigation and Navigable Waters COAST... ZONES, AND CAPTAIN OF THE PORT ZONES First Coast Guard District § 3.05-35 Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone. Sector Long Island Sound's office is located in New...

  19. 33 CFR 3.05-35 - Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Inspection Zone and Captain of the Port Zone. 3.05-35 Section 3.05-35 Navigation and Navigable Waters COAST... ZONES, AND CAPTAIN OF THE PORT ZONES First Coast Guard District § 3.05-35 Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone. Sector Long Island Sound's office is located in New...

  20. 33 CFR 3.05-35 - Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Inspection Zone and Captain of the Port Zone. 3.05-35 Section 3.05-35 Navigation and Navigable Waters COAST... ZONES, AND CAPTAIN OF THE PORT ZONES First Coast Guard District § 3.05-35 Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone. Sector Long Island Sound's office is located in New...

  1. 33 CFR 3.05-35 - Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Inspection Zone and Captain of the Port Zone. 3.05-35 Section 3.05-35 Navigation and Navigable Waters COAST... ZONES, AND CAPTAIN OF THE PORT ZONES First Coast Guard District § 3.05-35 Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone. Sector Long Island Sound's office is located in New...

  2. 33 CFR 3.05-35 - Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Inspection Zone and Captain of the Port Zone. 3.05-35 Section 3.05-35 Navigation and Navigable Waters COAST... ZONES, AND CAPTAIN OF THE PORT ZONES First Coast Guard District § 3.05-35 Sector Long Island Sound Marine Inspection Zone and Captain of the Port Zone. Sector Long Island Sound's office is located in New...

  3. Study of environmental sound source identification based on hidden Markov model for robust speech recognition

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2003-10-01

    Humans communicate with each other through speech by focusing on the target speech among environmental sounds in real acoustic environments. We can easily identify the target sound from other environmental sounds. For hands-free speech recognition, the identification of the target speech from environmental sounds is imperative. This mechanism may also be important for a self-moving robot to sense the acoustic environments and communicate with humans. Therefore, this paper first proposes hidden Markov model (HMM)-based environmental sound source identification. Environmental sounds are modeled by three states of HMMs and evaluated using 92 kinds of environmental sounds. The identification accuracy was 95.4%. This paper also proposes a new HMM composition method that composes speech HMMs and an HMM of categorized environmental sounds for robust environmental sound-added speech recognition. As a result of the evaluation experiments, we confirmed that the proposed HMM composition outperforms the conventional HMM composition with speech HMMs and a noise (environmental sound) HMM trained using noise periods prior to the target speech in a captured signal. [Work supported by Ministry of Public Management, Home Affairs, Posts and Telecommunications of Japan.]
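
    The identification step can be illustrated with a minimal discrete-observation HMM: score a sequence under each candidate sound model with the forward algorithm and pick the most likely. The toy models and two-symbol alphabet below are illustrative only; the paper's three-state HMMs operate on acoustic feature vectors:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm with per-step scaling.
    pi: (S,) initial probabilities, A: (S,S) transition matrix,
    B: (S,V) emission probabilities, obs: list of symbol indices."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

# Two toy 3-state models; classify a sequence by the higher likelihood.
pi = np.array([1.0, 0.0, 0.0])
A  = np.array([[0.8, 0.2, 0.0],
               [0.0, 0.8, 0.2],
               [0.0, 0.0, 1.0]])      # left-to-right, as in sound HMMs
B_bell  = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
B_noise = np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])
seq = [0, 0, 0, 1, 1, 1]              # 'low' symbols then 'high' symbols
print(forward_loglik(seq, pi, A, B_bell) >
      forward_loglik(seq, pi, A, B_noise))
```

    Classification then amounts to an argmax of these log-likelihoods over the trained environmental-sound models.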

  4. A Series of Case Studies of Tinnitus Suppression With Mixed Background Stimuli in a Cochlear Implant

    PubMed Central

    Keiner, A. J.; Walker, Kurt; Deshpande, Aniruddha K.; Witt, Shelley; Killian, Matthijs; Ji, Helena; Patrick, Jim; Dillier, Norbert; van Dijk, Pim; Lai, Wai Kong; Hansen, Marlan R.; Gantz, Bruce

    2015-01-01

    Purpose Background sounds provided by a wearable sound playback device were mixed with the acoustical input picked up by a cochlear implant speech processor in an attempt to suppress tinnitus. Method First, patients were allowed to listen to several sounds and to select up to 4 sounds that they thought might be effective. These stimuli were programmed to loop continuously in the wearable playback device. Second, subjects were instructed to use 1 background sound each day on the wearable device, and they sequenced the selected background sounds during a 28-day trial. Patients were instructed to go to a website at the end of each day and rate the loudness and annoyance of the tinnitus as well as the acceptability of the background sound. Patients completed the Tinnitus Primary Function Questionnaire (Tyler, Stocking, Secor, & Slattery, 2014) at the beginning of the trial. Results Results indicated that background sounds were very effective at suppressing tinnitus. There was considerable variability in sounds preferred by the subjects. Conclusion The study shows that a background sound mixed with the microphone input can be effective for suppressing tinnitus during daily use of the sound processor in selected cochlear implant users. PMID:26001407

  5. Konstantinov effect in helium II

    NASA Astrophysics Data System (ADS)

    Melnikovsky, L. A.

    2008-04-01

    The reflection of first and second sound waves by a rigid flat wall in helium II is considered. A nontrivial dependence of the reflection coefficients on the angle of incidence is obtained. Sound conversion is predicted at oblique incidence.

  6. Low-pass filtering of noisy field Schlumberger sounding curves. Part II: Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, N.; Wadhwa, R.S.; Shrotri, B.S.

    1986-02-01

    The basic principles of the application of the linear system theory for smoothing noise-degraded d.c. geoelectrical sounding curves were recently established by Patella. A field Schlumberger sounding is presented to demonstrate first their application and validity. To achieve this purpose, firstly it is pointed out that the required smoothing or low-pass filtering can be considered as an intrinsic property of the transformation of original Schlumberger sounding curves into pole-pole (two-electrode) curves. Then the authors sketch a numerical algorithm to perform the transformation, opportunely modified from a known procedure for transforming dipole diagrams into Schlumberger ones. Finally they show a fieldmore » example with the double aim of demonstrating (i) the high quality of the low-pass filtering, and (ii) the reliability of the transformed pole-pole curve as far as quantitative interpretation is concerned.« less

  7. Sound Wave Energy Resulting from the Impact of Water Drops on the Soil Surface

    PubMed Central

    Ryżak, Magdalena; Bieganowski, Andrzej; Korbiel, Tomasz

    2016-01-01

    The splashing of water drops on a soil surface is the first step of water erosion. There have been many investigations into splashing; most are based on recording and analysing images taken with high-speed cameras, or measuring the mass of the soil moved by splashing. Here, we present a new aspect of the splash phenomenon’s characterization: the measurement of the sound pressure level and the sound energy of the wave that propagates in the air. The measurements were carried out for 10 consecutive water drop impacts on the soil surface. Three soils were tested (Endogleyic Umbrisol, Fluvic Endogleyic Cambisol and Haplic Chernozem) with four initial moisture levels (pressure heads: 0.1 kPa, 1 kPa, 3.16 kPa and 16 kPa). We found that the values of the sound pressure and sound wave energy were dependent on the particle size distribution of the soil, less dependent on the initial pressure head, and practically the same for subsequent water drops (from the first to the tenth drop). The highest sound pressure level (and the greatest variability) was for Endogleyic Umbrisol, which had the highest sand fraction content. The sound pressure for this soil increased from 29 dB to 42 dB with successive drops falling on the sample. The smallest (and the lowest variability) was for Fluvic Endogleyic Cambisol, which had the highest clay fraction. For all experiments the sound pressure level ranged from ~27 to ~42 dB and the energy emitted in the form of sound waves was within the range of 0.14 μJ to 5.26 μJ. This was from 0.03 to 1.07% of the energy of the incident drops. PMID:27388276
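
    The reported decibel values follow from the standard definition of sound pressure level relative to 20 µPa in air. A small sketch; the example RMS pressure is a hypothetical value chosen to land near the 42 dB maximum reported above:

```python
import math

P_REF = 20e-6  # standard reference pressure in air, 20 micropascals

def sound_pressure_level(pressure_pa):
    """Sound pressure level in dB re 20 uPa from an RMS pressure in Pa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# An RMS pressure of 2.5e-3 Pa (hypothetical) corresponds to ~42 dB,
# roughly the loudest splash level reported for the sandy soil.
print(round(sound_pressure_level(2.5e-3), 1))  # -> 41.9
```

    The same logarithmic scale explains why the 29-to-42 dB rise over successive drops represents a roughly twenty-fold increase in sound power.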

  8. Sound Wave Energy Resulting from the Impact of Water Drops on the Soil Surface.

    PubMed

    Ryżak, Magdalena; Bieganowski, Andrzej; Korbiel, Tomasz

    2016-01-01

    The splashing of water drops on a soil surface is the first step of water erosion. There have been many investigations into splashing; most are based on recording and analysing images taken with high-speed cameras, or measuring the mass of the soil moved by splashing. Here, we present a new aspect of the splash phenomenon's characterization: the measurement of the sound pressure level and the sound energy of the wave that propagates in the air. The measurements were carried out for 10 consecutive water drop impacts on the soil surface. Three soils were tested (Endogleyic Umbrisol, Fluvic Endogleyic Cambisol and Haplic Chernozem) with four initial moisture levels (pressure heads: 0.1 kPa, 1 kPa, 3.16 kPa and 16 kPa). We found that the values of the sound pressure and sound wave energy were dependent on the particle size distribution of the soil, less dependent on the initial pressure head, and practically the same for subsequent water drops (from the first to the tenth drop). The highest sound pressure level (and the greatest variability) was for Endogleyic Umbrisol, which had the highest sand fraction content. The sound pressure for this soil increased from 29 dB to 42 dB with successive drops falling on the sample. The smallest (and the lowest variability) was for Fluvic Endogleyic Cambisol, which had the highest clay fraction. For all experiments the sound pressure level ranged from ~27 to ~42 dB and the energy emitted in the form of sound waves was within the range of 0.14 μJ to 5.26 μJ. This was from 0.03 to 1.07% of the energy of the incident drops.

  9. Situational Lightning Climatologies for Central Florida: Phase III

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III

    2008-01-01

    This report describes work done by the Applied Meteorology Unit (AMU) to add composite soundings to the Advanced Weather Interactive Processing System (AWIPS). This allows National Weather Service (NWS) forecasters to compare the current atmospheric state with climatology. In a previous phase, the AMU created composite soundings for four rawinsonde observation stations in Florida, for each of eight flow regimes. The composite soundings were delivered to the NWS Melbourne (MLB) office for display using the NSHARP software program. NWS MLB requested that the AMU make the composite soundings available for display in AWIPS. The AMU first created a procedure to customize AWIPS so composite soundings could be displayed. A unique four-character identifier was created for each of the 32 composite soundings. The AMU wrote a Tool Command Language/Toolkit (Tcl/Tk) software program to convert the composite soundings from NSHARP to Network Common Data Form (NetCDF) format. The NetCDF files were then displayable by AWIPS.

  10. Evaluating signal-to-noise ratios, loudness, and related measures as indicators of airborne sound insulation.

    PubMed

    Park, H K; Bradley, J S

    2009-09-01

    Subjective ratings of the audibility, annoyance, and loudness of music and speech sounds transmitted through 20 different simulated walls were used to identify better single number ratings of airborne sound insulation. The first part of this research considered standard measures such as the sound transmission class, the weighted sound reduction index (R(w)), and variations of these measures [H. K. Park and J. S. Bradley, J. Acoust. Soc. Am. 126, 208-219 (2009)]. This paper considers a number of other measures including signal-to-noise ratios related to the intelligibility of speech and measures related to the loudness of sounds. An exploration of the importance of the included frequencies showed that the optimum ranges of included frequencies were different for speech and music sounds. Measures related to speech intelligibility were useful indicators of responses to speech sounds but were not as successful for music sounds. A-weighted level differences, signal-to-noise ratios and an A-weighted sound transmission loss measure were good predictors of responses when the included frequencies were optimized for each type of sound. The addition of new spectrum adaptation terms to R(w) values was found to be the most practical approach for achieving more accurate predictions of subjective ratings of transmitted speech and music sounds.
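
    The A-weighted measures discussed here rest on the standard IEC 61672 A-weighting curve, which can be evaluated directly from its closed form. A sketch of the generic formula (not the authors' code):

```python
import math

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB at frequency f (Hz): the gain applied
    to a band before summing levels into an A-weighted total."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2))
    return 20.0 * math.log10(ra) + 2.00

# A-weighting is ~0 dB at 1 kHz and strongly attenuates low frequencies,
# which is why the optimal included-frequency range matters.
print(round(a_weighting_db(1000.0)))  # ~0
print(round(a_weighting_db(100.0)))   # ~-19
```

    An A-weighted level difference between source and receiving rooms is then just the difference of levels computed with this weighting applied per band.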

  11. Structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayer, M. Ehtesham; Povinelli, Louis A.

    1994-01-01

The present paper explores the use of large-eddy simulations as a tool for predicting noise from first principles. A high-order numerical scheme is used to perform large-eddy simulations of a supersonic jet flow with emphasis on capturing the time-dependent flow structure representing the sound source. The wavelike nature of this structure under random inflow disturbances is demonstrated. This wavelike structure is then enhanced by taking the inflow disturbances to be purely harmonic. Application of Lighthill's theory to calculate the far-field noise, with the sound source obtained from the calculated time-dependent near field, is demonstrated. Alternative approaches to coupling the near-field sound source to the far-field sound are discussed.

  12. Mapping the sound field of an erupting submarine volcano using an acoustic glider.

    PubMed

    Matsumoto, Haru; Haxel, Joseph H; Dziak, Robert P; Bohnenstiehl, Delwayne R; Embley, Robert W

    2011-03-01

An underwater glider with an acoustic data logger flew toward a recently discovered erupting submarine volcano in the northern Lau basin. With the volcano providing a wide-band sound source, recordings from the two-day survey produced a two-dimensional sound level map spanning 1 km (depth) × 40 km (distance). The observed sound field shows depth- and range-dependence, with the first-order spatial pattern being consistent with the predictions of a range-dependent propagation model. The results allow constraining the acoustic source level of the volcanic activity and suggest that the glider provides an effective platform for monitoring natural and anthropogenic ocean sounds. © 2011 Acoustical Society of America

  13. Stridulatory sound-production and its function in females of the cicada Subpsaltria yangi.

    PubMed

    Luo, Changqing; Wei, Cong

    2015-01-01

Acoustic behavior plays a crucial role in many aspects of cicada biology, such as reproduction and intrasexual competition. Although female sound production has been reported in some cicada species, the acoustic behavior of female cicadas has received little attention. In the cicada Subpsaltria yangi, females possess a pair of unusually well-developed stridulatory organs. Here, sound production and its function in females of this remarkable species were investigated. We revealed that the females could produce sounds by a stridulatory mechanism during pair formation, and that these sounds elicited both acoustic and phonotactic responses from males. In addition, the forewings strike the body during stridulatory sound-producing movements, generating impact sounds. Acoustic playback experiments indicated that the impact sounds played no role in the behavioral context of pair formation. This study provides the first experimental evidence that females of a cicada species can generate sounds by a stridulatory mechanism. We anticipate that our results will promote acoustic studies on females of other cicada species that also possess a stridulatory system.

  14. Active localization of virtual sounds

    NASA Technical Reports Server (NTRS)

    Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.

    1991-01-01

    We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
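The static direction and distance cues described above have simple closed forms. The sketch below assumes a spherical-head (Woodworth) approximation for the interaural time difference and the inverse-first-power pressure law mentioned in the record; the head-radius constant and function names are illustrative assumptions, not values from the display itself.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C
HEAD_RADIUS = 0.0875    # m; a typical adult value (assumption)

def itd_woodworth(azimuth_deg):
    """Interaural time difference (s) for a spherical-head model.

    Woodworth approximation: ITD = (a/c) * (theta + sin(theta)),
    where a is head radius, c the speed of sound, theta the azimuth.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def pressure_gain(distance_m, ref_distance_m=1.0):
    """Amplitude scale factor under the inverse-first-power distance law."""
    return ref_distance_m / distance_m
```

With these numbers a source at 90° azimuth yields an ITD of roughly 0.65 ms, the familiar upper bound for human listeners, and doubling the distance halves the sound pressure.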

  15. Pre-slaughter sound levels and pre-slaughter handling from loading at the farm till slaughter influence pork quality.

    PubMed

    Vermeulen, L; Van de Perre, V; Permentier, L; De Bie, S; Verbeke, G; Geers, R

    2016-06-01

This study investigates the relationship between sound levels, pre-slaughter handling during loading and pork quality. Pre-slaughter variables were investigated from loading till slaughter. A total of 3213 pigs were measured 30 min post-mortem for pH(30LT) (M. Longissimus thoracis). First, a sound level model for the risk to develop PSE meat was established. The difference in maximum and mean sound level during loading, mean sound level during lairage and mean sound level prior to stunning remained significant within the model. This indicated that sound levels during loading had a significant added value to former sound models. Moreover, this study completed the global classification checklist (Vermeulen et al., 2015a) by developing a linear mixed model for pH(30LT) and PSE prevalence, with the difference in maximum and mean sound level measured during loading, the feed withdrawal period and the difference in temperature during loading and lairage. Hence, this study provided new insights beyond previous research, in which loading procedures were not included. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Silver, bighead, and common carp orient to acoustic particle motion when avoiding a complex sound.

    PubMed

    Zielinski, Daniel P; Sorensen, Peter W

    2017-01-01

Behavioral responses of silver carp (Hypopthalmichthys molitrix), bighead carp (H. nobilis), and common carp (Cyprinus carpio) to a complex, broadband sound were tested in the absence of visual cues to determine whether these species are negatively phonotactic and the roles that sound pressure and particle motion might play in mediating this response. In a dark, featureless square enclosure, groups of 3 fish were tracked, and the distance of each fish from speakers and their swimming trajectories relative to sound pressure and particle acceleration were analyzed before, and then while, an outboard motor sound was played. All three species exhibited negative phonotaxis during the first two exposures, after which they ceased responding. The median percent time fish spent near the active speaker for the first two trials decreased from 7.0% to 1.3% for silver carp, 7.9% to 1.1% for bighead carp, and 9.5% to 3.0% for common carp. Notably, when close to the active speaker, fish swam away from the source and maintained a nearly perfect 0° orientation to the axes of particle acceleration. Fish did not enter sound fields greater than 140 dB (ref. 1 μPa). These results demonstrate that carp avoid complex sounds in darkness and that, while initial responses may be informed by sound pressure, sustained oriented avoidance behavior is likely mediated by particle motion. This understanding of how invasive carp use particle motion to guide avoidance could be used to design new acoustic deterrents to divert them in dark, turbid river waters.
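For context, the 140 dB (ref. 1 μPa) figure uses the standard underwater decibel convention, whose reference pressure differs from the 20 μPa used in air. A minimal sketch of that conversion, assuming only the textbook definition (function names are ours, not the study's):

```python
import math

P_REF_WATER = 1e-6  # Pa; underwater reference pressure (1 μPa)

def spl_db_re_1upa(p_rms_pa):
    """Sound pressure level in dB re 1 μPa from an RMS pressure in Pa."""
    return 20.0 * math.log10(p_rms_pa / P_REF_WATER)

def pressure_from_spl(spl_db):
    """Inverse: RMS pressure (Pa) for a given SPL in dB re 1 μPa."""
    return P_REF_WATER * 10.0 ** (spl_db / 20.0)
```

By this convention the 140 dB avoidance boundary reported above corresponds to an RMS pressure of 10 Pa.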

  18. Classification of Respiratory Sounds by Using An Artificial Neural Network

    DTIC Science & Technology

    2001-10-28

CLASSIFICATION OF RESPIRATORY SOUNDS BY USING AN ARTIFICIAL NEURAL NETWORK M.C. Sezgin, Z. Dokur, T. Ölmez, M. Korürek Department of Electronics and...successfully classified by the GAL network. Keywords-Respiratory Sounds, Classification of Biomedical Signals, Artificial Neural Network. I. INTRODUCTION...process, feature extraction, and classification by the artificial neural network. At first, the RS signal obtained from a real-time measurement equipment is

  19. The State of Recorded Sound Preservation in the United States: A National Legacy at Risk in the Digital Age. CLIR Publication No. 148

    ERIC Educational Resources Information Center

    Bamberger, Rob; Brylawski, Sam

    2010-01-01

This is the first comprehensive, national-level study of the state of sound recording preservation ever conducted in the U.S. The authors outline the web of interlocking issues that now threaten the long-term survival of sound recording history. The study finds that major areas of America's recorded sound…

  20. Development and Applications of Technology for Sensing Zooplankton

    DTIC Science & Technology

    2003-09-30

zooplankton-like particles. WORK COMPLETED In support of our first objective, in prior years we occupied sites in both East and West Sound at Orcas...Island in northern Puget Sound, WA. We have also made deployments at four sites on open linear coasts, including one just north of Oceanside, CA (Red...layers. Multi-static, multi-frequency methods Most active bioacoustical methods in oceanography exclusively utilize the sound that is scattered

  1. Separation of acoustic waves in isentropic flow perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henke, Christian, E-mail: christian.henke@atlas-elektronik.com

    2015-04-15

The present contribution investigates the mechanisms of sound generation and propagation in the case of highly-unsteady flows. Based on the linearisation of the isentropic Navier–Stokes equation around a new pathline-averaged base flow, it is demonstrated for the first time that flow perturbations of a non-uniform flow can be split into acoustic and vorticity modes, with the acoustic modes being independent of the vorticity modes. Therefore, we can propose this acoustic perturbation as a general definition of sound. As a consequence of the splitting result, we conclude that the present acoustic perturbation is propagated by the convective wave equation and fulfils Lighthill's acoustic analogy. Moreover, we can define the deviations of the Navier–Stokes equation from the convective wave equation as "true" sound sources. In contrast to other authors, no assumptions on a slowly varying or irrotational flow are necessary. Using a symmetry argument for the conservation laws, an energy conservation result and a generalisation of the sound intensity are provided. - Highlights: • First splitting of non-uniform flows into acoustic and non-acoustic components. • This result leads to a generalisation of sound which is compatible with Lighthill's acoustic analogy. • A closed equation for the generation and propagation of sound is given.

  2. Magnetoencephalographic responses in relation to temporal and spatial factors of sound fields

    NASA Astrophysics Data System (ADS)

    Soeta, Yoshiharu; Nakagawa, Seiji; Tonoike, Mitsuo; Hotehama, Takuya; Ando, Yoichi

    2004-05-01

To establish guidelines based on brain function for designing sound fields such as concert halls and opera houses, the activity of the human brain in response to the temporal and spatial factors of the sound field has been investigated using magnetoencephalography (MEG). MEG is a noninvasive technique for investigating neuronal activity in the human brain. First, the auditory evoked responses to changes in the magnitude of the interaural cross-correlation (IACC) were analyzed. The IACC is a spatial factor that strongly influences the degree of subjective preference and perceived diffuseness of sound fields. The results indicated that the peak amplitude of N1m, which was found over the left and right temporal lobes around 100 ms after stimulus onset, decreased with increasing IACC. Second, the responses corresponding to subjective preference for one of the typical temporal factors, i.e., the initial delay gap between the direct sound and the first reflection, were investigated. The results showed that the effective duration of the autocorrelation function of MEG activity between 8 and 13 Hz became longer during presentation of a preferred stimulus. These results indicate that the brain may relax and repeat a similar temporal rhythm under preferred sound fields.

  3. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  4. Laser microphone

    DOEpatents

    Veligdan, James T.

    2000-11-14

    A microphone for detecting sound pressure waves includes a laser resonator having a laser gain material aligned coaxially between a pair of first and second mirrors for producing a laser beam. A reference cell is disposed between the laser material and one of the mirrors for transmitting a reference portion of the laser beam between the mirrors. A sensing cell is disposed between the laser material and one of the mirrors, and is laterally displaced from the reference cell for transmitting a signal portion of the laser beam, with the sensing cell being open for receiving the sound waves. A photodetector is disposed in optical communication with the first mirror for receiving the laser beam, and produces an acoustic signal therefrom for the sound waves.

  5. [The assessment of subjective distress related to hyperacusis with a self-rating questionnaire on hypersensitivity to sound].

    PubMed

    Nelting, M; Rienhoff, N K; Hesse, G; Lamparter, U

    2002-05-01

So far there has been no adequate measure to assess or illustrate, in terms of distinct levels, subjective distress related to hypersensitivity to sound. The work presented here describes and discusses the construction of a questionnaire to assess subjective distress related to hypersensitivity to sound (GUF). Between May and September 2000, 226 patients who suffered from hypersensitivity to sound as well as from chronic tinnitus completed a first version of the questionnaire on admission to the hospital. Of these patients, 27.9% were out-patients and 72.1% were in-patients. In addition, the in-patients completed the questionnaire again during their last week of treatment. The 27 items of the GUF were subjected to factor analysis to explore and determine the structure of the questionnaire; the number of items was reduced under the aspects of consistency and reliability. Finally, the revised version of the GUF underwent a first validation. The factor analysis shows three factors explaining 50.65% of the variance (factor 1 [KRH], cognitive reactions to hyperacusis; factor 2 [ASV], actional/somatic behaviour; factor 3 [ERG], emotional reaction to external noises). First attempts to validate the questionnaire are promising; it appears that the GUF is also sensitive to therapy effects. The questionnaire presented here is suitable for identifying distinct levels of subjective distress related to hypersensitivity to sound. Thus, for the first time, an adequate assessment measure is available. Furthermore, results from part of the sample show that the GUF is also suitable for therapy evaluation.

  6. S-NPP ATMS Instrument Prelaunch and On-Orbit Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Kim, Edward; Lyu, Cheng-Hsuan; Anderson, Kent; Leslie, Vincent R.; Blackwell, William J.

    2014-01-01

    The first of a new generation of microwave sounders was launched aboard the Suomi-National Polar-Orbiting Partnership satellite in October 2011. The Advanced Technology Microwave Sounder (ATMS) combines the capabilities and channel sets of three predecessor sounders into a single package to provide information on the atmospheric vertical temperature and moisture profiles that are the most critical observations needed for numerical weather forecast models. Enhancements include size/mass/power approximately one third of the previous total, three new sounding channels, the first space-based, Nyquist-sampled cross-track microwave temperature soundings for improved fusion with infrared soundings, plus improved temperature control and reliability. This paper describes the ATMS characteristics versus its predecessor, the advanced microwave sounding unit (AMSU), and presents the first comprehensive evaluation of key prelaunch and on-orbit performance parameters. Two-year on-orbit performance shows that the ATMS has maintained very stable radiometric sensitivity, in agreement with prelaunch data, meeting requirements for all channels (with margins of 40% for channels 1-15), and improvements over AMSU-A when processed for equivalent spatial resolution. The radiometric accuracy, determined by analysis from ground test measurements, and using on-orbit instrument temperatures, also shows large margins relative to requirements (specified as <1.0K for channels 1, 2, and 16-22 and <0.75 K for channels 3-15). A thorough evaluation of the performance of ATMS is especially important for this first proto-flight model unit of what will eventually be a series of ATMS sensors providing operational sounding capability for the U.S. and its international partners well into the next decade.

  7. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  8. Thoracic auscultation in captive bottlenose dolphins (Tursiops truncatus), California sea lions (Zalophus californianus), and South African fur seals (Arctocephalus pusillus) with an electronic stethoscope.

    PubMed

    Scharpegge, Julia; Hartmann, Manuel García; Eulenberger, Klaus

    2012-06-01

Thoracic auscultation is an important diagnostic method used in cases of suspected pulmonary disease in many species, as respiratory sounds contain significant information on the physiology and pathology of the lungs and upper airways. Respiratory diseases are frequent in marine mammals and are often listed as one of their main causes of death. The aim of this study was to investigate and report baseline parameters for the electronic-mediated thoracic auscultation of one cetacean species and two pinniped species in captivity. Respiratory sounds from 20 captive bottlenose dolphins (Tursiops truncatus), 6 California sea lions (Zalophus californianus), and 5 South African fur seals (Arctocephalus pusillus) were recorded with an electronic stethoscope. The sounds were analyzed for duration of the respiratory cycle, adventitious sounds, and peak frequencies of recorded sounds during expiration and inspiration as well as for sound intensity as reflected by waveform amplitude during the respiratory cycle. In respiratory cycles of bottlenose dolphins expiring "on command," the duration of expiration was significantly shorter than that of inspiration. In the examined pinnipeds of this study, there was no clear pattern concerning the duration of one breathing phase. Adventitious sounds were detected most often in bottlenose dolphins that were expiring on command and could be compared with "forced expiratory wheezes" in humans. This is the first report of forced expiratory wheezes in bottlenose dolphins; they can easily be misinterpreted as pathologic respiratory sounds. The peak frequencies of the respiratory sounds reached over 2,000 Hz in bottlenose dolphins and over 1,000 Hz in California sea lions and South African fur seals, but the variation of the frequency spectra was very high in all animals. To the authors' knowledge, this is the first systematic analysis of respiratory sounds of bottlenose dolphins and two species of pinnipeds.

  9. A general introduction to aeroacoustics and atmospheric sound

    NASA Technical Reports Server (NTRS)

    Lighthill, James

    1992-01-01

    A single unifying principle (based upon the nonlinear 'momentum-flux' effects produced when different components of a motion transport different components of its momentum) is used to give a broad scientific background to several aspects of the interaction between airflows and atmospheric sound. First, it treats the generation of sound by airflows of many different types. These include, for example, jet-like flows involving convected turbulent motions (with the resulting aeroacoustic radiation sensitively dependent on the Mach number of convection) and they include, as an extreme case, the supersonic 'boom' (shock waves generated by a supersonically convected flow pattern). Next, an analysis is given of sound propagation through nonuniformly moving airflows, and the exchange is quantified of energy between flow and sound; while, finally, problems are examined of how sound waves 'on their own' may generate the airflows known as acoustic streaming.

  10. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25 m² anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000 m² open field. PMID:24463431
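As a hedged sketch of the underlying principle (not the SoundCompass firmware, which is FPGA-based): a pair of microphones can localize a source from the time difference of arrival (TDOA) estimated by cross-correlation. All names, the sampling rate, and the microphone spacing below are illustrative assumptions.

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Estimate the delay (in samples) of sig_b relative to sig_a
    by locating the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

def bearing_deg(delay_samples, fs, mic_spacing_m, c=343.0):
    """Convert a two-microphone delay into an arrival angle (degrees)
    relative to the array broadside, via the far-field plane-wave model."""
    tau = delay_samples / fs
    s = np.clip(tau * c / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

With 52 microphones, the SoundCompass presumably fuses many such pairwise estimates (or a steered beamformer) into a full directional sound map; this two-channel version only conveys the core geometry.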

  11. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency.

    PubMed

    Branstetter, Brian K; DeLong, Caroline M; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three-alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.

  13. Observing system simulations using synthetic radiances and atmospheric retrievals derived for the AMSU and HIRS in a mesoscale model. [Advanced Microwave Sounding Unit]

    NASA Technical Reports Server (NTRS)

    Diak, George R.; Huang, Hung-Lung; Kim, Dongsoo

    1990-01-01

The paper addresses the concept of synthetic satellite imagery as a visualization and diagnostic tool for understanding satellite sensors of the future, and details preliminary results on the quality of soundings from current sensors. Preliminary results are presented on the quality of soundings from the combination of the High-Resolution Infrared Radiometer Sounder and the Advanced Microwave Sounding Unit. Results are also presented on the first Observing System Simulation Experiment using these data in a mesoscale numerical prediction model.

  14. Ocean Basin Impact of Ambient Noise on Marine Mammal Detectability, Distribution, and Acoustic Communication

    DTIC Science & Technology

    2015-09-30

    soundscapes, and unit of analysis methodology. The study has culminated in a complex analysis of all environmental factors that could be predictors of...regional soundscapes. To build the correlation matrices from ambient sound recordings, the raw data was first converted into a series of sound...sounds. To compare two different soundscape time periods, the correlation matrices for the two periods were then subtracted from each other

  15. Improvisation Begins with Exploration: Giving Students Time to Explore the Sounds They Can Make with Their Instruments and Voices Is the First Step to Helping Them Become Successful Improvisers

    ERIC Educational Resources Information Center

    Volz, Micah D.

    2005-01-01

    Improvisation can be difficult to teach in any music classroom, but it can be particularly problematic in large ensembles like band, chorus, or orchestra. John Kratus proposes seven levels of improvisation, with exploration as the first step in the development of improvisation skills. Through experiences in making sounds, children begin to develop…

  16. Priming Gestures with Sounds

    PubMed Central

    Lemaitre, Guillaume; Heller, Laurie M.; Navolio, Nicole; Zúñiga-Peñaranda, Nicolas

    2015-01-01

    We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary associations between gestures and sounds. Second, the priming effect is greatly reduced for ecologically existing associations and is eliminated for arbitrary associations when the response gesture stops producing the associated sounds. These results provide evidence that auditory-motor priming is mainly created by rapid learning of the association between sounds and the gestures that produce them. Auditory-motor priming is therefore mediated by short-term associations between gestures and sounds that can be readily reconfigured regardless of prior knowledge. PMID:26544884

  17. Hybrid mode-scattering/sound-absorbing segmented liner system and method

    NASA Technical Reports Server (NTRS)

    Walker, Bruce E. (Inventor); Hersh, Alan S. (Inventor); Rice, Edward J. (Inventor)

    1999-01-01

    A hybrid mode-scattering/sound-absorbing segmented liner system and method in which an initial sound field within a duct is steered or scattered into higher-order modes in a first mode-scattering segment such that it is more readily and effectively absorbed in a second sound-absorbing segment. The mode-scattering segment is preferably a series of active control components positioned along the annulus of the duct, each of which includes a controller and a resonator into which a piezoelectric transducer generates the steering noise. The sound-absorbing segment is positioned acoustically downstream of the mode-scattering segment, and preferably comprises a honeycomb-backed passive acoustic liner. The invention is particularly adapted for use in turbofan engines, both in the inlet and exhaust.

  18. Material sound source localization through headphones

    NASA Astrophysics Data System (ADS)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
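    The binaural rendering step described above, convolving a mono source with head-related impulse responses, can be sketched as follows. The impulse responses here are random placeholders, not measured non-individual HRTFs, and the click stands in for the Delta sound generated in audio software.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "Delta" click: a single unit impulse, like the 44.1 kHz click generated
# in audio software for the experiment.
click = np.zeros(256)
click[0] = 1.0

# Placeholder head-related impulse responses (HRIRs); a real renderer would
# load measured non-individual HRIRs for the desired source direction.
hrir_left = rng.normal(0.0, 0.1, 128)
hrir_right = rng.normal(0.0, 0.1, 128)

# Binaural rendering: convolve the mono source with each ear's HRIR.
left = np.convolve(click, hrir_left)
right = np.convolve(click, hrir_right)

# Convolving a unit impulse reproduces the filter itself.
print(np.allclose(left[:128], hrir_left))  # True
```

    A reverberant condition would simply use HRIRs measured in a reverberant room in place of the anechoic ones.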

  19. The correlation between the first heart sound and cardiac output as measured by using digital esophageal stethoscope under anaesthesia.

    PubMed

    Duck Shin, Young; Hoon Yim, Kyoung; Hi Park, Sang; Wook Jeon, Yong; Ho Bae, Jin; Soo Lee, Tae; Hwan Kim, Myoung; Jin Choi, Young

    2014-03-01

    The use of an esophageal stethoscope is a basic heart-sound monitoring procedure performed in patients under general anesthesia. As the amplitude of the first heart sound can reflect left ventricular function, its correlation with cardiac output merits investigation. The aim of this study was to investigate the effects of cardiac output (CO) on the first heart sound (S1) amplitude. Methods: Six male beagles were chosen. The S1 was obtained with the newly developed esophageal stethoscope system. CO was measured using NICOM, a non-invasive CO measuring device. Ephedrine and beta blockers were administered to the subjects to compare changes in the measured values, and the change from using an inhalation anesthetic was also compared. The S1 amplitude displayed a positive correlation with the change rate of CO (r = 0.935, p < 0.001). The heart rate measured using the esophageal stethoscope agreed closely with that from the ECG on a Bland-Altman plot and showed a high positive correlation (r = 0.988, p < 0.001). In beagles, the amplitude of S1 had a significant correlation with changes in CO in a variety of situations.
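    The reported r values are Pearson correlation coefficients; a minimal sketch with made-up paired values (not the study's data) shows the computation behind them.

```python
import numpy as np

# Hypothetical paired measurements (illustrative values, not study data):
# S1 amplitude (arbitrary units) and the change rate of cardiac output (%).
s1_amp = np.array([0.8, 1.0, 1.3, 1.6, 2.1, 2.4])
co_change = np.array([-10.0, -2.0, 5.0, 14.0, 25.0, 31.0])

# Pearson correlation coefficient, the statistic behind the reported r values.
r = np.corrcoef(s1_amp, co_change)[0, 1]
print(round(r, 3))
```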

  20. Sound pressure levels generated at risk volume steps of portable listening devices: types of smartphone and genres of music.

    PubMed

    Kim, Gibbeum; Han, Woojae

    2018-05-01

    The present study estimated the sound pressure levels of various music genres at the volume steps that contemporary smartphones deliver, because these levels put the listener at potential risk of hearing loss. Using six different smartphones (Galaxy S6, Galaxy Note 3, iPhone 5S, iPhone 6, LG G2, and LG G3), the sound pressure levels for three genres of K-pop music (dance-pop, hip-hop, and pop-ballad) and a Billboard pop chart of assorted genres were measured through an earbud, using a sound level meter and an artificial mastoid, at the first risk volume (the step at which the smartphone displays its risk warning) and at each consecutive higher volume. Among the six smartphones, the first risk volume step of the Galaxy S6 had the significantly lowest output level (84.1 dBA) and that of the LG G2 the highest (92.4 dBA). As the volume step increased, so did the sound pressure levels. The iPhone 6 was loudest (113.1 dBA) at the maximum volume step. Of the music genres, dance-pop showed the highest output level (91.1 dBA) across all smartphones. Within the frequency range of 20 to 20,000 Hz, the sound pressure level peaked at 2000 Hz for all the smartphones. The results showed that the sound pressure levels at both the first and the maximum volume step differed across smartphone models and music genres, which means that the risk volume sign and its output levels should be unified across devices for their users. In addition, the risk volume steps proposed by the latest smartphone models are high enough to cause noise-induced hearing loss if users habitually listen to music at those levels.
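    For context, the reported decibel levels rest on the standard sound-pressure-level definition; a minimal sketch follows. Note this is the unweighted SPL, whereas the study's dBA values additionally apply an A-weighting filter.

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals

def spl_db(p_rms: float) -> float:
    """Sound pressure level in dB re 20 uPa (unweighted; reported dBA
    values additionally apply an A-weighting filter)."""
    return 20.0 * math.log10(p_rms / P_REF)

# A tenfold pressure increase adds 20 dB; a doubling adds about 6 dB.
print(round(spl_db(2e-2), 1))  # 60.0 (for 20 mPa RMS)
```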

  1. Adaptation in sound localization processing induced by interaural time difference in amplitude envelope at high frequencies.

    PubMed

    Kawashima, Takayuki; Sato, Takao

    2012-01-01

    When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds to the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.
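    A stimulus of the kind described, an amplitude-modulated high-frequency tone whose envelope carries the ITD, can be sketched as follows. The carrier frequency, modulation rate, and duration are illustrative assumptions; only the ±600 µs envelope ITD is taken from the abstract.

```python
import numpy as np

fs = 48_000            # sampling rate (Hz); an assumption for this sketch
t = np.arange(int(fs * 0.1)) / fs

carrier_hz = 4000.0    # high-frequency carrier, above 2 kHz
mod_hz = 128.0         # envelope (modulation) rate
itd_s = 600e-6         # envelope ITD of 600 microseconds, as for the adapter

# Amplitude-modulated tones whose carriers are identical but whose envelopes
# are delayed between the ears by the ITD -- the only localization cue here.
env_left = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_hz * t))
env_right = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_hz * (t - itd_s)))
carrier = np.sin(2.0 * np.pi * carrier_hz * t)

left = env_left * carrier
right = env_right * carrier
print(left.shape == right.shape)  # True
```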

  2. The 'sail sound' and tricuspid regurgitation in Ebstein's anomaly: the value of echocardiography in evaluating their mechanisms.

    PubMed

    Oki, T; Fukuda, N; Tabata, T; Yamada, H; Manabe, K; Fukuda, K; Abe, M; Iuchi, A; Ito, S

    1997-03-01

    We describe a patient with Ebstein's anomaly in whom Doppler echocardiography was used to clarify the mechanism responsible for 'sail sound' and tricuspid regurgitation associated with this condition. Phonocardiography revealed an additional early systolic heart sound, consisting of a first low-amplitude component (T1) and a second high-amplitude component (T2, 'sail sound'). In simultaneous recordings of the tricuspid valve motion using M mode echocardiography and phonocardiography, the closing of the tricuspid valve occurred with T1 which originated at the tip of the tricuspid leaflets, while T2 originated from the body of the tricuspid leaflets. Using color Doppler imaging, the tricuspid regurgitant signal was detected during pansystole, indicating a blue signal during the phase corresponding to T1 and a mosaic signal during the phase corresponding to T2 at end-systole. Thus, 'sail sound' in patients with Ebstein's anomaly is not simply a closing sound of the tricuspid valve, but a complex closing sound which includes a sudden stopping sound after the anterior and/or other tricuspid leaflets balloon out at systole.

  3. Fluttering wing feathers produce the flight sounds of male streamertail hummingbirds.

    PubMed

    Clark, Christopher James

    2008-08-23

    Sounds produced continuously during flight potentially play important roles in avian communication, but the mechanisms underlying these sounds have received little attention. Adult male Red-billed Streamertail hummingbirds (Trochilus polytmus) bear elongated tail streamers and produce a distinctive 'whirring' flight sound, whereas subadult males and females do not. The production of this sound, which is a pulsed tone with a mean frequency of 858 Hz, has been attributed to these distinctive tail streamers. However, tail-less streamertails can still produce the flight sound. Three lines of evidence implicate the wings instead. First, it is pulsed in synchrony with the 29 Hz wingbeat frequency. Second, a high-speed video showed that primary feather eight (P8) bends during each downstroke, creating a gap between P8 and primary feather nine (P9). Manipulating either P8 or P9 reduced the production of the flight sound. Third, laboratory experiments indicated that both P8 and P9 can produce tones over a range of 700-900 Hz. The wings therefore produce the distinctive flight sound, enabled via subtle morphological changes to the structure of P8 and P9.

  4. Application of a methodology for categorizing and differentiating urban soundscapes using acoustical descriptors and semantic-differential attributes.

    PubMed

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, A F

    2013-07-01

    A subjective and physical categorization of an ambient sound is the first step in evaluating the soundscape and provides a basis for designing or adapting this ambient sound to match people's expectations. For this reason, the main goal of this work is to develop a categorization and differentiation analysis of soundscapes on the basis of acoustical and perceptual variables. A hierarchical cluster analysis, using 15 semantic-differential attributes and acoustical descriptors that include the equivalent sound-pressure level, maximum-minimum sound-pressure level, impulsiveness of the sound-pressure level, sound-pressure level time course, and spectral composition, was conducted to classify soundscapes into different typologies. This analysis identified 15 different soundscape typologies. Furthermore, based on a discriminant analysis of the acoustical descriptors, the crest factor (impulsiveness of the sound-pressure level) and the sound level at 125 Hz were found to be the acoustical variables with the highest impact on the differentiation of the recognized types of soundscapes. Finally, a study was performed to determine how the different soundscape typologies differ from each other, both subjectively and acoustically.

  5. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    The heart sound is one of the most important physiological signals. However, the process of acquiring a heart sound signal can be interfered with by many external factors. The heart sound is a weak electrical signal, and even weak external noise may lead to the misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. As a result, removing the noise mixed with the heart sound is a key step. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB has been carried out. The method first uses the signal processing functions of MATLAB to transform noisy heart sound signals into the wavelet domain and decompose them at multiple levels. Soft thresholding is then applied to the detail coefficients to eliminate noise, significantly improving the denoised signal. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
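    The pipeline described (wavelet decomposition, soft thresholding of detail coefficients, reconstruction) can be sketched outside MATLAB. The one-level Haar transform, threshold value, and synthetic stand-in signal below are illustrative assumptions for the paper's multi-level setup.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform: (approximation, detail)."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(approx, detail):
    """Inverse of haar_dwt."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, thr):
    """Shrink coefficients toward zero; small (noise-dominated) ones vanish."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2.0 * np.pi * 5.0 * t)        # smooth stand-in "heart sound"
noisy = clean + rng.normal(0.0, 0.3, t.size)

approx, detail = haar_dwt(noisy)
denoised = haar_idwt(approx, soft_threshold(detail, 0.5))

# Thresholding the detail band should reduce the error against the clean signal.
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

    A multi-level version repeats the transform on the approximation band and thresholds each detail level, which is what the MATLAB wavelet functions automate.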

  6. Wavelet Packet Entropy for Heart Murmurs Classification

    PubMed Central

    Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Ranga, Sri

    2012-01-01

    Heart murmurs are the first signs of cardiac valve disorders. Several studies have been conducted in recent years to automatically differentiate normal heart sounds from heart sounds with murmurs using various types of audio features. Entropy has been used successfully as a feature to distinguish different heart sounds. In this paper, a new entropy measure was introduced to analyze heart sounds, and the feasibility of using it in the classification of five types of heart sounds and murmurs was shown. The entropy had previously been introduced to analyze mammograms. Four common murmurs were considered: aortic regurgitation, mitral regurgitation, aortic stenosis, and mitral stenosis. The wavelet packet transform was employed for heart sound analysis, and the entropy was calculated to derive feature vectors. Five types of classification were performed to evaluate the discriminatory power of the generated features. The best results were achieved by BayesNet with 96.94% accuracy. The promising results substantiate the effectiveness of the proposed wavelet packet entropy for heart sound classification. PMID:23227043
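    The paper's specific entropy measure (originally introduced for mammograms) is not reproduced here; as a stand-in, the standard Shannon entropy of a normalized subband energy distribution shows why entropy separates spread-out (murmur-like) from concentrated (tonal) coefficient patterns.

```python
import numpy as np

def shannon_entropy(coeffs, eps=1e-12):
    """Shannon entropy (bits) of the normalized coefficient-energy distribution."""
    energy = np.asarray(coeffs, dtype=float) ** 2
    p = energy / (energy.sum() + eps)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy subband coefficients: a murmur-like segment spreads energy over many
# coefficients and thus has higher entropy than a tonal, concentrated one.
tonal = np.zeros(64)
tonal[3] = 1.0
murmur_like = np.ones(64) / 8.0

print(shannon_entropy(tonal) < shannon_entropy(murmur_like))  # True
```

    In the paper's setting, such an entropy is computed per wavelet-packet subband and the values concatenated into the feature vector fed to the classifiers.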

  7. Numerical and Physical Modeling of the Response of Resonator Liners to Intense Sound and Grazing Flow

    NASA Technical Reports Server (NTRS)

    Hersh, Alan S.; Tam, Christopher

    2009-01-01

    Two significant advances have been made in the application of computational aeroacoustics methodology to acoustic liner technology. The first is that temperature effects for discrete sound are not the same as for broadband noise. For discrete sound, the normalized resistance appears to be insensitive to temperature except at high SPL. However, the reactance is lower, significantly lower in absolute value, at high temperature. The second is the numerical investigation of the acoustic performance of a liner by direct numerical simulation. Liner impedance is affected by the non-uniformity of the incident sound waves, which identifies the importance of the pressure gradient. Preliminary one- and two-dimensional design impedance models were developed for designing sound-absorbing liners in the presence of intense sound and grazing flow. The two-dimensional model offers the potential to empirically determine the incident sound pressure at the face-plate distance from resonator orifices. This represents an important initial step in improving our understanding of how to effectively use the Dean two-microphone impedance measurement method.

  8. Propagation of second sound in a superfluid Fermi gas in the unitary limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arahata, Emiko; Nikuni, Tetsuro

    2009-10-15

    We study sound propagation in a uniform superfluid gas of Fermi atoms in the unitary limit. The existence of normal and superfluid components leads to the appearance of two sound modes in the collisional regime, referred to as first and second sound. The second sound is of particular interest as it is a clear signal of a superfluid component. Using Landau's two-fluid hydrodynamic theory, we calculate the hydrodynamic sound velocities and their weights in the density response function. The latter is used to calculate the response to a sudden modification of the external potential generating pulse propagation. The amplitude of a pulse, which is proportional to the weight in the response function, is calculated on the basis of the approach of Nozières and Schmitt-Rink for the BCS-BEC crossover. We show that, in a superfluid Fermi gas at unitarity, the second-sound pulse is excited with an appreciable amplitude by density perturbations.
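    For context, the two sound velocities referred to above take the following form in standard Landau two-fluid hydrodynamics (textbook expressions in the limit where the two modes decouple, not formulas quoted from this paper):

```latex
c_1^2 = \left(\frac{\partial P}{\partial \rho}\right)_{\bar{s}},
\qquad
c_2^2 = \frac{\rho_s}{\rho_n}\,\frac{T\,\bar{s}^{\,2}}{\bar{c}_v},
```

    where $\rho_s$ and $\rho_n$ are the superfluid and normal densities, $\bar{s}$ is the entropy per unit mass, and $\bar{c}_v$ is the specific heat per unit mass. First sound is thus an ordinary pressure (density) wave, while second sound is driven by the entropy carried only by the normal component.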

  9. Perception of binary acoustic events associated with the first heart sound

    NASA Technical Reports Server (NTRS)

    Spodick, D. H.

    1977-01-01

    The resolving power of the auditory apparatus permits discrete vibrations associated with cardiac activity to be perceived as one or more events. Irrespective of the vibratory combinations recorded by conventional phonocardiography, in normal adults and in most adult patients auscultators tend to discriminate only two discrete events associated with the first heart sound S1. It is stressed that the heart sound S4 may be present when a binary acoustic event associated with S1 occurs in the sequence 'low pitched sound preceding high pitched sound', i.e., its components are perceived by auscultation as 'dull-sharp'. The question of S4 audibility arises in those individuals, normal and diseased, in whom the major components of S1 ought to be, at least clinically, at their customary high pitch and indeed on the PCG appear as high frequency oscillations. It is revealed that the apparent audibility of recorded S4 is not related to P-R interval, P-S4 interval, or relative amplitude of S4. The significant S4-LFC (low frequency component of S1) differences can be related to acoustic modification of the early component of S1.

  10. Visualization of Heart Sounds and Motion Using Multichannel Sensor

    NASA Astrophysics Data System (ADS)

    Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko

    2010-06-01

    As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist both doctors and patients in understanding the heartbeat. Auscultatory sounds were first visualized using FFT and wavelet analysis. Next, to show global and simultaneous heart motions, a new visualization technique was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch interval and adhered to the chest surface. One cycle of heart motion was visualized at a sampling frequency of 3 kHz with 12-bit quantization. The visualized results showed the typical waveform motion of the strong pressure shock due to the closing of the tricuspid and mitral valves at the cardiac apex (first sound), followed by the closing of the aortic and pulmonic valves (second sound). To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to the digital database management of auscultation examinations in medical settings.
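    The mapping from the multichannel recording to per-frame images can be sketched as below; the channel ordering (ECG on the last channel) and the placeholder data are assumptions for illustration.

```python
import numpy as np

fs = 3000                      # sampling frequency of the unit (Hz)
rng = np.random.default_rng(0)

# Placeholder recording: 64 channels = 63 accelerometers + 1 ECG. The channel
# ordering (ECG last) and the random data are assumptions, not the device spec.
data = rng.normal(size=(64, fs))
ecg = data[63]
accel = data[:63]

# One image per sample: pad the 63 accelerometer values to 64 cells and lay
# them out on the 8 x 8 grid that mirrors the 20 mm sensor pitch.
frame0 = np.concatenate([accel[:, 0], [0.0]]).reshape(8, 8)
print(frame0.shape)  # (8, 8)
```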

  11. Spread Across Liquids: The World's First Microgravity Combustion Experiment on a Sounding Rocket

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Spread Across Liquids (SAL) experiment characterizes how flames spread over liquid pools in a low-gravity environment in comparison to test data at Earth's gravity and with numerical models. The modeling and experimental data provide a more complete understanding of flame spread, an area of textbook interest, and add to our knowledge about on-orbit and Earthbound fire behavior and fire hazards. The experiment was performed on a sounding rocket to obtain the necessary microgravity period. Such crewless sounding rockets provide a comparatively inexpensive means to fly very complex, and potentially hazardous, experiments and perform reflights at a very low additional cost. SAL was the first sounding-rocket-based, microgravity combustion experiment in the world. It was expected that gravity would affect ignition susceptibility and flame spread through buoyant convection in both the liquid pool and the gas above the pool. Prior to these sounding rocket tests, however, it was not clear whether the fuel would ignite readily and whether a flame would be sustained in microgravity. It also was not clear whether the flame spread rate would be faster or slower than in Earth's gravity.

  12. A unified approach for the spatial enhancement of sound

    NASA Astrophysics Data System (ADS)

    Choi, Joung-Woo; Jang, Ji-Ho; Kim, Yang-Hann

    2005-09-01

    This paper aims to control the sound field spatially, so that the desired or target acoustic variable is enhanced within a zone where a listener is located. This is somewhat analogous to having manipulators that can draw sounds in any place. It also means that one can, in effect, see the controlled shape of the sound field in frequency or in real time. The former assures practical applicability, for example, listening-zone control for music. The latter provides a means of analyzing the sound field. With these considerations, a unified approach is proposed that can enhance selected acoustic variables using multiple sources. Three kinds of acoustic variables related to the magnitude and direction of the sound field are formulated and enhanced. The first, which has to do with the spatial control of acoustic potential energy, enables one to create a zone of loud sound over an area. Alternatively, one can control the directional characteristics of the sound field by controlling the directional energy density, or enhance the magnitude and direction of sound at the same time by controlling the acoustic intensity. Through various examples, it is shown that these acoustic variables can be controlled successfully by the proposed approach.
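    Maximizing the acoustic potential energy in a zone for fixed input power is, for a linear source-to-zone model, an eigenvalue problem; a sketch follows with a random placeholder transfer matrix standing in for computed or measured transfer functions (an illustration of the general idea, not this paper's formulation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: pick complex source weights q maximizing the acoustic potential
# energy |G q|^2 in the listening zone for unit input power |q|^2 = 1; the
# optimum is the principal eigenvector of G^H G. G is a random placeholder
# transfer matrix (zone field points x sources).
G = rng.normal(size=(20, 5)) + 1j * rng.normal(size=(20, 5))

eigvals, eigvecs = np.linalg.eigh(G.conj().T @ G)
q_opt = eigvecs[:, -1]        # unit-norm eigenvector of the largest eigenvalue

# No other unit-norm weighting delivers more energy to the zone:
q_rand = rng.normal(size=5) + 1j * rng.normal(size=5)
q_rand /= np.linalg.norm(q_rand)
print(np.linalg.norm(G @ q_opt) >= np.linalg.norm(G @ q_rand))  # True
```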

  13. Perception of touch quality in piano tones.

    PubMed

    Goebl, Werner; Bresin, Roberto; Fujinaga, Ichiro

    2014-11-01

    Both timbre and dynamics of isolated piano tones are determined exclusively by the speed with which the hammer hits the strings. This physical view has been challenged by pianists who emphasize the importance of the way the keyboard is touched. This article presents empirical evidence from two perception experiments showing that touch-dependent sound components make sounds with identical hammer velocities but produced with different touch forms clearly distinguishable. The first experiment focused on finger-key sounds: musicians could identify pressed and struck touches. When the finger-key sounds were removed from the sounds, the effect vanished, suggesting that these sounds were the primary identification cue. The second experiment looked at key-keyframe sounds that occur when the key reaches key-bottom. Key-bottom impact was identified from key motion measured by a computer-controlled piano. Musicians were able to discriminate between piano tones that contain a key-bottom sound from those that do not. However, this effect might be attributable to sounds associated with the mechanical components of the piano action. In addition to the demonstrated acoustical effects of different touch forms, visual and tactile modalities may play important roles during piano performance that influence the production and perception of musical expression on the piano.

  14. Smart phone monitoring of second heart sound split.

    PubMed

    Thiyagaraja, Shanti R; Vempati, Jagannadh; Dantu, Ram; Sarma, Tom; Dantu, Siva

    2014-01-01

    Heart auscultation (listening to heart sounds) is the basic element of cardiac diagnosis. The interpretation of these sounds is a difficult skill to acquire. In this work we have developed an application to detect, monitor, and analyze the split in the second heart sound (S2) using a smart phone. The application records the heartbeat using a stethoscope connected to the smart phone. The audio signal is converted into the frequency domain using the Fast Fourier Transform to detect the first and second heart sounds (S1 and S2). S2 is extracted and fed into the Discrete Wavelet Transform (DWT) and then the Continuous Wavelet Transform (CWT) to detect the aortic (A2) and pulmonic (P2) components, which are used to calculate the split in S2. With our application, users of any age can continuously monitor their second heart sound and check for a split with low-cost, easily available equipment.
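    The final split measurement can be sketched on a toy S2 envelope. In the app the component timing comes from the DWT/CWT stages; the synthetic Gaussian bumps and the simple peak picking below are illustrative assumptions.

```python
import numpy as np

fs = 4000  # sampling rate (Hz); ample for heart sounds
t = np.arange(int(0.1 * fs)) / fs

# Synthetic S2 *envelope* with aortic (A2) and pulmonic (P2) bumps 30 ms
# apart; in practice the component locations come from wavelet analysis of
# the stethoscope recording.
def component(center_s, width_s=0.004):
    return np.exp(-((t - center_s) / width_s) ** 2)

envelope = component(0.020) + 0.7 * component(0.050)

# Pick the A2 peak first, then the P2 peak after a 35 ms boundary.
boundary = int(0.035 * fs)
a2_idx = int(np.argmax(envelope[:boundary]))
p2_idx = int(np.argmax(envelope[boundary:])) + boundary

split_ms = (p2_idx - a2_idx) * 1000 / fs
print(split_ms)  # 30.0
```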

  15. Structural parameter effect of porous material on sound absorption performance of double-resonance material

    NASA Astrophysics Data System (ADS)

    Fan, C.; Tian, Y.; Wang, Z. Q.; Nie, J. K.; Wang, G. K.; Liu, X. S.

    2017-06-01

    In view of the noise features and service environment of urban power substations, this paper explores the idea of compound impedance, fills porous sound-absorbing material into the first resonance cavity of a double-resonance sound-absorbing material, and designs a new type of composite acoustic board. We carry out acoustic characterization according to the standard impedance-tube test and investigate the influence of the assembly order, the thickness and area density of the filling material, and the back cavity on the sound-absorption performance. The results show that the new type of acoustic board, consisting of aluminum fibrous material as the inner structure, a micro-porous board as the outer structure, and a polyester-filled space between them, has good sound-absorption performance for low-frequency and full-frequency noise. When the thickness and area density of the filling material and the thickness of the back cavity increase, the peak of the sound-absorption coefficient curve moves toward low frequency.
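    For context, the absorption coefficient an impedance-tube test yields is tied to the surface impedance by a textbook normal-incidence relation (general acoustics, not a formula from this paper); the characteristic impedance of air used below is an approximate room-temperature value.

```python
# Textbook normal-incidence relations behind an impedance-tube measurement.
RHO_C = 415.0  # characteristic impedance of air (rayl), roughly at 20 C

def absorption_coefficient(z_surface: complex) -> float:
    """Absorption from the pressure reflection coefficient R = (Z - pc)/(Z + pc)."""
    r = (z_surface - RHO_C) / (z_surface + RHO_C)
    return 1.0 - abs(r) ** 2

print(round(absorption_coefficient(415.0 + 0j), 2))  # 1.0: matched absorber
print(absorption_coefficient(4150.0 + 0j) < 0.5)     # stiff surface reflects most sound
```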

  16. Effects of sounding temperature assimilation on weather forecasting - Model dependence studies

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Halem, M.; Atlas, R.

    1979-01-01

    In comparing various methods for the assimilation of remote sounding information into numerical weather prediction (NWP) models, the problem of model dependence for the different results obtained becomes important. The paper investigates two aspects of the model dependence question: (1) the effect of increasing horizontal resolution within a given model on the assimilation of sounding data, and (2) the effect of using two entirely different models with the same assimilation method and sounding data. Tentative conclusions reached are: first, that model improvement as exemplified by increased resolution, can act in the same direction as judicious 4-D assimilation of remote sounding information, to improve 2-3 day numerical weather forecasts. Second, that the time continuous 4-D methods developed at GLAS have similar beneficial effects when used in the assimilation of remote sounding information into NWP models with very different numerical and physical characteristics.

  17. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    USGS Publications Warehouse

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author

  18. Door latching recognition apparatus and process

    DOEpatents

    Eakle, Jr., Robert F.

    2012-05-15

    An acoustic door latch detector is provided in which a sound recognition sensor is integrated into a door or door lock mechanism. The programmable sound recognition sensor can be trained to recognize the acoustic signature of the door and door lock mechanism being properly engaged and secured. The sensor then triggers a first indicator showing that proper closure was detected, or sounds an alarm if the proper acoustic signature is not detected within a predetermined time interval.
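
A minimal sketch of the kind of signature matching such a sensor could use, here via normalized cross-correlation; the patent does not specify the algorithm, so the threshold and window handling below are assumptions:

```python
import numpy as np

def matches_signature(signal, template, threshold=0.8):
    # Slide the stored latch signature over the incoming audio and take
    # the peak normalized cross-correlation as the match score.
    t = template - template.mean()
    t /= np.linalg.norm(t)
    best = 0.0
    for i in range(len(signal) - len(t) + 1):
        w = signal[i:i + len(t)] - signal[i:i + len(t)].mean()
        n = np.linalg.norm(w)
        if n > 0.0:
            best = max(best, float(np.dot(w, t)) / n)
    return best >= threshold
```

A real sensor would likely correlate envelope or spectral features rather than raw samples, and would arm an alarm timer when no match occurs within the allowed interval.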

  19. Vibro-Acoustic Analysis of an Aircraft Maintenance Dock

    DTIC Science & Technology

    1992-08-01

    evaluated. This evaluation resulted in a table of allowable number of cycles of operation to produce the same impact on the facility as the original... The reverberant sound field due to the acoustic energy remaining within the AMD after the first reflection of the direct sound. The direct sound field

  20. The Story of a Poet Who Beat Cancer and Became a Squeak: A Sounded Narrative about Art, Education, and the Power of the Human Spirit

    ERIC Educational Resources Information Center

    Gershon, Walter S.; Van Deventer, George V.

    2013-01-01

    This collaborative piece represents one of the first iterations of a methodological possibility called sounded narratives. It is also a performative piece of sound/art, a narrative about a poet and his voice, stories that are as much about himself as they are about curricular possibilities and the power of art. Based on a pair of over two-hour…

  1. Low-momentum dynamic structure factor of a strongly interacting Fermi gas at finite temperature: A two-fluid hydrodynamic description

    NASA Astrophysics Data System (ADS)

    Hu, Hui; Zou, Peng; Liu, Xia-Ji

    2018-02-01

    We provide a description of the dynamic structure factor of a homogeneous unitary Fermi gas at low momentum and low frequency, based on the dissipative two-fluid hydrodynamic theory. The viscous relaxation time is estimated and is used to determine the regime where the hydrodynamic theory is applicable and to understand the nature of sound waves in the density response near the superfluid phase transition. By collecting the best available knowledge of the shear viscosity and thermal conductivity, we calculate the various diffusion coefficients and obtain the damping width of the (first and second) sounds. We find that the damping width of the first sound is greatly enhanced across the superfluid transition and that, very close to the transition, second sound might be resolved in the density response for transferred momenta up to half the Fermi momentum. Our work is motivated by the recent measurement of the local dynamic structure factor at low momentum at Swinburne University of Technology and the ongoing experiment on sound attenuation of a homogeneous unitary Fermi gas at Massachusetts Institute of Technology. We discuss how the measurement of the velocity and damping width of the sound modes in the low-momentum dynamic structure factor may lead to an improved determination of the universal superfluid density, shear viscosity, and thermal conductivity of a unitary Fermi gas.

  2. Evaluation of sea otter capture after the Exxon Valdez oil spill, Prince William Sound, Alaska

    USGS Publications Warehouse

    Bodkin, James L.; Weltz, F.; Bayha, Keith; Kormendy, Jennifer

    1990-01-01

    After the T/V Exxon Valdez oil spill into Prince William Sound, the U.S. Fish and Wildlife Service and Exxon Company, U.S.A., began rescuing sea otters (Enhydra lutris). The primary objective of this operation was to capture live, oiled sea otters for cleaning and rehabilitation. Between 30 March and 29 May 1989, 139 live sea otters were captured in the sound and transported to rehabilitation centers in Valdez, Alaska. Within the first 15 days of capture operations, 122 (88%) otters were captured. Most sea otters were captured near Knight, Green, and Evans islands in the western sound. The primary capture method consisted of dipnetting otters out of water and off beaches. While capture rates declined over time, survival of captured otters increased as the interval from spill date to capture date increased. The relative degree of oiling observed for each otter captured declined over time. Declining capture rates led to the use of tangle nets. The evidence suggests the greatest threat to sea otters in Prince William Sound occurred within the first 3 weeks after the spill. Thus, in the future, the authors believe rescue efforts should begin as soon as possible after an oil spill in sea otter habitat. Further, preemptive capture and relocation of sea otters in Prince William Sound may have increased the number of otters that could have survived this event.

  3. Free Field Modeling of a MEMS-based Pressure Gradient Microphone

    DTIC Science & Technology

    2009-12-01

    first thing that comes to your mind when you read the word sound? Is it a baby crying, people laughing, your favorite song, or perhaps someone calling...first and then the other ear. Our brain calculates this angle subconsciously and we know from which angle the sound originated. However, other types of...at NPS is included for comparison purposes. A further brief experiment is discussed that provides a reasonable explanation of why this sensor is a

  4. Sound-proof Sandwich Panel Design via Metamaterial Concept

    NASA Astrophysics Data System (ADS)

    Sui, Ni

    Sandwich panels consisting of hollow core cells and two face-sheets bonded on both sides are widely used as lightweight, strong structures in practical engineering applications, but they perform poorly acoustically, especially in the low-frequency regime. Basic sound-proofing methods for sandwich panel design fall into two categories: sound insulation and sound absorption. Motivated by the metamaterial concept, this dissertation presents two sandwich panel designs that incur no weight or size penalty: a lightweight yet sound-proof honeycomb acoustic metamaterial that can be used as the core material of honeycomb sandwich panels to block sound and break the mass law, minimizing sound transmission; and a second design, based on coupled Helmholtz resonators, that achieves perfect sound absorption without sound reflection. Starting from the honeycomb sandwich panel, the mechanical properties of the honeycomb core structure were studied first. By incorporating a thin membrane on top of each honeycomb core cell, the traditional honeycomb core becomes a honeycomb acoustic metamaterial. The basic theory of this kind of membrane-type acoustic metamaterial is demonstrated with a lumped model of an infinite periodic oscillator system, and the negative dynamic effective mass density of the clamped membrane is analyzed under the membrane resonance condition. The evanescent wave mode caused by the negative dynamic effective mass density, together with impedance methods, is used to interpret the physical behavior of honeycomb acoustic metamaterials at resonance. The honeycomb metamaterial markedly improves low-frequency sound transmission loss below the first resonant frequency of the membrane. The membrane properties, the membrane tension, and the number of attached membranes all affect the sound transmission loss, as observed in numerical simulations and validated by experiments.
The sandwich panel that incorporates the honeycomb metamaterial as its core material retains its mechanical properties and yields a sound transmission loss that is consistently greater than 50 dB at low frequencies. Furthermore, the absorption behavior of the proposed honeycomb sandwich panel was studied experimentally: with reinforced glass fiber, the panel shows excellent sound absorption at high frequencies without adding much mass. The effects of panel size and of the stiffness of the grid-like frame of the honeycomb sandwich structure on sound transmission are discussed last. In the second sound-proof sandwich panel design, each unit cell of the panel is turned into a Helmholtz resonator by perforating a small hole in the top face sheet. A perfect sound-absorbing sandwich panel with coupled Helmholtz resonators is proposed in two variants: a single identical Helmholtz resonator in each unit cell, and dual Helmholtz resonators with different orifices in each cell, arranged periodically. The sound-proof sandwich panel is modeled as a panel embedded in a rigid baffle facing a semi-infinite space with a hard boundary condition. A net/mutual impedance model is first proposed and derived by solving the Kirchhoff-Helmholtz integral using the Green's function. Thermal-viscous energy dissipation in the thermal boundary layer dominates the total energy consumed. The two variants of the perfect sound-absorbing sandwich panel are designed in the last part. Two theoretical methods, the average-energy method and the equivalent-surface-impedance method, are used to predict the sound absorption performance. The geometry of a perfect sound-absorbing sandwich panel at a target frequency is obtained when all the Helmholtz resonators are at resonance and the surface impedance of the panel matches the impedance of air. The bandwidth of the identical-resonator panel depends mainly on the neck radius.
The absorption of the dual-Helmholtz-resonator panel is studied by investigating the coupling effects between the resonators. The theoretical results are verified by numerical simulations using the finite element method. The absorption bandwidth can be tuned by incorporating more Helmholtz resonators in each unit cell. Both sound-proof sandwich panel designs deliver extraordinary acoustic performance for noise reduction in the low-frequency range with sub-wavelength structures, and the sound-absorber panel design can also achieve broadband sound attenuation at low frequencies.
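
For reference, "breaking the mass law" means exceeding the baseline transmission loss of a limp panel of the same surface density. A quick estimate of that baseline, using the standard normal-incidence mass-law formula (not taken from the dissertation; air properties are nominal):

```python
import math

def mass_law_tl(f_hz, surface_density, rho0=1.21, c0=343.0):
    # Normal-incidence mass law for a limp panel:
    # TL = 10*log10(1 + (pi * f * m / (rho0 * c0))**2)  [dB]
    x = math.pi * f_hz * surface_density / (rho0 * c0)
    return 10.0 * math.log10(1.0 + x * x)
```

At 10 kg/m^2 and 1 kHz this gives roughly 38 dB, rising about 6 dB per doubling of frequency or mass; membrane-type metamaterials outperform this baseline near their resonance.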

  5. How the owl tracks its prey – II

    PubMed Central

    Takahashi, Terry T.

    2010-01-01

    Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819

  6. Ultrasonic Recovery and Modification of Food Ingredients

    NASA Astrophysics Data System (ADS)

    Vilkhu, Kamaljit; Manasseh, Richard; Mawson, Raymond; Ashokkumar, Muthupandian

    There are two general classes of effects that sound, and ultrasound in particular, can have on a fluid. First, very significant modifications to the nature of food and food ingredients can be due to the phenomena of bubble acoustics and cavitation. The applied sound oscillates bubbles in the fluid, creating intense forces at microscopic scales thus driving chemical changes. Second, the sound itself can cause the fluid to flow vigorously, both on a large scale and on a microscopic scale; furthermore, the sound can cause particles in the fluid to move relative to the fluid. These streaming phenomena can redistribute materials within food and food ingredients at both microscopic and macroscopic scales.
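
The bubble resonance driving these effects is commonly estimated with the textbook Minnaert frequency (added here for context; the abstract itself gives no formula):

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101325.0, rho=1000.0):
    # Minnaert resonance of a gas bubble in a liquid:
    # f = (1 / (2*pi*R)) * sqrt(3 * gamma * p0 / rho)
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)
```

A 1 mm air bubble in water at atmospheric pressure resonates near 3.3 kHz, so ultrasonic processing drives correspondingly smaller bubbles.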

  7. 'Noises in the head': a prospective study to characterize intracranial sounds after cranial surgery.

    PubMed

    Sivasubramaniam, Vinothan; Alg, Varinder Singh; Frantzias, Joseph; Acharya, Shami Yesha; Papadopoulos, Marios Costa; Martin, Andrew James

    2016-08-01

    Patients often report sounds in the head after craniotomy. We aim to characterize the prevalence and nature of these sounds, and to identify any patient, pathology, or technical factors related to them. These data may be used to inform patients of this sometimes unpleasant but harmless effect of cranial surgery. We conducted a prospective observational study of patients undergoing cranial surgery with dural opening. Eligible patients completed a questionnaire preoperatively and daily after surgery until discharge. Subjects were followed up at 14 days with a telephone consultation. One hundred fifty-one patients with various pathologies were included. Of these, 47 (31 %) reported hearing sounds in their head, lasting an average of 4-6 days (median 4 days; mean 6 days; range 1-14 days). The peak onset was the first postoperative day, and the most commonly used descriptors were 'clicking' [20/47 (43 %)] and 'fluid moving' in the head [9/47 (19 %)]. A significantly greater proportion of patients without a wound drain (42 %, 32/77) experienced intracranial sounds compared to those with a drain (20 %, 15/74, p < 0.01); there was no difference between suction and gravity drains. Approximately a third of the patients in both groups (post-craniotomy sounds group: 36 %, 17/47; group not reporting sounds: 31 %, 32/104) had postoperative CT scans for unrelated reasons: 73 % (8/11) of those with pneumocephalus experienced intracranial sounds, compared to 24 % (9/38) of those without pneumocephalus (p < 0.01). There was no significant association with craniotomy site or size, temporal bone drilling, bone flap replacement, or filling of the surgical cavity with fluid. Sounds in the head after cranial surgery are common, affecting 31 % of patients. This is the first study of the subject, and it provides valuable information for consenting patients.
The data suggest pneumocephalus as a plausible explanation with which to reassure patients, rather than relying on anecdotal evidence, as has been the case to date.

  8. Selective and Efficient Neural Coding of Communication Signals Depends on Early Acoustic and Social Environment

    PubMed Central

    Amin, Noopur; Gastpar, Michael; Theunissen, Frédéric E.

    2013-01-01

    Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex, as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, the functional implications of such exposure for neural processing in the generation of ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for ‘sparse coding’, such that when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation for species-specific vocalizations disappeared. Taken together, these results imply that layer-specific differential development of the auditory cortex requires patterned acoustic input, and that a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment. PMID:23630587

  9. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper identifies the correction factors in determining the speed of sound in experiments performed by freshmen in an undergraduate physics laboratory. The results are compared with the speed of sound determined by senior students. Both groups used the same instrument, namely a resonance tube apparatus. The speed of sound obtained by the senior students was 333.38 m/s, deviating from the theoretical value by about 3.98%. The freshmen's results fell into three categories: accurate values (52.63%), intermediate values (31.58%), and low values (15.79%). Based on the analysis, several correction factors were suggested: human error in determining the first and second harmonics, the end correction associated with the tube diameter, and environmental factors such as temperature, humidity, density, and pressure.
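
The first- and second-harmonic relations behind these corrections are simple for a tube closed at one end; the sketch below (with made-up resonance lengths) shows how using two resonances cancels the end correction:

```python
def speed_of_sound(f_hz, L1, L2):
    # Closed tube: L1 = lam/4 - e and L2 = 3*lam/4 - e, so the end
    # correction e cancels: lam = 2*(L2 - L1) and v = f * lam.
    return f_hz * 2.0 * (L2 - L1)

def end_correction(L1, L2):
    # Subtracting the same two relations gives e = (L2 - 3*L1) / 2.
    return (L2 - 3.0 * L1) / 2.0
```

With f = 340 Hz, L1 = 0.24 m, and L2 = 0.74 m this yields v = 340 m/s and e = 0.01 m (the end correction is commonly approximated as 0.3 times the tube diameter).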

  10. Nike-Cajun Sounding Rocket with University of Iowa Payload

    NASA Image and Video Library

    1959-05-22

    L59-3802 Nike-Cajun sounding rocket with University of Iowa payload on launcher at Wallops for flight test, May 20, 1959. Photograph published in A New Dimension Wallops Island Flight Test Range: The First Fifteen Years by Joseph Shortal. A NASA publication. Page 698.

  11. Misconceptions about Sound among Engineering Students

    ERIC Educational Resources Information Center

    Pejuan, Arcadi; Bohigas, Xavier; Jaen, Xavier; Periago, Cristina

    2012-01-01

    Our first objective was to detect misconceptions about the microscopic nature of sound among senior university students enrolled in different engineering programmes (from chemistry to telecommunications). We sought to determine how these misconceptions are expressed (qualitative aspect) and, only very secondarily, to gain a general idea of the…

  12. [A new medical education using a lung sound auscultation simulator called "Mr. Lung"].

    PubMed

    Yoshii, Chiharu; Anzai, Takashi; Yatera, Kazuhiro; Kawajiri, Tatsunori; Nakashima, Yasuhide; Kido, Masamitsu

    2002-09-01

    We developed a lung sound auscultation simulator, "Mr. Lung", in 2001 and used this new device in our educational training facility to improve lung sound auscultation skills. From June 2001 to March 2002, we used "Mr. Lung" for small-group training in which one hundred fifth-year medical students were divided into small groups, one of which was taught every other week. The class consisted of ninety-minute training periods for auscultation of lung sounds. First, we explained the classification of lung sounds, and then auscultation tests were performed: students listened to three cases of abnormal or adventitious lung sounds on "Mr. Lung" through their stethoscopes, then answered questions about the location and quality of the sounds. We then explained the correct answers and how to differentiate lung sounds on "Mr. Lung". Additionally, at the beginning and end of the lecture, students completed a five-point self-assessment of their lung sound auscultation. The ratios of correct answers were 36.9% for differences between bilateral lung sounds, 52.5% for coarse crackles, 34.1% for fine crackles, 69.2% for wheezes, 62.1% for rhonchi, and 22.2% for stridor. Self-assessment scores were significantly higher after the class than before. The ratio of correct lung sound answers was surprisingly low among medical students. We believe repetitive auscultation with the simulator is extremely helpful in medical education.

  13. Use of quantitative ultrasonography in differentiating osteomalacia from osteoporosis: preliminary study.

    PubMed

    Luisetto, G; Camozzi, V; De Terlizzi, F

    2000-04-01

    The aim of this work was to use ultrasonographic technology to differentiate osteoporosis from osteomalacia on the basis of different patterns of the graphic trace. Three patients with osteomalacia and three with osteoporosis, all with the same lumbar spine bone mineral density, were studied. The velocity of the ultrasound beam in bone was measured by a DBM Sonic 1,200/I densitometer at the proximal phalanges of the hands in all the patients. The ultrasound beam velocity was measured when the first peak of the waveform reached a predetermined minimum amplitude value (amplitude-dependent speed of sound) as well as at the lowest point prior to the first and second peaks, before they reached the predetermined minimum amplitude value (first and second minimum speeds of sound). The graphic traces were further analyzed by Fourier analysis, and both the main frequency (f0) and the width of the peak centered at f0 (full width at half maximum) were measured. The first and second minimum speeds of sound were significantly lower in the patients with osteomalacia than in the osteoporosis group. The first minimum speed of sound was 2,169 +/- 73 m/s in osteoporosis and 1,983 +/- 61 m/s in osteomalacia (P < 0.0001); the second minimum speed of sound was 1,895 +/- 59 m/s in osteoporosis and 1,748 +/- 38 m/s in osteomalacia (P < 0.0001). The f0 was similar in the two groups (osteoporosis, 0.85 +/- 0.14 MHz; osteomalacia, 0.9 +/- 0.22 MHz; P = 0.72), and the full width at half maximum was significantly higher in the osteomalacia patients (0.52 +/- 0.14 MHz) than in the osteoporosis patients (0.37 +/- 0.15 MHz) (P = 0.022). This study confirms that ultrasonography is a promising, noninvasive method that could be used to differentiate osteoporosis from osteomalacia, but further studies should be carried out before this method can be introduced into clinical practice.
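
The two spectral parameters used in the Fourier step, the main frequency f0 and the full width at half maximum, can be read off a magnitude spectrum as below (the spectrum is synthetic, and bin-counting FWHM is our simplification):

```python
import numpy as np

def f0_and_fwhm(freqs, spectrum):
    # f0: frequency of the spectral peak.  FWHM: total width of the
    # band whose magnitude stays at or above half the peak value,
    # counted as number of bins times the bin spacing.
    f0 = freqs[np.argmax(spectrum)]
    half = spectrum >= np.max(spectrum) / 2.0
    df = freqs[1] - freqs[0]
    return f0, np.count_nonzero(half) * df
```

A broader FWHM, as reported for the osteomalacia group, corresponds to a less coherent transmitted pulse.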

  14. On the Correction of Shipboard Miniradiosondes of the Western Mediterranean Circulation Experiment - June 1986

    DTIC Science & Technology

    1989-03-01

    but no attempt was made at correction. The modification of the ambient atmospheric and oceanic environments due to the presence of a ship has been...in June, 1986. Two cruises were aboard the research vessel USNS Lynch. On the first cruise, 13 soundings were made in the western Mediterranean...between Spain and Algeria; on the second, 26 soundings were made near the Strait of Gibraltar. The third cruise, for which 16 soundings are available, was

  15. New non-invasive automatic cough counting program based on 6 types of classified cough sounds.

    PubMed

    Murata, Akira; Ohota, Nao; Shibuya, Atsuo; Ono, Hiroshi; Kudoh, Shoji

    2006-01-01

    Cough, consisting of an initial deep inspiration, glottal closure, and an explosive expiration accompanied by a sound, is one of the most common symptoms of respiratory disease. Despite its clinical importance, standard methods for objective cough analysis have yet to be established. We investigated the acoustic characteristics of cough sounds, designed a program to discriminate cough sounds from other sounds, and developed a new objective method of non-invasive cough counting; we then evaluated the clinical efficacy of that program. We recorded cough sounds in free field using a memory stick IC recorder from 2 patients and analyzed the intensity of 534 recorded coughs in the time domain. First, we squared the sound waveform of the recorded cough sounds and smoothed it over a 20 ms window. Five parameters and several definitions for discriminating cough sounds from other noise were identified, and the cough sounds were classified into 6 groups. Next, we applied this method to develop a new automatic cough counting program. Finally, to evaluate the accuracy and clinical usefulness of the program, we counted cough sounds collected from another 10 patients using both our program and conventional manual counting, and analyzed the program's sensitivity, specificity, and discrimination rate. The program successfully discriminated recorded cough sounds out of 1902 sound events collected from 10 patients at a rate of 93.1%; the sensitivity was 90.2% and the specificity was 96.5%. Our new cough counting program can be sufficiently useful for clinical studies.
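
The intensity pre-processing described (squaring, then a 20 ms smoothing window) amounts to an energy envelope. A minimal sketch with a naive threshold-crossing counter follows; the threshold and the five discrimination parameters of the actual program are not given in the abstract, so this only mirrors the first step:

```python
import numpy as np

def energy_envelope(x, fs, win_ms=20):
    # Square the waveform, then smooth with a moving-average window
    # (the paper's 20 ms smoothing of the squared signal).
    w = max(1, int(fs * win_ms / 1000.0))
    return np.convolve(x ** 2, np.ones(w) / w, mode="same")

def count_events(envelope, threshold):
    # Each rising edge of the thresholded envelope is one candidate
    # cough event.
    above = envelope > threshold
    return int(above[0]) + int(np.sum(above[1:] & ~above[:-1]))
```

Each above-threshold run of the envelope is one candidate event; the real program then applies its discrimination parameters to reject non-cough sounds.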

  16. Control of Toxic Chemicals in Puget Sound, Phase 3: Study of Atmospheric Deposition of Air Toxics to the Surface of Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandenberger, Jill M.; Louchouarn, Patrick; Kuo, Li-Jung

    2010-07-05

    The results of the Phase 1 Toxics Loading study suggested that runoff from the land surface and atmospheric deposition directly to marine waters have resulted in considerable loads of contaminants to Puget Sound (Hart Crowser et al. 2007). The limited data available for atmospheric deposition fluxes throughout Puget Sound was recognized as a significant data gap. Therefore, this study provided more recent or first reported atmospheric deposition fluxes of PAHs, PBDEs, and select trace elements for Puget Sound. Samples representing bulk atmospheric deposition were collected during 2008 and 2009 at seven stations around Puget Sound spanning from Padilla Bay south to Nisqually River including Hood Canal and the Straits of Juan de Fuca. Revised annual loading estimates for atmospheric deposition to the waters of Puget Sound were calculated for each of the toxics and demonstrated an overall decrease in the atmospheric loading estimates except for polybrominated diphenyl ethers (PBDEs) and total mercury (THg). The median atmospheric deposition flux of total PBDE (7.0 ng/m2/d) was higher than that of the Hart Crowser (2007) Phase 1 estimate (2.0 ng/m2/d). The THg was not significantly different from the original estimates. The median atmospheric deposition flux for pyrogenic PAHs (34.2 ng/m2/d; without TCB) shows a relatively narrow range across all stations (interquartile range: 21.2-61.1 ng/m2/d) and shows no influence of season. The highest median fluxes for all parameters were measured at the industrial location in Tacoma and the lowest were recorded at the rural sites in Hood Canal and Sequim Bay. Finally, a semi-quantitative apportionment study permitted a first-order characterization of source inputs to the atmosphere of the Puget Sound.
Both biomarker ratios and a principal component analysis confirmed regional data from the Puget Sound and Straits of Georgia region and pointed to the predominance of biomass and fossil fuel (mostly liquid petroleum products such as gasoline and/or diesel) combustion as source inputs of combustion by-products to the atmosphere of the region and subsequently to the waters of Puget Sound.
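
The annual loading arithmetic behind such estimates is plain unit conversion; the area and flux below are illustrative placeholders, not the study's values:

```python
def annual_load_kg(flux_ng_per_m2_day, area_m2, days=365):
    # flux [ng/m^2/day] x area [m^2] x days gives total nanograms;
    # 1 kg = 1e12 ng.
    return flux_ng_per_m2_day * area_m2 * days / 1e12
```

For example, a flux of 7.0 ng/m2/d (the reported PBDE median) over a hypothetical 1000 km2 of receiving water corresponds to roughly 2.6 kg per year.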

  17. Learning with Sound Recordings: A History of Suzuki's Mediated Pedagogy

    ERIC Educational Resources Information Center

    Thibeault, Matthew D.

    2018-01-01

    This article presents a history of mediated pedagogy in the Suzuki Method, the first widespread approach to learning an instrument in which sound recordings were central. Media are conceptualized as socially constituted: philosophical ideas, pedagogic practices, and cultural values that together form a contingent and changing technological…

  18. Techniques for decoding speech phonemes and sounds: A concept

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C.; Holby, H. G.

    1975-01-01

    Techniques studied involve conversion of speech sounds into machine-compatible pulse trains. (1) Voltage-level quantizer produces number of output pulses proportional to amplitude characteristics of vowel-type phoneme waveforms. (2) Pulses produced by quantizer of first speech formants are compared with pulses produced by second formants.

  19. University of Maryland-Republic Terrapin Sounding Rocket H121-2681-I(Terrapin) Model on the Launcher

    NASA Image and Video Library

    1956-10-21

    LAL 95,647 University of Maryland-Republic Terrapin sounding rocket mounted on special launcher, September 21, 1956. Photograph published in A New Dimension Wallops Island Flight Test Range: The First Fifteen Years by Joseph Shortal. A NASA publication. Page 506.

  20. First and second sound in cylindrically trapped gases.

    PubMed

    Bertaina, G; Pitaevskii, L; Stringari, S

    2010-10-08

    We investigate the propagation of density and temperature waves in a cylindrically trapped gas with radial harmonic confinement. Starting from two-fluid hydrodynamic theory we derive effective 1D equations for the chemical potential and the temperature which explicitly account for the effects of viscosity and thermal conductivity. Differently from quantum fluids confined by rigid walls, the harmonic confinement allows for the propagation of both first and second sound in the long wavelength limit. We provide quantitative predictions for the two sound velocities of a superfluid Fermi gas at unitarity. For shorter wavelengths we discover a new surprising class of excitations continuously spread over a finite interval of frequencies. This results in a nondissipative damping in the response function which is analytically calculated in the limiting case of a classical ideal gas.
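
In the uniform long-wavelength limit, the two velocities referred to here reduce to the textbook Landau two-fluid expressions (standard results quoted for context, not derived in the abstract):

```latex
% First sound: a pressure (density) wave at constant entropy
c_1^2 = \left(\frac{\partial P}{\partial \rho}\right)_{\bar{s}},
\qquad
% Second sound: an entropy (temperature) wave
c_2^2 = \frac{\rho_s}{\rho_n}\,\frac{\bar{s}^{\,2} T}{\bar{c}_v},
```

where \bar{s} and \bar{c}_v are the entropy and specific heat per unit mass and \rho_s, \rho_n are the superfluid and normal densities; the paper replaces these uniform results with radially averaged effective 1D equations for the cylindrical trap.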

  1. Insights from the First International Conference on Hyperacusis: causes, evaluation, diagnosis and treatment.

    PubMed

    Aazh, Hashir; McFerran, Don; Salvi, Richard; Prasher, Deepak; Jastreboff, Margaret; Jastreboff, Pawel

    2014-01-01

    The First International Conference on Hyperacusis gathered over 100 scientists and health care professionals in London, UK. Key conclusions from the conference included: (1) Hyperacusis is characterized by reduced tolerance of sound that has perceptual, psychological and social dimensions; (2) there is a growing awareness that children as well as adults experience symptoms of hyperacusis or misophonia; (3) the exact mechanisms that give rise to hyperacusis are not clear, but the available evidence suggests that functional changes within the central nervous system are important and in particular, hyperacusis may be related to increased gain in the central auditory pathways and to increased anxiety or emotional response to sound; (4) various counseling and sound therapy approaches seem beneficial in the management of hyperacusis, but the evidence base for these remains poor.

  2. First steps towards dual-modality 3D photoacoustic and speed of sound imaging with optical ultrasound detection

    NASA Astrophysics Data System (ADS)

    Nuster, Robert; Wurzinger, Gerhild; Paltauf, Guenther

    2017-03-01

    CCD camera based optical ultrasound detection is a promising alternative approach for high-resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve an image resolution <50 μm, it is necessary to incorporate variations of the speed of sound (SOS) into the image reconstruction algorithm. Hence, this work presents the idea and a first implementation of adding speed-of-sound imaging to a previously developed camera-based PAI setup. The current setup provides SOS maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.

  3. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first, experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second, numerical step, the experimental data are time-reversed and used as input for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources, either monochromatic or with narrow- or wide-band frequency content, are considered first. The source position is estimated with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
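
To make the two-step idea concrete, here is a heavily simplified sketch in which the numerical back-propagation (the paper solves the linearized Euler equations, including mean flow) is replaced by delay-and-sum refocusing in a quiescent medium; the geometry, pulse, and sample rate are all invented:

```python
import numpy as np

C, FS = 343.0, 20000.0  # speed of sound [m/s], sample rate [Hz]

def record(src, mics, n=400):
    # Each microphone records a Gaussian pulse emitted by the source,
    # delayed by its source-to-microphone travel time.
    t = np.arange(n) / FS
    recs = []
    for m in mics:
        d = np.hypot(src[0] - m[0], src[1] - m[1])
        recs.append(np.exp(-((t - 0.005 - d / C) ** 2) / (2.0 * (2e-4) ** 2)))
    return np.array(recs)

def focus_energy(recs, mics, pt):
    # Time reversal as delay-and-sum: undo each microphone's candidate
    # travel-time delay and sum; the summed energy peaks at the source.
    total = np.zeros(recs.shape[1])
    for r, m in zip(recs, mics):
        delay = np.hypot(pt[0] - m[0], pt[1] - m[1]) / C
        total += np.roll(r, -int(round(delay * FS)))
    return float(np.sum(total ** 2))
```

Refocused energy peaks where the candidate delays undo the true propagation delays, which is the essence of time-reversal focusing; the Euler solver additionally accounts for convection of the waves by the wind-tunnel flow.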

  4. Audiovisual Delay as a Novel Cue to Visual Distance.

    PubMed

    Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R; Tadin, Duje

    2015-01-01

    For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.
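
    The physical basis of the cue is simple: light arrives effectively instantaneously at these scales, so the audio lag grows by roughly 2.9 ms per metre of event distance. A quick back-of-the-envelope check:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C
SPEED_OF_LIGHT = 3.0e8   # m/s; the light delay is negligible here

def sound_delay_ms(distance_m: float) -> float:
    """Delay of sound relative to light for an event at `distance_m`."""
    return (distance_m / SPEED_OF_SOUND - distance_m / SPEED_OF_LIGHT) * 1000.0

# Sound from an event 10 m away lags the light by roughly 29 ms;
# at 34.3 m the lag reaches ~100 ms.
for d in (1.0, 10.0, 34.3):
    print(f"{d:5.1f} m -> {sound_delay_ms(d):6.2f} ms")
```
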

  5. General analytical approach for sound transmission loss analysis through a thick metamaterial plate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oudich, Mourad; Zhou, Xiaoming; Badreddine Assouar, M., E-mail: Badreddine.Assouar@univ-lorraine.fr

    We report theoretically and numerically on the sound transmission loss performance through a thick plate-type acoustic metamaterial made of spring-mass resonators attached to the surface of a homogeneous elastic plate. Two general analytical approaches based on plane wave expansion were developed to calculate both the sound transmission loss through the metamaterial plate (thick and thin) and its band structure. The first one can be applied to thick plate systems to study the sound transmission for any normal or oblique incident sound pressure. The second approach gives the metamaterial dispersion behavior to describe the vibrational motions of the plate, which helps to understand the physics behind sound radiation through air by the structure. Computed results show that high sound transmission loss up to 72 dB at 2 kHz is reached with a thick metamaterial plate while only 23 dB can be obtained for a simple homogeneous plate with the same thickness. Such plate-type acoustic metamaterial can be a very effective solution for high performance sound insulation and structural vibration shielding in the very low-frequency range.
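
    For context on the homogeneous-plate baseline, the transmission loss of a plain limp panel is often estimated with the normal-incidence mass law, TL = 10·log10(1 + (π f m″ / ρ0 c0)²). A small sketch (the steel plate and its 23.4 kg/m² surface density are arbitrary illustrative values, not the plate from the paper):

```python
import math

RHO_C = 1.21 * 343.0   # characteristic impedance of air (~415 Pa*s/m)

def mass_law_tl(freq_hz: float, surface_density_kg_m2: float) -> float:
    """Normal-incidence mass-law transmission loss (dB) of a limp panel."""
    x = math.pi * freq_hz * surface_density_kg_m2 / RHO_C
    return 10.0 * math.log10(1.0 + x * x)

# E.g. a 3 mm steel plate (7800 kg/m^3 -> 23.4 kg/m^2) at 2 kHz:
print(round(mass_law_tl(2000.0, 23.4), 1))  # -> 51.0 dB
```

    The mass law adds about 6 dB per doubling of either frequency or surface density, which is why resonant metamaterial designs are attractive: they beat this slope in a targeted band without adding mass.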

  6. Decadal trends in Indian Ocean ambient sound.

    PubMed

    Miksis-Olds, Jennifer L; Bradley, David L; Niu, Xiaoyue Maggie

    2013-11-01

    The increase of ocean noise documented in the North Pacific has sparked concern about whether the observed increases are a global or regional phenomenon. This work provides evidence of low frequency sound increases in the Indian Ocean. A decade (2002-2012) of recordings made off the island of Diego Garcia, UK in the Indian Ocean was parsed into time series according to frequency band and sound level. Quarterly sound level comparisons between the first and last years were also performed. The combination of time series and temporal comparison analyses over multiple measurement parameters produced results beyond those obtainable from a single parameter analysis. The ocean sound floor has increased over the past decade in the Indian Ocean. Increases were most prominent in recordings made south of Diego Garcia in the 85-105 Hz band. The highest sound level trends differed between the two sides of the island; the highest sound levels decreased in the north and increased in the south. Rate, direction, and magnitude of changes among the multiple parameters supported interpretation of the source functions driving the trends. The observed sound floor increases are consistent with concurrent increases in shipping, wind speed, wave height, and blue whale abundance in the Indian Ocean.

  7. Chatty maps: constructing sound maps of urban areas from social media data.

    PubMed

    Aiello, Luca Maria; Schifanella, Rossano; Quercia, Daniele; Aletta, Francesco

    2016-03-01

    Urban sound has a huge influence over how we perceive places. Yet, city planning is concerned mainly with noise, simply because annoying sounds come to the attention of city officials in the form of complaints, whereas general urban sounds do not, as they cannot easily be captured at city scale. To capture both unpleasant and pleasant sounds, we applied a new methodology that relies on the tagging information of georeferenced pictures to the cities of London and Barcelona. To begin with, we compiled the first urban sound dictionary and compared it with one produced by collating insights from the literature: ours was experimentally more valid (judging validity by correlation with official noise pollution levels) and offered a wider geographical coverage. From picture tags, we then studied the relationship between soundscapes and emotions. We learned that streets with music sounds were associated with strong emotions of joy or sadness, whereas those with human sounds were associated with joy or surprise. Finally, we studied the relationship between soundscapes and people's perceptions and, in so doing, we were able to map which areas are chaotic, monotonous, calm and exciting. Those insights promise to inform the creation of restorative experiences in our increasingly urbanized world.

  8. Sound production in the tiger-tail seahorse Hippocampus comes: Insights into the sound producing mechanisms.

    PubMed

    Lim, A C O; Chong, V C; Chew, W X; Muniandy, S V; Wong, C S; Ong, Z C

    2015-07-01

    Acoustic signals of the tiger-tail seahorse (Hippocampus comes) during feeding were studied using wavelet transform analysis. The seahorse "click" appears to be a compounded sound, comprising three acoustic components that likely come from two sound producing mechanisms. The click sound begins with a low-frequency precursor signal, followed by a sudden high-frequency spike that decays quickly, and a final, low-frequency sinusoidal component. The first two components can, respectively, be traced to the sliding movement and forceful knock between the supraorbital bone and coronet bone of the cranium, while the third (the purr), although appearing to be initiated here, is produced elsewhere. The seahorse also produces a growling sound when under duress. Growling is accompanied by the highest recorded vibration at the cheek, indicating another sound producing mechanism there. The purr has the same low frequency as the growl; both are likely produced by the same structural mechanism. However, growl and purr are triggered and produced under different conditions, suggesting that such "vocalization" may have significance in communication between seahorses.

  9. Assessment of sound levels in a neonatal intensive care unit in Tabriz, Iran.

    PubMed

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-03-01

    High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.
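
    The Leq reported above is not an arithmetic mean of dB readings but an energy average, Leq = 10·log10((1/N)·Σ 10^(Li/10)). A minimal illustration (with made-up sample levels) of why brief loud events dominate it:

```python
import math

def leq(levels_db: list[float]) -> float:
    """Equivalent continuous sound level: the energy average of dB samples."""
    mean_energy = sum(10 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

# Three quiet intervals at 55 dBA plus one loud event at 75 dBA:
# the energy average lands far above the 60 dBA arithmetic mean.
print(round(leq([55.0, 55.0, 55.0, 75.0]), 1))  # -> 69.1 dBA
```
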

  10. Assessment of Sound Levels in a Neonatal Intensive Care Unit in Tabriz, Iran

    PubMed Central

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-01-01

    Introduction: High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). Methods: In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Results: Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Conclusion: Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound. PMID:25276706

  11. Spatial attenuation of different sound field components in a water layer and shallow-water sediments

    NASA Astrophysics Data System (ADS)

    Belov, A. I.; Kuznetsov, G. N.

    2017-11-01

    The paper presents the results of an experimental study of the spatial attenuation of low-frequency vector-scalar sound fields in shallow water. The experiments employed a towed pneumatic cannon and spatially separated four-component vector-scalar receiver modules. Narrowband analysis of the received signals made it possible to estimate the attenuation coefficients of the first three modes in the frequency range of 26-182 Hz and to calculate the frequency dependences of the sound absorption coefficients in the upper part of the bottom sediments. We analyze the experimental and calculated (using acoustic calibration of the waveguide) decay laws of the sound pressure and of the orthogonal projections of the oscillation velocity vector. It is shown that the vertical projection of the oscillation velocity vector decreases significantly faster than the sound pressure field.
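
    The paper's narrowband modal analysis is more involved, but the basic step of extracting an attenuation coefficient from level-versus-range data can be sketched by removing a geometric-spreading term and fitting the residual linearly in range. A minimal sketch on synthetic data (the spherical-spreading assumption and all constants are illustrative only):

```python
import math

def fit_attenuation(ranges_m, levels_db):
    """Least-squares fit of L(r) = L0 - 20*log10(r) - a*r, i.e. spherical
    spreading plus linear attenuation; returns (L0, a) with a in dB/m."""
    # Remove the spreading term, then fit the residual linearly in r.
    y = [l + 20.0 * math.log10(r) for r, l in zip(ranges_m, levels_db)]
    n = len(y)
    mx = sum(ranges_m) / n
    my = sum(y) / n
    slope = (sum((r - mx) * (v - my) for r, v in zip(ranges_m, y))
             / sum((r - mx) ** 2 for r in ranges_m))
    return my - slope * mx, -slope

# Synthetic check: levels generated with L0 = 180 dB and a = 0.002 dB/m.
ranges = [500.0, 1000.0, 2000.0, 4000.0, 8000.0]
levels = [180.0 - 20.0 * math.log10(r) - 0.002 * r for r in ranges]
L0, a = fit_attenuation(ranges, levels)
print(round(L0, 1), round(a, 4))  # -> 180.0 0.002
```
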

  12. Geographical variation in sound production in the anemonefish Amphiprion akallopisos.

    PubMed

    Parmentier, E; Lagardère, J P; Vandewalle, P; Fine, M L

    2005-08-22

    Because of pelagic-larval dispersal, coral-reef fishes are distributed widely with minimal genetic differentiation between populations. Amphiprion akallopisos, a clownfish that uses sound production to defend its anemone territory, has a wide but disjunct distribution in the Indian Ocean. We compared sounds produced by these fishes from populations in Madagascar and Indonesia, a distance of 6500 km. Differentiation of agonistic calls into distinct types indicates a complexity not previously recorded in fishes' acoustic communication. Moreover, various acoustic parameters, including peak frequency, pulse duration, and number of peaks per pulse, differed between the two populations. This geographic comparison is the first to demonstrate 'dialects' in a marine fish species, and these differences in sound parameters suggest genetic divergence between the two populations. These results highlight a possible approach for investigating the role of sound in fish behaviour, reproductive divergence, and speciation.

  13. Automotive Exterior Noise Optimization Using Grey Relational Analysis Coupled with Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shuming; Wang, Dengfeng; Liu, Bo

    This paper investigates design optimization of the thickness of the sound package of a passenger automobile. The major performance indexes selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package; the corresponding parameters of the sound package are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. Because the process involves multiple performance characteristics, grey relational analysis, which uses the grey relational grade as a single performance index, is employed to determine the optimal combination of layer thicknesses for the designed sound package. Additionally, in order to evaluate the weighting values of the various performance characteristics, principal component analysis is used to express their relative importance properly and objectively. The results of the confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully find the optimal thickness combination for each layer of the sound package material. The presented method can therefore be an effective tool to reduce vehicle exterior noise and lower the weight of the sound package, and it should also be helpful for other applications in the automotive industry, such as the First Automobile Works in China, Changan Automobile in China, etc.
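
    The grey relational grade itself is straightforward to compute: normalize each response, take deviations Δ from the ideal sequence, form grey relational coefficients ξ = (Δmin + ζΔmax)/(Δ + ζΔmax) with the usual ζ = 0.5, and take a weighted average. The sketch below uses equal weights rather than the PCA-derived weights of the paper, and invented trial data (SPL in dB, package weight in kg):

```python
def normalize_smaller_better(values):
    """Map responses where smaller is better onto [0, 1] (1 = best)."""
    lo, hi = min(values), max(values)
    return [(hi - v) / (hi - lo) for v in values]

def grey_relational_grade(columns, weights=None, zeta=0.5):
    """Grey relational grade per experiment from normalized response columns."""
    n = len(columns[0])
    weights = weights or [1.0 / len(columns)] * len(columns)
    coeffs = []
    for col in columns:
        # Deviation from the ideal sequence (all ones):
        deltas = [1.0 - x for x in col]
        dmin, dmax = min(deltas), max(deltas)
        coeffs.append([(dmin + zeta * dmax) / (d + zeta * dmax) for d in deltas])
    return [sum(w * coeffs[j][i] for j, w in enumerate(weights)) for i in range(n)]

# Hypothetical trials: exterior SPL (dB) and package weight (kg), both smaller-better.
spl    = normalize_smaller_better([68.0, 65.0, 66.5])
weight = normalize_smaller_better([4.5, 4.2, 5.1])
grades = grey_relational_grade([spl, weight])
best = grades.index(max(grades))
print(best, [round(g, 3) for g in grades])  # -> 1 [0.467, 1.0, 0.417]
```

    The trial with the highest grade (here the second one) is the recommended parameter combination.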

  14. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

    The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.

  15. Spectral timbre perception in ferrets: discrimination of artificial vowels under different listening conditions.

    PubMed

    Bizley, Jennifer K; Walker, Kerry M M; King, Andrew J; Schnupp, Jan W H

    2013-01-01

    Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/ and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners.

  16. Spectral timbre perception in ferrets; discrimination of artificial vowels under different listening conditions

    PubMed Central

    Bizley, Jennifer K; Walker, Kerry MM; King, Andrew J; Schnupp, Jan WH

    2013-01-01

    Spectral timbre is an acoustic feature that enables human listeners to determine the identity of a spoken vowel. Despite its importance to sound perception, little is known about the neural representation of sound timbre and few psychophysical studies have investigated timbre discrimination in non-human species. In this study, ferrets were positively conditioned to discriminate artificial vowel sounds in a two-alternative-forced-choice paradigm. Animals quickly learned to discriminate the vowel sound /u/ from /ε/, and were immediately able to generalize across a range of voice pitches. They were further tested in a series of experiments designed to assess how well they could discriminate these vowel sounds under different listening conditions. First, a series of morphed vowels was created by systematically shifting the location of the first and second formant frequencies. Second, the ferrets were tested with single formant stimuli designed to assess which spectral cues they could be using to make their decisions. Finally, vowel discrimination thresholds were derived in the presence of noise maskers presented from either the same or a different spatial location. These data indicate that ferrets show robust vowel discrimination behavior across a range of listening conditions and that this ability shares many similarities with human listeners. PMID:23297909

  17. Tidal Volume Estimation Using the Blanket Fractal Dimension of the Tracheal Sounds Acquired by Smartphone

    PubMed Central

    Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.

    2015-01-01

    In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone, from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times; first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values, obtained with a Respitrace system, which was considered as a reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we estimated the tidal volume from the same sounds by using SE as well. The estimation performance of the BFD and SE methods was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (the NRMSE was at least a factor of two smaller). The smallest NRMSE, 15.877% ± 9.246% (mean ± standard deviation), was obtained with the BFD and the exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929
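
    The blanket fractal dimension comes from the morphological covering method: an upper "blanket" u is dilated and a lower blanket b eroded around the signal at successive scales ε, and the covered area A(ε) = V(ε)/(2ε) scales as ε^(1−D). A minimal 1-D sketch (not the authors' implementation; signal amplitudes are assumed scaled so that roughness registers against the unit vertical step):

```python
import math

def blanket_fractal_dimension(signal, max_scale=8):
    """Blanket (morphological covering) fractal dimension of a 1-D signal.

    The upper blanket u is dilated and the lower blanket b eroded once per
    scale; the covered area A(eps) = V(eps) / (2*eps) scales as eps**(1 - D).
    """
    n = len(signal)
    u = list(signal)
    b = list(signal)
    log_eps, log_area = [], []
    for eps in range(1, max_scale + 1):
        u = [max(u[i] + 1, u[max(i - 1, 0)], u[min(i + 1, n - 1)]) for i in range(n)]
        b = [min(b[i] - 1, b[max(i - 1, 0)], b[min(i + 1, n - 1)]) for i in range(n)]
        volume = sum(ui - bi for ui, bi in zip(u, b))
        log_eps.append(math.log(eps))
        log_area.append(math.log(volume / (2.0 * eps)))
    # D = 1 - slope of the least-squares line through (log eps, log A).
    m = len(log_eps)
    mx = sum(log_eps) / m
    my = sum(log_area) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(log_eps, log_area))
             / sum((x - mx) ** 2 for x in log_eps))
    return 1.0 - slope

# A straight line is smooth (D = 1); a rapidly alternating signal is rougher.
print(round(blanket_fractal_dimension(list(range(64))), 2),
      round(blanket_fractal_dimension([0, 10] * 64, ), 2))
```
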

  18. Tidal volume estimation using the blanket fractal dimension of the tracheal sounds acquired by smartphone.

    PubMed

    Reljin, Natasa; Reyes, Bersain A; Chon, Ki H

    2015-04-27

    In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone, from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times; first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values, obtained with a Respitrace system, which was considered as a reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we estimated the tidal volume from the same sounds by using SE as well. The estimation performance of the BFD and SE methods was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (the NRMSE was at least a factor of two smaller). The smallest NRMSE, 15.877% ± 9.246% (mean ± standard deviation), was obtained with the BFD and the exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days.

  19. Sex-biased sound symbolism in english-language first names.

    PubMed

    Pitcher, Benjamin J; Mesoudi, Alex; McElligott, Alan G

    2013-01-01

    Sexual selection has resulted in sex-based size dimorphism in many mammals, including humans. In Western societies, average to taller stature men and comparatively shorter, slimmer women have higher reproductive success and are typically considered more attractive. This size dimorphism also extends to vocalisations in many species, again including humans, with larger individuals exhibiting lower formant frequencies than smaller individuals. Further, across many languages there are associations between phonemes and the expression of size (e.g. large /a, o/, small /i, e/), consistent with the frequency-size relationship in vocalisations. We suggest that naming preferences are a product of this frequency-size relationship, driving male names to sound larger and female names smaller, through sound symbolism. In a 10-year dataset of the most popular British, Australian and American names we show that male names are significantly more likely to contain larger sounding phonemes (e.g. "Thomas"), while female names are significantly more likely to contain smaller phonemes (e.g. "Emily"). The desire of parents to have comparatively larger, more masculine sons, and smaller, more feminine daughters, and the increased social success that accompanies more sex-stereotyped names, is likely to be driving English-language first names to exploit sound symbolism of size in line with sexual body size dimorphism.

  20. Sex-Biased Sound Symbolism in English-Language First Names

    PubMed Central

    Pitcher, Benjamin J.; Mesoudi, Alex; McElligott, Alan G.

    2013-01-01

    Sexual selection has resulted in sex-based size dimorphism in many mammals, including humans. In Western societies, average to taller stature men and comparatively shorter, slimmer women have higher reproductive success and are typically considered more attractive. This size dimorphism also extends to vocalisations in many species, again including humans, with larger individuals exhibiting lower formant frequencies than smaller individuals. Further, across many languages there are associations between phonemes and the expression of size (e.g. large /a, o/, small /i, e/), consistent with the frequency-size relationship in vocalisations. We suggest that naming preferences are a product of this frequency-size relationship, driving male names to sound larger and female names smaller, through sound symbolism. In a 10-year dataset of the most popular British, Australian and American names we show that male names are significantly more likely to contain larger sounding phonemes (e.g. “Thomas”), while female names are significantly more likely to contain smaller phonemes (e.g. “Emily”). The desire of parents to have comparatively larger, more masculine sons, and smaller, more feminine daughters, and the increased social success that accompanies more sex-stereotyped names, is likely to be driving English-language first names to exploit sound symbolism of size in line with sexual body size dimorphism. PMID:23755148

  1. Variation in effectiveness of a cardiac auscultation training class with a cardiology patient simulator among heart sounds and murmurs.

    PubMed

    Kagaya, Yutaka; Tabata, Masao; Arata, Yutaro; Kameoka, Junichi; Ishii, Seiichi

    2017-08-01

    Effectiveness of simulation-based education in cardiac auscultation training is controversial, and may vary among a variety of heart sounds and murmurs. We investigated whether a single auscultation training class using a cardiology patient simulator for medical students provides competence required for clinical clerkship, and whether students' proficiency after the training differs among heart sounds and murmurs. A total of 324 fourth-year medical students (93-117/year for 3 years) were divided into groups of 6-8 students; each group participated in a three-hour training session using a cardiology patient simulator. After a mini-lecture and facilitated training, each student took two different tests. In the first test, they tried to identify three sounds of Category A (non-split, respiratory split, and abnormally wide split S2s) in random order, after being informed that they were from Category A. They then did the same with sounds of Category B (S3, S4, and S3+S4) and Category C (four heart murmurs). In the second test, they tried to identify only one from each of the three categories in random order without any category information. The overall accuracy rate declined from 80.4% in the first test to 62.0% in the second test (p<0.0001). The accuracy rate of all the heart murmurs was similar in the first (81.3%) and second tests (77.5%). That of all the heart sounds (S2/S3/S4) decreased from 79.9% to 54.3% in the second test (p<0.0001). The individual accuracy rate decreased in the second test as compared with the first test in all three S2s, S3, and S3+S4 (p<0.0001). Medical students may be less likely to correctly identify S2/S3/S4 as compared with heart murmurs in a situation close to clinical setting even immediately after training. We may have to consider such a characteristic of students when we provide them with cardiac auscultation training.

  2. A comparative study of electronic stethoscopes for cardiac auscultation.

    PubMed

    Pinto, C; Pereira, D; Ferreira-Coimbra, J; Portugues, J; Gama, V; Coimbra, M

    2017-07-01

    There are several electronic stethoscopes available on the market today, with very high potential for healthcare, namely telemedicine, decision support, and education. However, no recent comparative studies have been published on the recording quality of auscultation sounds. In this study we aim to: a) define a ranking, according to expert opinion, of 6 of the most relevant electronic stethoscopes on the market today; b) verify whether there are any relations between a stethoscope's performance and the type of pathology present; c) analyze whether some pathologies are more easily identified than others when using electronic auscultation. Our methodology consisted of creating two study groups: the first group included 18 cardiologists and cardiology house officers, acting as the gold standard of this work; the second included 30 medical students. Using a database of heart sounds recorded in real hospital environments, we applied questionnaires to observers from each group. The first group listened to 60 cardiac auscultations recorded by the 6 stethoscopes, and each observer was asked to identify the pathological sound present: aortic stenosis, mitral regurgitation, or normal. The second group was asked to choose, between two auscultation recordings, the one with the best sound quality for identifying pathological sounds. Results include a total of 1080 evaluations, in which 72% of cases were correctly diagnosed. A detailed breakdown of these results is presented in this paper. In conclusion, the impact of the differences between stethoscopes is very small, given that we did not find statistically significant differences between any pair of stethoscopes. Normal sounds proved easier to identify than pathological sounds, but we found no differences between stethoscopes in this identification.

  3. Software development for the analysis of heartbeat sounds with LabVIEW in diagnosis of cardiovascular disease.

    PubMed

    Topal, Taner; Polat, Hüseyin; Güler, Inan

    2008-10-01

    In this paper, time-frequency spectral analysis software (Heart Sound Analyzer) for the computer-aided analysis of cardiac sounds has been developed with LabVIEW. The software modules reveal important information on cardiovascular disorders and can also assist general physicians in reaching more accurate and reliable diagnoses at early stages. Heart Sound Analyzer (HSA) software can compensate for the shortage of expert doctors and help them in rural as well as urban clinics and hospitals. HSA has two main blocks: data acquisition and preprocessing, and time-frequency spectral analysis. The heart sounds are first acquired using a modified stethoscope with a built-in electret microphone. The signals are then analysed using time-frequency/scale spectral analysis techniques such as the STFT, the Wigner-Ville distribution, and wavelet transforms. The HSA modules have been tested with real heart sounds from 35 volunteers and proved to be quite efficient and robust while dealing with a large variety of pathological conditions.
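
    Of the three analyses named, the STFT is the simplest to sketch: window the signal into overlapping frames, take an FFT per frame, and read off the dominant frequency over time. A toy version on a synthetic "lub-dub" (the component frequencies, burst timings, and sampling rate are invented for illustration, not taken from the paper):

```python
import numpy as np

def stft_mag(x, frame=256, hop=128, fs=2000.0):
    """Magnitude STFT: Hann-windowed frames -> rFFT per frame."""
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop : i * hop + frame] * win for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    times = (np.arange(n_frames) * hop + frame / 2) / fs
    return freqs, times, spec.T   # spec.T: (frequency, time)

# Synthetic "lub-dub": two short tone bursts (~60 Hz and ~120 Hz) in one second.
fs = 2000.0
t = np.arange(int(fs)) / fs
x = (np.sin(2 * np.pi * 60 * t) * ((t > 0.10) & (t < 0.20))
     + np.sin(2 * np.pi * 120 * t) * ((t > 0.45) & (t < 0.55)))
freqs, times, spec = stft_mag(x)
# Dominant frequency in the frame nearest each burst:
f1 = freqs[spec[:, np.argmin(np.abs(times - 0.15))].argmax()]
f2 = freqs[spec[:, np.argmin(np.abs(times - 0.50))].argmax()]
print(float(f1), float(f2))
```

    Each recovered frequency lands within one FFT bin (fs/frame ≈ 7.8 Hz here) of the true burst frequency; finer resolution trades off against time localization.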

  4. The Development of Spontaneous Sound-Shape Matching in Monolingual and Bilingual Infants during the First Year

    ERIC Educational Resources Information Center

    Pejovic, Jovana; Molnar, Monika

    2017-01-01

    Recently it has been proposed that sensitivity to nonarbitrary relationships between speech sounds and objects potentially bootstraps lexical acquisition. However, it is currently unclear whether preverbal infants (e.g., before 6 months of age) with different linguistic profiles are sensitive to such nonarbitrary relationships. Here, the authors…

  5. Sounds and Noises. A Position Paper on Noise Pollution.

    ERIC Educational Resources Information Center

    Chapman, Thomas L.

    This position paper focuses on noise pollution and the problems and solutions associated with this form of pollution. The paper is divided into the following five sections: Noise and the Ear, Noise Measurement, Ill Effects of Noise, Acoustics and Action, and Programs and Activities. The first section identifies noise and sound, the beginnings of…

  6. Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…

  7. Improving Classroom Acoustics (ICA): A Three-Year FM Sound Field Classroom Amplification Study.

    ERIC Educational Resources Information Center

    Rosenberg, Gail Gegg; Blake-Rahter, Patricia; Heavner, Judy; Allen, Linda; Redmond, Beatrice Myers; Phillips, Janet; Stigers, Kathy

    1999-01-01

    The Improving Classroom Acoustics (ICA) special project was designed to determine if students' listening and learning behaviors improved as a result of an acoustical environment enhanced through the use of FM sound field classroom amplification. The 3-year project involved 2,054 students in 94 general education kindergarten, first-, and…

  8. Learning Midlevel Auditory Codes from Natural Sound Statistics.

    PubMed

    Młynarski, Wiktor; McDermott, Josh H

    2018-03-01

    Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.

  9. Rene Theophile Hyacinthe Laënnec (1781–1826): The Man Behind the Stethoscope

    PubMed Central

    Roguin, Ariel

    2006-01-01

    Rene Theophile Hyacinthe Laënnec (1781–1826) was a French physician who, in 1816, invented the stethoscope. Using this new instrument, he investigated the sounds made by the heart and lungs and determined that his diagnoses were supported by the observations made during autopsies. Laënnec later published the first seminal work on the use of listening to body sounds, De L’auscultation Mediate (On Mediate Auscultation). Laënnec is considered the father of clinical auscultation and wrote the first descriptions of bronchiectasis and cirrhosis and also classified pulmonary conditions such as pneumonia, bronchiectasis, pleurisy, emphysema, pneumothorax, phthisis and other lung diseases from the sounds he heard with his invention. Laënnec perfected the art of physical examination of the chest and introduced many clinical terms still used today. PMID:17048358

  10. Inexpensive Instruments for a Sound Unit

    NASA Astrophysics Data System (ADS)

    Brazzle, Bob

    2011-04-01

    My unit on sound and waves is embedded within a long-term project in which my high school students construct a musical instrument out of common materials. The unit culminates with a performance assessment: students play the first four measures of "Somewhere Over the Rainbow"—chosen because of the octave interval of the first two notes—in the key of C, and write a short paper describing the theory underlying their instrument. My students have done this project for the past three years, and it continues to evolve. This year I added new instructional materials that I developed using a freeware program called Audacity. This software is very intuitive, and my students used it to develop their musical instruments. In this paper I will describe some of the inexpensive instructional materials in my sound unit, and how they fit with my learning goals.
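    A small numeric aside on the octave interval mentioned above (twelve-tone equal temperament with A4 = 440 Hz is my assumption; the article does not specify a tuning):

```python
# Equal-temperament pitch: each semitone multiplies frequency by 2**(1/12).
A4 = 440.0

def note_freq(semitones_from_A4):
    return A4 * 2 ** (semitones_from_A4 / 12)

C4 = note_freq(-9)   # middle C, the first note of "Somewhere" in the key of C
C5 = note_freq(3)    # one octave up, the second note
print(round(C4, 2), round(C5, 2))
```

In equal temperament an octave is exactly a doubling, so the second note of the melody lands at twice the frequency of the first, which students can verify with Audacity's spectrum view.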

  11. Drift and geodesic effects on the ion sound eigenmode in tokamak plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elfimov, A. G., E-mail: elfimov@if.usp.br; Smolyakov, A. I., E-mail: andrei.smolyakov@usask.ca; Melnikov, A. V.

    A kinetic treatment of geodesic acoustic modes (GAMs) is presented, taking into account ion parallel dynamics, drift effects, and the second poloidal harmonic. It is shown that the first and second harmonics of the ion sound modes, which have positive and negative radial dispersion, respectively, can be coupled due to the geodesic and drift effects. This coupling results in a drift geodesic ion sound eigenmode with a frequency below the standard GAM continuum frequency. Such an eigenmode may explain the split modes observed in some experiments.

  12. Recognition of Modified Conditioning Sounds by Competitively Trained Guinea Pigs

    PubMed Central

    Ojima, Hisayuki; Horikawa, Junsei

    2016-01-01

    The guinea pig (GP) is an often-used species in hearing research. However, behavioral studies are rare, especially in the context of sound recognition, because of difficulties in training these animals. We examined sound recognition in a social competitive setting in order to determine whether this setting could be used as an easy model. Two starved GPs were placed in the same training arena and compelled to compete for food after hearing a conditioning sound (CS), which was a repeat of almost identical sound segments. Through a 2-week intensive training, animals were trained to demonstrate a set of distinct behaviors solely to the CS. Then, each of them was subjected to generalization tests for recognition of sounds that had been modified from the CS in spectral, fine temporal and tempo (i.e., intersegment interval, ISI) dimensions. Results showed that they discriminated between the CS and band-rejected test sounds but had no preference for a particular frequency range for the recognition. In contrast, sounds modified in the fine temporal domain were largely perceived to be in the same category as the CS, except for the test sound generated by fully reversing the CS in time. Animals also discriminated sounds played at different tempos. Test sounds with ISIs shorter than that of the multi-segment CS were discriminated from the CS, while test sounds with ISIs longer than that of the CS segments were not. For the shorter ISIs, most animals initiated apparently positive food-access behavior as they did in response to the CS, but discontinued it during the sound-on period, probably because of later recognition of the tempo. Interestingly, the population range and mean of the delay time before animals initiated the food-access behavior were very similar among the different ISI test sounds. This study demonstrates, for the first time, a wide range of sound discrimination abilities in the GP and provides a way to examine tempo perception mechanisms using this animal species. PMID:26858617

  13. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography.

    PubMed

    Ishii, Akira; Tanaka, Masaaki; Iwamae, Masayoshi; Kim, Chongsoo; Yamano, Emi; Watanabe, Yasuyoshi

    2013-06-13

    It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Ten and nine healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory task trials, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but did not perform the conditioning session. MEG was not recorded in the control experiment. The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session. We demonstrated that metronome sounds can cause a mental fatigue sensation as a result of repeated pairings of the sounds with mental fatigue and that the insular cortex is involved in the neural substrates of this phenomenon.

  14. Learning to Pronounce First Words in Three Languages: An Investigation of Caregiver and Infant Behavior Using a Computational Model of an Infant

    PubMed Central

    Howard, Ian S.; Messum, Piers

    2014-01-01

    Words are made up of speech sounds. Almost all accounts of child speech development assume that children learn the pronunciation of first language (L1) speech sounds by imitation, most claiming that the child performs some kind of auditory matching to the elements of ambient speech. However, there is evidence to support an alternative account and we investigate the non-imitative child behavior and well-attested caregiver behavior that this account posits using Elija, a computational model of an infant. Through unsupervised active learning, Elija began by discovering motor patterns, which produced sounds. In separate interaction experiments, native speakers of English, French and German then played the role of his caregiver. In their first interactions with Elija, they were allowed to respond to his sounds if they felt this was natural. We analyzed the interactions through phonemic transcriptions of the caregivers' utterances and found that they interpreted his output within the framework of their native languages. Their form of response was almost always a reformulation of Elija's utterance into well-formed sounds of L1. Elija retained those motor patterns to which a caregiver responded and formed associations between his motor pattern and the response it provoked. Thus in a second phase of interaction, he was able to parse input utterances in terms of the caregiver responses he had heard previously, and respond using his associated motor patterns. This capacity enabled the caregivers to teach Elija to pronounce some simple words in their native languages, by his serial imitation of the words' component speech sounds. Overall, our results demonstrate that the natural responses and behaviors of human subjects to infant-like vocalizations can take a computational model from a biologically plausible initial state through to word pronunciation. This provides support for an alternative to current auditory matching hypotheses for how children learn to pronounce. PMID:25333740

  15. Results of analysis of flight and ground observation materials for first year of first stage of "Program of experimental research to develop methods for remote sounding of soils and vegetation on analogous sections of the United States and USSR for 1975-1980"

    NASA Technical Reports Server (NTRS)

    1978-01-01

    A joint U.S.S.R. and United States program to develop methods for remote sounding of soils and vegetation is reported. The program is being conducted on similar sections of land in the USSR and the United States. Details of the data obtained and the type of sensing equipments employed are provided in the appendices.

  16. System and method to determine thermophysical properties of a multi-component gas

    DOEpatents

    Morrow, Thomas B.; Behring, II, Kendricks A.

    2003-08-05

    A system and method to characterize natural gas hydrocarbons using a single inferential property, such as standard sound speed, when the concentrations of the diluent gases (e.g., carbon dioxide and nitrogen) are known. The system to determine a thermophysical property of a gas having a first plurality of components comprises a sound velocity measurement device, a concentration measurement device, and a processor to determine a thermophysical property as a function of a correlation between the thermophysical property, the speed of sound, and the concentration measurements, wherein the number of concentration measurements is less than the number of components in the gas. The method includes the steps of determining the speed of sound in the gas, determining a plurality of gas component concentrations in the gas, and determining the thermophysical property as a function of a correlation between the thermophysical property, the speed of sound, and the plurality of concentrations.
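    The patented correlation itself is not reproduced in the abstract. As a hedged illustration of why sound speed constrains gas properties at all, here is a plain ideal-gas estimate of the speed of sound for an assumed methane/CO2/N2 mixture (compositions and heat capacities are textbook-style example values, not the patent's data):

```python
import math

R = 8.314462618  # universal gas constant, J/(mol*K)

# name: (mole fraction, molar mass in kg/mol, Cp near 298 K in J/(mol*K))
# Illustrative values for a methane-rich gas; assumptions, not measured data.
components = {
    "CH4": (0.90, 16.043e-3, 35.7),
    "CO2": (0.05, 44.010e-3, 37.1),
    "N2":  (0.05, 28.014e-3, 29.1),
}

T = 298.15  # temperature, K
M_mix = sum(x * M for x, M, _ in components.values())
Cp_mix = sum(x * Cp for x, _, Cp in components.values())
Cv_mix = Cp_mix - R                     # ideal-gas relation Cp - Cv = R
gamma = Cp_mix / Cv_mix                 # heat-capacity ratio
c = math.sqrt(gamma * R * T / M_mix)    # ideal-gas speed of sound, m/s
print(round(c, 1))
```

A real correlation like the one claimed would replace the ideal-gas model with one fitted to measured sound speeds, inverting from speed plus diluent concentrations back to the thermophysical property of interest.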

  17. Real time sound analysis for medical remote monitoring.

    PubMed

    Istrate, Dan; Binet, Morgan; Cheng, Sreng

    2008-01-01

    The increase of the aging population in Europe means more people living alone at home, with an increased risk of home accidents or falls. In order to prevent or detect a distress situation in the case of an elderly person living alone, a remote monitoring system based on analysis of the sound environment can be used. We have already proposed a system which monitors the sound environment and identifies everyday life sounds and distress expressions in order to contribute to an alarm decision. This first system uses a classical sound card on a PC or embedded PC, allowing only single-channel monitoring. In this paper, we propose a new architecture for the remote monitoring system, which relies on a real-time multichannel implementation based on a USB acquisition card. This structure allows monitoring of eight channels in order to cover all the rooms of an apartment. Moreover, SNR estimation currently drives adaptation of the recognition models to the acoustic environment.
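    The abstract does not describe the detection algorithm in detail; the sketch below (NumPy; the channel count matches the paper, but frame size, threshold, and test signal are all assumptions) shows one minimal front end for such a multichannel monitor, flagging frames whose short-term energy rises far above a crude noise floor:

```python
import numpy as np

fs = 16000      # sample rate, Hz (assumed)
frame = 512     # analysis frame length in samples (assumed)
rng = np.random.default_rng(0)

# Synthetic stand-in for 8 microphone channels: low-level noise everywhere,
# plus one loud 440 Hz burst on channel 2 playing the role of a sound event.
x = 0.01 * rng.standard_normal((8, fs))
x[2, 8000:8512] += np.sin(2 * np.pi * 440 * np.arange(512) / fs)

events = []
for ch in range(8):
    frames = x[ch, : fs // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).mean(axis=1)     # short-term energy per frame
    floor = np.median(energy)               # crude noise-floor estimate
    hits = np.nonzero(energy > 20 * floor)[0]
    if hits.size:
        events.append((ch, int(hits[0])))   # (channel, first flagged frame)
print(events)
```

On this synthetic input only channel 2 contains a burst, so only that channel reports an event; a real system would pass flagged frames on to the sound/speech classifiers mentioned above.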

  18. Speech Sound Processing Deficits and Training-Induced Neural Plasticity in Rats with Dyslexia Gene Knockdown

    PubMed Central

    Centanni, Tracy M.; Chen, Fuyi; Booker, Anne M.; Engineer, Crystal T.; Sloan, Andrew M.; Rennaker, Robert L.; LoTurco, Joseph J.; Kilgard, Michael P.

    2014-01-01

    In utero RNAi of the dyslexia-associated gene Kiaa0319 in rats (KIA-) degrades cortical responses to speech sounds and increases trial-by-trial variability in onset latency. We tested the hypothesis that KIA- rats would be impaired at speech sound discrimination. KIA- rats needed twice as much training in quiet conditions to perform at control levels and remained impaired at several speech tasks. Focused training using truncated speech sounds was able to normalize speech discrimination in quiet and background noise conditions. Training also normalized trial-by-trial neural variability and temporal phase locking. Cortical activity from speech trained KIA- rats was sufficient to accurately discriminate between similar consonant sounds. These results provide the first direct evidence that assumed reduced expression of the dyslexia-associated gene KIAA0319 can cause phoneme processing impairments similar to those seen in dyslexia and that intensive behavioral therapy can eliminate these impairments. PMID:24871331

  19. Sound absorption of a finite micro-perforated panel backed by a shunted loudspeaker.

    PubMed

    Tao, Jiancheng; Jing, Ruixiang; Qiu, Xiaojun

    2014-01-01

    Deep back cavities are usually required for micro-perforated panel (MPP) constructions to achieve good low-frequency absorption. To overcome this problem, a closed-box loudspeaker with a shunt circuit is proposed to substitute for the back wall of the cavity of the MPP construction, forming a composite absorber. Based on the equivalent circuit model, the acoustic impedance of the shunted loudspeaker is formulated first; then a prediction model of the sound absorption of the MPP backed by the shunted loudspeaker is developed by employing the modal solution of a finite-size MPP coupled to an air cavity with an impedance back wall. The MPP absorbs mid- to high-frequency sound and, with properly adjusted electrical parameters of its shunt circuit, the shunted loudspeaker absorbs low-frequency sound, so the composite absorber provides a compact solution to broadband sound control. Numerical simulations and experiments are carried out to validate the model.

  20. The Tympanic Membrane Motion in Forward and Reverse Middle-Ear Sound Transmission

    NASA Astrophysics Data System (ADS)

    Cheng, Jeffrey Tao; Harrington, Ellery; Horwitz, Rachelle; Furlong, Cosme; Rosowski, John J.

    2011-11-01

    Sound-induced displacement of the tympanic membrane (TM) is the first stage in the forward transformation of environmental sound to sound within the inner ear, while displacement of the TM induced by mechanical motions of the ossicular chain is the last stage in the reverse transformation of sound generated within the inner ear to clinically valuable otoacoustic emissions (OAEs). In this study, we use stroboscopic holographic interferometry to study motions of the human cadaveric TM evoked by both forward and reverse stimuli. During forward acoustic stimulation, pure tones from 500 to 10000 Hz are used to stimulate the TM, while reverse stimulation is produced by direct mechanical stimulation of the ossicular chain. The TM surface motions in response to both forward and reverse stimuli show differences and similarities, including the modal motion patterns at specific frequencies as well as the presence and directions of traveling waves on the TM surface.

  1. Geographical variation in sound production in the anemonefish Amphiprion akallopisos

    PubMed Central

    Parmentier, E; Lagardère, J.P; Vandewalle, P; Fine, M.L

    2005-01-01

    Because of pelagic-larval dispersal, coral-reef fishes are distributed widely with minimal genetic differentiation between populations. Amphiprion akallopisos, a clownfish that uses sound production to defend its anemone territory, has a wide but disjunct distribution in the Indian Ocean. We compared sounds produced by these fishes from populations in Madagascar and Indonesia, separated by 6500 km. Differentiation of agonistic calls into distinct types indicates a complexity not previously recorded in fishes' acoustic communication. Moreover, various acoustic parameters, including peak frequency, pulse duration and number of peaks per pulse, differed between the two populations. This geographic comparison is the first to demonstrate ‘dialects’ in a marine fish species, and these differences in sound parameters suggest genetic divergence between the two populations. These results suggest an approach for investigating the role of sound in fish behaviour, reproductive divergence and speciation. PMID:16087425

  2. A mechanism study of sound wave-trapping barriers.

    PubMed

    Yang, Cheng; Pan, Jie; Cheng, Li

    2013-09-01

    The performance of a sound barrier is usually degraded if a large reflecting surface is placed on the source side. A wave-trapping barrier (WTB), with its inner surface covered by wedge-shaped structures, has been proposed to confine waves within the area between the barrier and the reflecting surface, and thus improve the performance. In this paper, the deterioration in performance of a conventional sound barrier due to the reflecting surface is first explained in terms of the resonance effect of the trapped modes. At each resonance frequency, a strong and mode-controlled sound field is generated by the noise source both within and in the vicinity outside the region bounded by the sound barrier and the reflecting surface. It is found that the peak sound pressures in the barrier's shadow zone, which correspond to the minimum values in the barrier's insertion loss, are largely determined by the resonance frequencies and by the shapes and losses of the trapped modes. These peak pressures usually result in a high sound-intensity component impinging normally on the barrier surface near the top. The WTB can alter the sound wave diffraction at the top of the barrier if the wavelengths of the sound are comparable to or smaller than the dimensions of the wedges. In this case, the modified barrier profile is capable of re-organizing the pressure distribution within the bounded domain and altering the acoustic properties near the top of the sound barrier.
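    The paper's geometry and numbers are not reproduced in the abstract. As a rough sketch of the trapped-mode picture, here are the resonance frequencies of plane waves bouncing between two parallel rigid surfaces a distance L apart (both the rigid-wall idealization and the separation are assumptions for illustration only):

```python
# Plane-wave resonances of a rigid-walled gap of width L: f_n = n * c / (2 * L).
c = 343.0   # speed of sound in air, m/s
L = 2.0     # barrier-to-reflecting-surface separation, m (assumed)

trapped = [n * c / (2 * L) for n in range(1, 5)]   # first four mode frequencies, Hz
print(trapped)
```

Near such frequencies a strong mode-controlled field builds up in the gap, which is the mechanism the abstract invokes for the insertion-loss minima of a conventional barrier.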

  3. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
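    The authors' model is not spelled out in the abstract; a standard first-principles ingredient consistent with their description is Woodworth's spherical-head formula for the ITD (the head radius and sound speed below are typical assumed values, not the study's parameters):

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation:
    ITD = (r / c) * (theta + sin(theta))."""
    th = math.radians(azimuth_deg)
    return head_radius_m / c * (th + math.sin(th))

# ITD grows sublinearly toward the side, so a uniform just-noticeable
# difference in ITD maps to coarser angular acuity away from the midline.
for az in (0, 30, 60, 90):
    print(az, round(itd_seconds(az) * 1e6), "us")
```

Because the ITD curve flattens toward ±90°, a uniform just-noticeable difference in ITD, as the modelling suggests, reproduces the observed midline-best localization acuity.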

  4. Chatty maps: constructing sound maps of urban areas from social media data

    PubMed Central

    Aiello, Luca Maria; Schifanella, Rossano; Quercia, Daniele; Aletta, Francesco

    2016-01-01

    Urban sound has a huge influence over how we perceive places. Yet, city planning is concerned mainly with noise, simply because annoying sounds come to the attention of city officials in the form of complaints, whereas general urban sounds do not come to the attention as they cannot be easily captured at city scale. To capture both unpleasant and pleasant sounds, we applied a new methodology that relies on tagging information of georeferenced pictures to the cities of London and Barcelona. To begin with, we compiled the first urban sound dictionary and compared it with the one produced by collating insights from the literature: ours was experimentally more valid (if correlated with official noise pollution levels) and offered a wider geographical coverage. From picture tags, we then studied the relationship between soundscapes and emotions. We learned that streets with music sounds were associated with strong emotions of joy or sadness, whereas those with human sounds were associated with joy or surprise. Finally, we studied the relationship between soundscapes and people's perceptions and, in so doing, we were able to map which areas are chaotic, monotonous, calm and exciting. Those insights promise to inform the creation of restorative experiences in our increasingly urbanized world. PMID:27069661

  5. Development of linear projecting in studies of non-linear flow. Acoustic heating induced by non-periodic sound

    NASA Astrophysics Data System (ADS)

    Perelomova, Anna

    2006-08-01

    The equation of energy balance is subdivided into two dynamic equations, one describing evolution of the dominant sound and the second responsible for acoustic heating. The first is the famous KZK equation, and the second is a novel equation governing acoustic heating. The novel dynamic equation covers both periodic and non-periodic sound. Quasi-plane flow geometry is assumed. The subdivision is based on the specific relations satisfied by each mode. Media with arbitrary thermal T(p,ρ) and caloric e(p,ρ) equations of state are considered. The individual roles of thermal conductivity and viscosity in the heating induced by aperiodic sound, in ideal gases and in media different from ideal gases, are discussed.

  6. Equivalent modulus method for finite element simulation of the sound absorption of anechoic coating backed with orthogonally rib-stiffened plate

    NASA Astrophysics Data System (ADS)

    Jin, Zhongkun; Yin, Yao; Liu, Bilong

    2016-03-01

    The finite element method is often used to investigate the sound absorption of an anechoic coating backed by an orthogonally rib-stiffened plate. Since the anechoic coating contains cavities, the number of grid nodes in a periodic unit cell is usually large. An equivalent modulus method is proposed to reduce this large number of nodes by calculating an equivalent homogeneous layer. Applications of this method to several models show that it predicts the sound absorption coefficient of such structures well over a wide frequency range. Based on the simulation results, the sound absorption performance of the structure and the influence of different backings on the first absorption peak are also discussed.

  7. The Sound Generated by Mid-Ocean Ridge Black Smoker Hydrothermal Vents

    PubMed Central

    Crone, Timothy J.; Wilcock, William S.D.; Barclay, Andrew H.; Parsons, Jeffrey D.

    2006-01-01

    Hydrothermal flow through seafloor black smoker vents is typically turbulent and vigorous, with speeds often exceeding 1 m/s. Although theory predicts that these flows will generate sound, the prevailing view has been that black smokers are essentially silent. Here we present the first unambiguous field recordings showing that these vents radiate significant acoustic energy. The sounds contain a broadband component and narrowband tones which are indicative of resonance. The amplitude of the broadband component shows tidal modulation which is indicative of discharge rate variations related to the mechanics of tidal loading. Vent sounds will provide researchers with new ways to study flow through sulfide structures, and may provide some local organisms with behavioral or navigational cues. PMID:17205137

  8. Topological Transport of Light and Sound

    NASA Astrophysics Data System (ADS)

    Brendel, Christian; Peano, Vittorio; Schmidt, Michael; Marquardt, Florian

    Since they exploit global features of a material's band structure, topological states of matter are particularly robust. Having already been observed for electrons, atoms, and photons, it is an outstanding challenge to create a Chern insulator of sound waves in the solid state. In this work, we propose an implementation based on cavity optomechanics in a photonic crystal. We demonstrate the feasibility of our proposal by means of an effective lattice model as well as first-principles simulations. The topological properties of the sound waves can be wholly tuned in situ by adjusting the amplitude and frequency of a driving laser that controls the optomechanical interaction between light and sound. The resulting chiral, topologically protected phonon transport can be probed completely optically.

  9. Oyster Larvae Settle in Response to Habitat-Associated Underwater Sounds

    PubMed Central

    Lillis, Ashlee; Eggleston, David B.; Bohnenstiehl, DelWayne R.

    2013-01-01

    Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as a broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5–20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving settlement and recruitment patterns in marine communities. PMID:24205381

  10. Oyster larvae settle in response to habitat-associated underwater sounds.

    PubMed

    Lillis, Ashlee; Eggleston, David B; Bohnenstiehl, DelWayne R

    2013-01-01

    Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as a broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving settlement and recruitment patterns in marine communities.

  11. The Human Voice and the Silent Cinema.

    ERIC Educational Resources Information Center

    Berg, Charles M.

    This paper traces the history of motion pictures from Thomas Edison's vision in 1887 of an instrument that recorded body movements to the development of synchronized sound-motion films in the late 1920s. The first synchronized sound film was made and demonstrated by W. K. L. Dickson, an assistant to Edison, in 1889. The popular acceptance of…

  12. 77 FR 49857 - Early Scoping Notification for the Alternatives Analysis of the Tacoma Link Expansion in Tacoma, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-17

    ... requirements associated with the New Starts (``Section 5309'') funding program for certain kinds of major capital investments. While recent legislation may lead to changes in the New Starts process, Sound Transit... business and theater districts. Sound Move, the first phase of regional transit investments, was approved...

  13. 37 CFR 201.22 - Advance notices of potential infringement of works consisting of sounds, images, or both.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Advance notices of potential infringement of works consisting of sounds, images, or both. (a) Definitions... section 411(b) of title 17 of the United States Code, and in accordance with the provisions of this..., provided registration for the work is made within three months after its first transmission. (2) For...

  14. Differentiating Speech Sound Disorders from Phonological Dialect Differences: Implications for Assessment and Intervention

    ERIC Educational Resources Information Center

    Velleman, Shelley L.; Pearson, Barbara Zurer

    2010-01-01

    B. Z. Pearson, S. L. Velleman, T. J. Bryant, and T. Charko (2009) demonstrated phonological differences in typically developing children learning African American English as their first dialect vs. General American English only. Extending this research to children with speech sound disorders (SSD) has key implications for intervention. A total of…

  15. The Structural Connectivity Underpinning Language Aptitude, Working Memory, and IQ in the Perisylvian Language Network

    ERIC Educational Resources Information Center

    Xiang, Huadong; Dediu, Dan; Roberts, Leah; van Oort, Erik; Norris, David G.; Hagoort, Peter

    2012-01-01

    In this article, we report the results of a study on the relationship between individual differences in language learning aptitude and the structural connectivity of language pathways in the adult brain, the first of its kind. We measured four components of language aptitude ("vocabulary learning"; "sound recognition"; "sound-symbol…

  16. Idea Bank: The Native American Flute--A Possibility for Your Classroom

    ERIC Educational Resources Information Center

    Kacanek, Hal

    2011-01-01

    The sound of the Native American flute seems to convey care, sadness, loneliness, longing, heartfelt emotion, a sense of the natural world, wisdom, the human spirit, and a sense of culture. It is a sound that competes for attention, dramatically punctuating messages about First Nation peoples on television and in movies. A relatively small group…

  17. Video and Sound Production: Flip out! Game on!

    ERIC Educational Resources Information Center

    Hunt, Marc W.

    2013-01-01

    The author started teaching TV and sound production in a career and technical education (CTE) setting six years ago. The first couple months of teaching provided a steep learning curve for him. He is highly experienced in his industry, but teaching the content presented a new set of obstacles. His students had a broad range of abilities,…

  18. The Role of Xylitol Gum Chewing in Restoring Postoperative Bowel Activity After Cesarean Section.

    PubMed

    Lee, Jian Tao; Hsieh, Mei-Hui; Cheng, Po-Jen; Lin, Jr-Rung

    2016-03-01

    The goal of this study was to evaluate the effects of xylitol gum chewing on gastrointestinal recovery after cesarean section. Women who underwent cesarean section (N = 120) were randomly allocated into Group A (xylitol gum), Group B (nonxylitol gum), or the control group (no chewing gum). Every 2 hr post-cesarean section and until first flatus, Groups A and B received two pellets of chewing gum and were asked to chew for 15 min. The times to first bowel sounds, first flatus, and first defecation were then compared among the three groups. Group A had the shortest mean time to first bowel sounds (6.9 ± 1.7 hr), followed by Group B (8 ± 1.6 hr) and the control group (12.8 ± 2.5 hr; one-way analysis of variance, p < .001; Scheffe's post hoc comparisons, p < .05). The gum-chewing groups demonstrated a faster return of flatus than the control group did (p < .001), but the time to flatus did not differ significantly between the gum-chewing groups. Additionally, the differences in the time to first defecation were not significant. After cesarean section, chewing gum hastened the return of bowel activity, as measured by the appearance of bowel sounds and the passage of flatus. In this context, xylitol-containing gum may be superior to xylitol-free gum. © The Author(s) 2015.
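    The group comparison above rests on a one-way analysis of variance. As a rough sketch of how the F statistic behind such a comparison is computed (with hypothetical recovery times, not the study's data):

```python
# One-way ANOVA F statistic computed from scratch (hypothetical data,
# not the study's measurements).
from statistics import mean

def one_way_anova_f(groups):
    """Return the F statistic for a list of samples (one list per group)."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical times (hr) to first bowel sounds for three small groups
group_a = [6.0, 7.0, 7.5, 6.5]
group_b = [8.0, 8.5, 7.5, 8.2]
control = [12.0, 13.0, 12.5, 13.5]
f_stat = one_way_anova_f([group_a, group_b, control])
```

    A large F relative to the critical value of the F(k-1, n-k) distribution indicates that at least one group mean differs, after which post hoc tests such as Scheffé's locate the differing pairs.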

  19. Characterization of swallowing sounds with the use of sonar Doppler in full-term and preterm newborns.

    PubMed

    Lagos, Hellen Nataly Correia; Santos, Rosane Sampaio; Abdulmassih, Edna Marcia da Silva; Gallinea, Liliane Friedrich; Langone, Mariangela

    2013-10-01

    Introduction Technological advances have provided a large variety of instruments to view the swallowing event, aiding in the evaluation, diagnosis, and monitoring of disturbances. These advances include surface electromyography, dynamic video fluoroscopy, and, most recently, sonar Doppler. Objective To characterize swallowing sounds in typical children through the use of sonar Doppler. Method Thirty newborns participated in this prospective study. All newborns received breast milk through either their mother's breasts or bottles during data collection. The newborns were placed in either right lateral or left lateral positions when given breast milk through their mother's breasts and in a sitting position when given a bottle. Five variables were measured: initial frequency of the sound wave (FoI), frequency of the first peak of the sound wave (FoP1), frequency of the second peak of the sound wave (FoP2), initial and final intensity of the sound wave (II and IF), and swallowing length (T), the time elapsed from the beginning until the end of the analyzed acoustic signal as measured by the audio signal, in seconds. Results The initial frequency of the newborns had a mean value of 850 Hz. In terms of the frequency of the first peak, only three newborns presented a subtle peak, attributable to the elevated position of the larynx. Conclusion The use of sonar Doppler as a complementary exam for clinical evaluations is of utmost importance because it is nonintrusive and painless, and it does not require placing patients in a special room or exposing them to radiation.

  20. Characterization of Swallowing Sounds with the Use of Sonar Doppler in Full-Term and Preterm Newborns

    PubMed Central

    Lagos, Hellen Nataly Correia; Santos, Rosane Sampaio; Abdulmassih, Edna Marcia da Silva; Gallinea, Liliane Friedrich; Langone, Mariangela

    2013-01-01

    Introduction Technological advances have provided a large variety of instruments to view the swallowing event, aiding in the evaluation, diagnosis, and monitoring of disturbances. These advances include surface electromyography, dynamic video fluoroscopy, and, most recently, sonar Doppler. Objective To characterize swallowing sounds in typical children through the use of sonar Doppler. Method Thirty newborns participated in this prospective study. All newborns received breast milk through either their mother's breasts or bottles during data collection. The newborns were placed in either right lateral or left lateral positions when given breast milk through their mother's breasts and in a sitting position when given a bottle. Five variables were measured: initial frequency of the sound wave (FoI), frequency of the first peak of the sound wave (FoP1), frequency of the second peak of the sound wave (FoP2), initial and final intensity of the sound wave (II and IF), and swallowing length (T), the time elapsed from the beginning until the end of the analyzed acoustic signal as measured by the audio signal, in seconds. Results The initial frequency of the newborns had a mean value of 850 Hz. In terms of the frequency of the first peak, only three newborns presented a subtle peak, attributable to the elevated position of the larynx. Conclusion The use of sonar Doppler as a complementary exam for clinical evaluations is of utmost importance because it is nonintrusive and painless, and it does not require placing patients in a special room or exposing them to radiation. PMID:25992041

  1. Sound insulation and energy harvesting based on acoustic metamaterial plate

    NASA Astrophysics Data System (ADS)

    Assouar, Badreddine; Oudich, Mourad; Zhou, Xiaoming

    2015-03-01

    The emergence of artificially designed sub-wavelength acoustic materials, denoted acoustic metamaterials (AMM), has significantly broadened the range of material responses found in nature. These engineered materials can manipulate sound and vibration in surprising ways, including vibration/sound insulation, focusing, cloaking, and acoustic energy harvesting. In this work, we report both on the analysis of the airborne sound transmission loss (STL) through a thin metamaterial plate and on the possibility of acoustic energy harvesting. We first provide a theoretical study of the airborne STL and confront it with the structure-borne dispersion of a metamaterial plate. Second, we investigate the acoustic energy harvesting capability of the plate-type AMM. We have developed semi-analytical and numerical methods to investigate the STL performance of a plate-type AMM under airborne sound excitation at different incident angles. The AMM is made of silicone rubber stubs squarely arranged on a thin aluminum plate, and the STL is calculated in the low-frequency range (100 Hz to 3 kHz) for an incoming incident sound pressure wave. The analytical and numerical STL results are in very good agreement, confirming the reliability of the developed approaches. A comparison between the computed STL and the band structure of the considered AMM also shows an excellent agreement and gives a physical understanding of the observed behavior. On the other hand, acoustic energy confinement in an AMM with defects of suitable geometry was investigated. The first results give a general view for assessing the acoustic energy harvesting performance of AMM.
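    For context on the STL figures such a study reports, a conventional baseline for a uniform (non-metamaterial) panel is the field-incidence mass law, STL ≈ 20 log10(f·m) − 47 dB, with frequency f in Hz and surface density m in kg/m². A minimal sketch (the plate parameters are illustrative):

```python
import math

def mass_law_stl(frequency_hz, surface_density_kg_m2):
    """Field-incidence mass-law transmission loss (dB) for a limp panel.

    STL ~= 20*log10(f * m) - 47 is a common engineering approximation
    for a panel in air; real plates (and metamaterial plates) deviate
    near resonances and the coincidence frequency.
    """
    return 20 * math.log10(frequency_hz * surface_density_kg_m2) - 47

# A 2 mm aluminium plate (~2700 kg/m^3) has m ~ 5.4 kg/m^2
stl_1khz = mass_law_stl(1000, 5.4)   # ~27.6 dB
```

    The appeal of plate-type AMM is precisely that locally resonant stubs can push STL well above this mass-law baseline in targeted low-frequency bands.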

  2. The Extraordinary Nature of Barney's Drumming: A Complementary Study of Ordinary Noise Making in Chimpanzees.

    PubMed

    Dufour, Valérie; Pasquaretta, Cristian; Gayet, Pierre; Sterck, Elisabeth H M

    2017-01-01

    In a previous study (Dufour et al., 2015) we reported the unusual characteristics of the drumming performance of a chimpanzee named Barney. His sound production, several sequences of repeated drumming on an up-turned plastic barrel, shared features typical of human musical drumming: it was rhythmical, decontextualized, and well controlled by the chimpanzee. This type of performance raises questions about the origins of our musicality. Here we recorded spontaneously occurring events of sound production with objects in Barney's colony. First, we collected data on the duration of sound making, examining whether (i) the context in which objects were used for sound production, (ii) the sex of the producer, (iii) the medium, and (iv) the technique used for sound production had any effect on the duration of sound making. Interestingly, duration of drumming differed across contexts, sexes, and techniques. We then filmed as many events as possible to increase our chances of recording sequences that would be musically similar to Barney's performance in the original study. We filmed several long productions that were rhythmically interesting; however, none fully met the criteria of musical sound production previously reported for Barney.

  3. Air-borne and tissue-borne sensitivities of bioacoustic sensors used on the skin surface.

    PubMed

    Zañartu, Matías; Ho, Julio C; Kraman, Steve S; Pasterkamp, Hans; Huber, Jessica E; Wodicka, George R

    2009-02-01

    Measurements of body sounds on the skin surface have been widely used in the medical field and continue to be a topic of current research, ranging from the diagnosis of respiratory and cardiovascular diseases to the monitoring of voice dosimetry. These measurements are typically made using light-weight accelerometers and/or air-coupled microphones attached to the skin. Although normally neglected, air-borne sounds generated by the subject or other sources of background noise can easily corrupt such recordings, which is particularly critical in the recording of voiced sounds on the skin surface. In this study, the sensitivity of commonly used bioacoustic sensors to air-borne sounds was evaluated and compared with their sensitivity to tissue-borne body sounds. To delineate the sensitivity to each pathway, the sensors were first tested in vitro and then on human subjects. The results indicated that, in general, the air-borne sensitivity is sufficiently high to significantly corrupt body sound signals. In addition, the air-borne and tissue-borne sensitivities can be used to discriminate between these components. Although the study is focused on the evaluation of voiced sounds on the skin surface, an extension of the proposed methods to other bioacoustic applications is discussed.

  4. Interdependent effects of sound duration and amplitude on neuronal onset response in mice inferior colliculus.

    PubMed

    Wang, Ningqian; Wang, Xiao; Yang, Xiaoli; Tang, Jie; Xiao, Zhongju

    2014-01-16

    In this study, we used iso-frequency pure tone bursts to investigate the interdependent effects of sound amplitude/intensity and duration on onset responses of mouse inferior colliculus (IC) neurons. In the majority of the sampled neurons (n=57, 89.1%), sound amplitude and duration each affected the neuronal response to the other, producing complex changes in rate-intensity function/duration selectivity types and/or best amplitudes (BAs)/best durations (BDs), as evaluated by spike counts. These results suggest that the balance between the excitatory and inhibitory inputs set by one acoustic parameter, amplitude or duration, affected the neuronal spike-count responses to the other. Neuronal duration selectivity types were easily altered by low-amplitude sounds, while changes in rate-intensity function types showed no obvious preference for particular stimulus durations. In contrast, the first spike latencies (FSLs) of the onset-response neurons were relatively stable across iso-amplitude sound durations and changed systematically with sound level. The superimposition of FSL and duration threshold (DT) as functions of stimulus amplitude after normalization indicates that the effects of sound level on FSL are in fact reflected in DT. © 2013 Published by Elsevier B.V.

  5. The Extraordinary Nature of Barney's Drumming: A Complementary Study of Ordinary Noise Making in Chimpanzees

    PubMed Central

    Dufour, Valérie; Pasquaretta, Cristian; Gayet, Pierre; Sterck, Elisabeth H. M.

    2017-01-01

    In a previous study (Dufour et al., 2015) we reported the unusual characteristics of the drumming performance of a chimpanzee named Barney. His sound production, several sequences of repeated drumming on an up-turned plastic barrel, shared features typical of human musical drumming: it was rhythmical, decontextualized, and well controlled by the chimpanzee. This type of performance raises questions about the origins of our musicality. Here we recorded spontaneously occurring events of sound production with objects in Barney's colony. First, we collected data on the duration of sound making, examining whether (i) the context in which objects were used for sound production, (ii) the sex of the producer, (iii) the medium, and (iv) the technique used for sound production had any effect on the duration of sound making. Interestingly, duration of drumming differed across contexts, sexes, and techniques. We then filmed as many events as possible to increase our chances of recording sequences that would be musically similar to Barney's performance in the original study. We filmed several long productions that were rhythmically interesting; however, none fully met the criteria of musical sound production previously reported for Barney. PMID:28154521

  6. Active noise control using a steerable parametric array loudspeaker.

    PubMed

    Tanaka, Nobuo; Tanaka, Motoki

    2010-06-01

    Active noise control enables sound suppression at designated control points, but the sound pressure away from the targeted locations is likely to increase. The reason is clear: a control source normally radiates sound omnidirectionally. To cope with this problem, this paper introduces a parametric array loudspeaker (PAL), which produces a spatially focused sound beam thanks to the ultrasound used for the carrier waves, thereby allowing one to suppress the sound pressure at a designated point without causing spillover in the whole sound field. First, the fundamental characteristics of the PAL are reviewed. The scattered pressure in the near field contributed by the source strength of the PAL, which is needed for the design of an active noise control system, is then described. Furthermore, the optimal control law for minimizing the sound pressure at the control points is derived, and the control effect is investigated analytically and experimentally. With a view to tracking a moving target point, a steerable PAL based upon a phased-array scheme is presented, making possible the generation of a moving zone of quiet without mechanically rotating the PAL. An experiment is finally conducted, demonstrating the validity of the proposed method.

  7. Scanning silence: mental imagery of complex sounds.

    PubMed

    Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz

    2005-07-15

    In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that contained neither language nor music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the problem of stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and the planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that contain neither language nor music rely on overlapping neural correlates of the secondary but not the primary auditory cortex.

  8. Personal sound zone reproduction with room reflections

    NASA Astrophysics Data System (ADS)

    Olik, Marek

    Loudspeaker-based sound systems, capable of a convincing reproduction of different audio streams to listeners in the same acoustic enclosure, are a convenient alternative to headphones. Such systems aim to generate "sound zones" in which target sound programmes are to be reproduced with minimum interference from any alternative programmes. This can be achieved with appropriate filtering of the source (loudspeaker) signals, so that the target sound's energy is directed to the chosen zone while being attenuated elsewhere. The existing methods are unable to produce the required sound energy ratio (acoustic contrast) between the zones with a small number of sources when strong room reflections are present. Optimization of parameters is therefore required for systems with practical limitations to improve their performance in reflective acoustic environments. One important parameter is positioning of sources with respect to the zones and room boundaries. The first contribution of this thesis is a comparison of the key sound zoning methods implemented on compact and distributed geometrical source arrangements. The study presents previously unpublished detailed evaluation and ranking of such arrangements for systems with a limited number of sources in a reflective acoustic environment similar to a domestic room. Motivated by the requirement to investigate the relationship between source positioning and performance in detail, the central contribution of this thesis is a study on optimizing source arrangements when strong individual room reflections occur. Small sound zone systems are studied analytically and numerically to reveal relationships between the geometry of source arrays and performance in terms of acoustic contrast and array effort (related to system efficiency). Three novel source position optimization techniques are proposed to increase the contrast, and geometrical means of reducing the effort are determined. 
Contrary to previously published case studies, this work presents a systematic examination of the key problem of first order reflections and proposes general optimization techniques, thus forming an important contribution. The remaining contribution considers evaluation and comparison of the proposed techniques with two alternative approaches to sound zone generation under reflective conditions: acoustic contrast control (ACC) combined with anechoic source optimization and sound power minimization (SPM). The study provides a ranking of the examined approaches which could serve as a guideline for method selection for rooms with strong individual reflections.
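    The acoustic contrast that such sound zone systems optimize is the ratio of spatially averaged squared pressure between the bright and dark zones. A minimal sketch of the metric, with hypothetical zone pressures:

```python
import math

def acoustic_contrast_db(p_bright, p_dark):
    """Acoustic contrast in dB: ratio of the spatially averaged squared
    pressure magnitude in the bright zone to that in the dark zone."""
    def mean_sq(zone):
        return sum(abs(p) ** 2 for p in zone) / len(zone)
    return 10 * math.log10(mean_sq(p_bright) / mean_sq(p_dark))

# Hypothetical complex pressures sampled at points in each zone
bright = [1.0 + 0.2j, 0.9 - 0.1j, 1.1 + 0.0j]
dark = [0.05 + 0.02j, 0.04 - 0.03j, 0.06 + 0.01j]
contrast = acoustic_contrast_db(bright, dark)   # ~25 dB
```

    Source-position optimization then amounts to searching over array geometries for the source weights that maximize this quantity while keeping the array effort (total squared source strength) acceptable.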

  9. Changes in room acoustics elicit a Mismatch Negativity in the absence of overall interaural intensity differences.

    PubMed

    Frey, Johannes Daniel; Wendt, Mike; Löw, Andreas; Möller, Stephan; Zölzer, Udo; Jacobsen, Thomas

    2017-02-15

    Changes in room acoustics provide important clues about the environment of sound source-perceiver systems, for example, by indicating changes in the reflecting characteristics of surrounding objects. To study the detection of auditory irregularities brought about by a change in room acoustics, a passive oddball protocol with participants watching a movie was applied in this study. Acoustic stimuli were presented via headphones. Standards and deviants were created by modelling rooms of different sizes, keeping the values of the basic acoustic dimensions (e.g., frequency, duration, sound pressure, and sound source location) as constant as possible. In the first experiment, each standard and deviant stimulus consisted of sequences of three short sounds derived from sinusoidal tones, resulting in three onsets during each stimulus. Deviant stimuli elicited a Mismatch Negativity (MMN) as well as two additional negative deflections corresponding to the three onset peaks. In the second experiment, only one sound was used; the stimuli were otherwise identical to the ones used in the first experiment. Again, an MMN was observed, followed by an additional negative deflection. These results provide further support for the hypothesis of automatic detection of unattended changes in room acoustics, extending previous work by demonstrating the elicitation of an MMN by changes in room acoustics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. ARTICULATION OF SPEECH SOUNDS OF SERBIAN LANGUAGE IN CHILDREN AGED SIX TO EIGHT.

    PubMed

    Mihajlović, Biljana; Cvjetićanin, Bojana; Veselinović, Mila; Škrbić, Renata; Mitrović, Slobodan M

    2015-01-01

    The phonetic and phonological system of healthy members of a linguistic community is fully formed by around 8 years of age. Auditory and articulatory habits become established with age and tend to be more difficult to upgrade and complete later. The research was done as a cross-sectional study, conducted at the preschool institution "Radosno detinjstvo" and the primary school "Branko Radičević" in Novi Sad. It included 66 children of both genders, aged 6 to 8. The quality of articulation was determined according to the Global Articulation Test by working with each child individually. For each vowel, plosive, nasal, lateral, and fricative, the quality of articulation was statistically significantly better in the first graders compared to the preschool children (p<0.01). For each affricate, except for the sound /ć/, the quality of articulation was statistically significantly better in the first graders than in the preschool children (p<0.01). The quality of articulation of all speech sounds was statistically significantly better in the first graders than in the preschool children (p<0.01). The most common disorder of articulation is distortion, while substitution and substitution combined with distortion are less common. Omission does not occur in children from 6 to 8 years of age. Girls have slightly better quality of articulation. Articulatory disorders are more common in preschool children than in children in the first grade of primary school. The most commonly mispronounced sounds belong to the groups of affricates and fricatives.

  11. Getting the GeoSTAR Instrument Concept Ready for a Space Mission

    NASA Technical Reports Server (NTRS)

    Lambrigtsen, B.; Gaier, T.; Kangaslahti, P.; Lim, B.; Tanner, A.; Ruf, C.

    2011-01-01

    The Geostationary Synthetic Thinned Array Radiometer - GeoSTAR - is a microwave sounder intended for geostationary satellites. First proposed for the EO-3 New Millennium mission in 1999, the technology has since been developed under the Instrument Incubator Program. Under IIP-03 a proof-of-concept demonstrator operating in the temperature sounding 50 GHz band was developed to show that the aperture synthesis concept results in a realizable, stable and accurate imaging-sounding radiometer. Some of the most challenging technology, such as miniature low-power 183-GHz receivers used for water vapor sounding, was developed under IIP-07. The first such receiver has recently been adapted for use in the High Altitude MMIC Sounding Radiometer (HAMSR), which was previously developed under IIP-98. This receiver represents a new state of the art and outperforms the previous benchmark by an order of magnitude in radiometric sensitivity. It was first used in the GRIP hurricane field campaign in 2010, where HAMSR became the first microwave sounder to fly on the Global Hawk UAV. Now, under IIP-10, we will develop flight-like subsystems and a brassboard testing system, which will facilitate rapid implementation of a space mission. GeoSTAR is the baseline payload for the Precipitation and All-weather Temperature and Humidity (PATH) mission - one of NASA's 15 "decadal-survey" missions. Although PATH is currently in the third tier of those missions, the IIP efforts have advanced the required technology to a point where a space mission can be initiated in a time frame commensurate with second-tier missions. An even earlier Venture mission is also being considered.

  12. Separating acoustic deviance from novelty during the first year of life: a review of event-related potential evidence

    PubMed Central

    Kushnerenko, Elena V.; Van den Bergh, Bea R. H.; Winkler, István

    2013-01-01

    Orienting to salient events in the environment is a first step in the development of attention in young infants. Electrophysiological studies have indicated that in newborns and young infants, sounds with widely distributed spectral energy, such as noise and various environmental sounds, as well as sounds widely deviating from their context elicit an event-related potential (ERP) similar to the adult P3a response. We discuss how the maturation of event-related potentials parallels the process of the development of passive auditory attention during the first year of life. Behavioral studies have indicated that the neonatal orientation to high-energy stimuli gradually changes to attending to genuine novelty and other significant events by approximately 9 months of age. In accordance with these changes, in newborns, the ERP response to large acoustic deviance is dramatically larger than that to small and moderate deviations. This ERP difference, however, rapidly decreases within the first months of life, and the differentiation of the ERP response to genuine novelty from that to spectrally rich but repeatedly presented sounds commences during the same period. The relative decrease of the response amplitudes elicited by high-energy stimuli may reflect the development of an inhibitory brain network suppressing the processing of uninformative stimuli. Based on data obtained from healthy full-term and pre-term infants as well as from infants at risk for various developmental problems, we suggest that the electrophysiological indices of the processing of acoustic and contextual deviance may be indicative of the functioning of auditory attention, a crucial prerequisite of learning and language development. PMID:24046757

  13. Theory of acoustic design of opera house and a design proposal

    NASA Astrophysics Data System (ADS)

    Ando, Yoichi

    2004-05-01

    First, the theory of subjective preference for sound fields, based on a model of the auditory-brain system, is briefly described. It consists of temporal factors and spatial factors, associated with the left and right cerebral hemispheres, respectively. The temporal criteria are the initial time delay gap between the direct sound and the first reflection (Δt1) and the subsequent reverberation time (Tsub). Their preferred conditions are related to the minimum value of the effective duration of the running autocorrelation function of the source signals, (τe)min. The spatial criteria are the binaural listening level (LL) and the IACC, which may be extracted from the interaural crosscorrelation function. In an opera house, there are two different kinds of sound sources: the vocal source on the stage, with relatively short values of (τe)min, and the orchestra in the pit, with long values of (τe)min. For these sources, a design proposal is made here.
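    The effective duration τe used in this theory is commonly taken as the delay at which the envelope of the normalized running autocorrelation function decays to 0.1. A sketch of that computation on a synthetic decaying tone (illustrative parameters, not Ando's measurement procedure):

```python
import numpy as np

# Effective duration (tau_e) of a normalized autocorrelation function:
# the delay at which its envelope has decayed to 0.1.
# Synthetic example: an exponentially decaying 440 Hz tone.
fs = 8000
t = np.arange(int(fs * 1.0)) / fs
signal = np.exp(-t / 0.05) * np.sin(2 * np.pi * 440 * t)

acf = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
acf = acf / acf[0]                       # normalize so acf[0] == 1

# Envelope via a reverse running maximum: the first lag after which
# |acf| never again exceeds 0.1 marks tau_e.
envelope = np.maximum.accumulate(np.abs(acf)[::-1])[::-1]
tau_e = np.argmax(envelope < 0.1) / fs   # seconds
```

    For the 50 ms decay constant used here, tau_e comes out near 0.05*ln(10) ≈ 0.115 s; speech-like signals give short τe and legato orchestral music long τe, which is what drives the different preferred reverberation for stage and pit sources.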

  14. Method for noninvasive determination of acoustic properties of fluids inside pipes

    DOEpatents

    None

    2016-08-02

    A method for determining the composition of fluids flowing through pipes from noninvasive measurements of acoustic properties of the fluid is described. The method includes exciting a first transducer located on the external surface of the pipe through which the fluid under investigation is flowing, to generate an ultrasound chirp signal, as opposed to conventional pulses. The chirp signal is received by a second transducer disposed on the external surface of the pipe opposing the location of the first transducer, from which the transit time through the fluid is determined and the sound speed of the ultrasound in the fluid is calculated. The composition of a fluid is calculated from the sound speed therein. The fluid density may also be derived from measurements of sound attenuation. Several signal processing approaches are described for extracting the transit time information from the data with the effects of the pipe wall having been subtracted.
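    The transit-time extraction this method relies on can be illustrated by cross-correlating the received chirp with the transmitted one. A minimal sketch with hypothetical pipe geometry and an idealized noiseless channel (the patent's actual signal processing is more elaborate):

```python
import numpy as np

# Estimate sound speed in a fluid from a chirp's transit time across a
# pipe, via cross-correlation (hypothetical geometry; a sketch of the
# idea, not the patented processing chain).
fs = 1_000_000                 # 1 MHz sampling
duration = 0.001               # 1 ms chirp
t = np.arange(int(fs * duration)) / fs
f0, f1 = 100e3, 400e3          # chirp sweeps 100 -> 400 kHz
rate = (f1 - f0) / duration
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * rate * t ** 2))

inner_diameter = 0.1           # m, acoustic path across the pipe
true_delay = 68                # samples, i.e. a 68 us transit time
received = np.zeros(len(chirp) + 200)
received[true_delay:true_delay + len(chirp)] = chirp

# Transit time = lag that maximizes the cross-correlation
corr = np.correlate(received, chirp, mode="valid")
delay_samples = int(np.argmax(corr))
sound_speed = inner_diameter / (delay_samples / fs)   # ~1470 m/s
```

    The sharp autocorrelation peak of a chirp is what makes this more robust than timing a conventional pulse through an attenuating fluid and pipe wall.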

  15. Common humpback whale (Megaptera novaeangliae) sound types for passive acoustic monitoring.

    PubMed

    Stimpert, Alison K; Au, Whitlow W L; Parks, Susan E; Hurst, Thomas; Wiley, David N

    2011-01-01

    Humpback whales (Megaptera novaeangliae) are one of several baleen whale species in the Northwest Atlantic that coexist with vessel traffic and anthropogenic noise. Passive acoustic monitoring strategies can be used in conservation management, but the first step toward understanding the acoustic behavior of a species is a good description of its acoustic repertoire. Digital acoustic tags (DTAGs) were placed on humpback whales in the Stellwagen Bank National Marine Sanctuary to record and describe the non-song sounds produced in conjunction with foraging activities. Peak frequencies of sounds were generally less than 1 kHz, but ranged as high as 6 kHz, and sounds were generally less than 1 s in duration. Cluster analysis distilled the dataset into eight groups of sounds with similar acoustic properties. The two most stereotyped and distinctive types ("wops" and "grunts") were also identified aurally as candidates for use in passive acoustic monitoring. This identification of two of the most common sound types will be useful for advancing conservation efforts on this Northwest Atlantic feeding ground.

  16. Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.

    PubMed

    Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin

    2018-04-25

    Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate a ball motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using a 2D microelectromechanical system (MEMS) microphone array and delay-and-sum beamforming is presented to estimate the firing position. The time and position of a ball in 3D space are determined from a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
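
The delay-and-sum localization step can be illustrated for the narrowband case, where the element delays reduce to phase shifts. Everything below (uniform linear geometry, element count, spacing, tone frequency, source angle) is an assumed toy setup, not the authors' configuration:

```python
import cmath
import math

C = 343.0     # speed of sound in air, m/s
F = 1000.0    # tone frequency, Hz (assumed)
M = 4         # number of MEMS microphones (assumed)
D = 0.05      # spacing of a uniform linear array, m (assumed)

def mic_phasors(theta_deg):
    """Complex tone amplitude at each mic for a plane wave from theta_deg."""
    tau = D * math.sin(math.radians(theta_deg)) / C   # per-element delay
    return [cmath.exp(-2j * math.pi * F * m * tau) for m in range(M)]

def steered_power(phasors, theta_deg):
    """Narrowband delay-and-sum: advance each element by its steering
    delay (a pure phase shift for a tone) and sum coherently."""
    tau = D * math.sin(math.radians(theta_deg)) / C
    return abs(sum(p * cmath.exp(2j * math.pi * F * m * tau)
                   for m, p in enumerate(phasors)))

obs = mic_phasors(30.0)                                # source truly at +30 degrees
est = max(range(-90, 91, 5), key=lambda th: steered_power(obs, th))
```

When the steering angle matches the source angle, all element phasors align and the output power peaks; scanning a grid of candidate angles and taking the maximum recovers the firing direction.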

  17. Photoacoustics and speed-of-sound dual mode imaging with a long depth-of-field by using annular ultrasound array.

    PubMed

    Ding, Qiuning; Tao, Chao; Liu, Xiaojun

    2017-03-20

    Speed-of-sound and optical absorption reflect the structure and function of tissues from different aspects. A dual-mode microscopy system based on a concentric annular ultrasound array is proposed to simultaneously acquire long depth-of-field images of the speed-of-sound and optical absorption of inhomogeneous samples. First, the speed-of-sound is decoded from the signal delay between the elements of the annular array. The measured speed-of-sound not only serves as an image contrast but also improves the resolution and the accuracy of spatial localization of the photoacoustic image in inhomogeneous acoustic media. Second, benefitting from the dynamic focusing of the annular array and the measured speed-of-sound, an advanced acoustic-resolution photoacoustic microscopy with precise positioning and a long depth-of-field is achieved. The performance of the dual-mode imaging system has been experimentally examined using a custom-made annular array. The proposed dual-mode microscopy may prove significant for monitoring physiological and pathological processes in biological tissue.
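
The idea of decoding speed-of-sound from per-element delays can be sketched as a least-squares fit: an annular element at radius r sees a path sqrt(z^2 + r^2) to an on-axis photoacoustic source at depth z, so arrival time scales as path / c. The geometry below is hypothetical, purely to show the fit:

```python
import math

DEPTH = 0.03                                  # source depth on axis, m (assumed)
RADII = [0.002 * k for k in range(1, 9)]      # radii of 8 annular elements, m (assumed)
C_TRUE = 1540.0                               # tissue-like speed of sound, m/s

dists = [math.sqrt(DEPTH ** 2 + r ** 2) for r in RADII]
times = [d / C_TRUE for d in dists]           # "measured" arrival times per element

# least-squares slope through the origin for d = c * t
c_est = sum(d * t for d, t in zip(dists, times)) / sum(t * t for t in times)
```

Once c is known, each element's signal can be re-delayed consistently, which is what allows the dynamic-focusing step described above to place the photoacoustic source accurately.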

  18. Video indexing based on image and sound

    NASA Astrophysics Data System (ADS)

    Faudemay, Pascal; Montacie, Claude; Caraty, Marie-Jose

    1997-10-01

    Video indexing is a major challenge for both scientific and economic reasons. Information extraction can sometimes be easier from the sound channel than from the image channel. We first present a multi-channel and multi-modal query interface, to query sound, image and script through 'pull' and 'push' queries. We then summarize the segmentation phase, which needs information from the image channel. Detection of critical segments is proposed; it should speed up both automatic and manual indexing. We then present an overview of the information extraction phase. Information can be extracted from the sound channel through speaker recognition, vocal dictation with unconstrained vocabularies, and script alignment with speech. We present experimental results for these various techniques. Speaker recognition methods were tested on the TIMIT and NTIMIT databases. Vocal dictation was tested on newspaper sentences spoken by several speakers. Script alignment was tested on part of a cartoon movie, 'Ivanhoe'. For good quality sound segments, error rates are low enough for use in indexing applications. Major issues are the processing of sound segments with noise or music, and performance improvement through the use of appropriate, low-cost architectures or networks of workstations.

  19. Basic experimental study of the coupling between flow instabilities and incident sound

    NASA Astrophysics Data System (ADS)

    Ahuja, K. K.

    1984-03-01

    Whether a solid trailing edge is required to produce efficient coupling between sound and instability waves in a shear layer was investigated. Differences found in the literature regarding theoretical notions of receptivity are discussed, along with the need to resolve them through well-planned experiments. Instability waves in the shear layer of a subsonic jet, excited by a point sound source located external to the jet, were first visualized using an ensemble averaging technique. Various means were adopted to shield the sound reaching the nozzle lip. It was found that low frequency sound couples more efficiently at distances downstream of the nozzle. To substantiate the findings further, a supersonic screeching jet was tested such that it passed through a small opening in a baffle placed parallel to the exit plane. The measured feedback or screech frequencies and also the excited flow disturbances changed drastically on traversing the baffle axially, providing a strong indication that a trailing edge is not necessary for efficient coupling between sound and flow.

  20. Basic experimental study of the coupling between flow instabilities and incident sound

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.

    1984-01-01

    Whether a solid trailing edge is required to produce efficient coupling between sound and instability waves in a shear layer was investigated. Differences found in the literature regarding theoretical notions of receptivity are discussed, along with the need to resolve them through well-planned experiments. Instability waves in the shear layer of a subsonic jet, excited by a point sound source located external to the jet, were first visualized using an ensemble averaging technique. Various means were adopted to shield the sound reaching the nozzle lip. It was found that low frequency sound couples more efficiently at distances downstream of the nozzle. To substantiate the findings further, a supersonic screeching jet was tested such that it passed through a small opening in a baffle placed parallel to the exit plane. The measured feedback or screech frequencies and also the excited flow disturbances changed drastically on traversing the baffle axially, providing a strong indication that a trailing edge is not necessary for efficient coupling between sound and flow.

  1. Development of a Hydrodynamic Model of Puget Sound and Northwest Straits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Khangaonkar, Tarang P.

    2007-12-10

    The hydrodynamic model used in this study is the Finite Volume Coastal Ocean Model (FVCOM) developed by the University of Massachusetts at Dartmouth. The unstructured grid and finite volume framework, as well as the capability of wetting/drying simulation and baroclinic simulation, make FVCOM a good fit for the modeling needs of nearshore restoration in Puget Sound. The model domain covers the entire Puget Sound, Strait of Juan de Fuca, San Juan Passages, and Georgia Strait at the United States-Canada border. The model is driven by tide, freshwater discharge, and surface wind. Preliminary model validation was conducted for tides at various locations in the straits and Puget Sound using National Oceanic and Atmospheric Administration (NOAA) tide data. The hydrodynamic model was successfully linked to the NOAA oil spill model, the General NOAA Operational Modeling Environment (GNOME), to predict particle trajectories at various locations in Puget Sound. Model results demonstrated that the Puget Sound GNOME model is a useful tool for obtaining first-hand information for emergency response, such as oil spills and fish migration pathways.

  2. Making Ultraviolet Spectro-Polarimetry Polarization Measurements with the MSFC Solar Ultraviolet Magnetograph Sounding Rocket

    NASA Technical Reports Server (NTRS)

    West, Edward; Cirtain, Jonathan; Kobayashi, Ken; Davis, John; Gary, Allen

    2011-01-01

    This paper describes the Marshall Space Flight Center's Solar Ultraviolet Magnetograph Investigation (SUMI) sounding rocket program, concentrating on SUMI's VUV optics and their spectral, spatial and polarization characteristics. While SUMI's first flight (7/30/2010) met all of its mission success criteria, there are several areas that will be improved for its second and third flights. This paper emphasizes the MgII linear polarization measurements and describes the changes that will be made to the sounding rocket and how those changes will improve the scientific data acquired by SUMI.

  3. Radar soundings of the ionosphere of Mars.

    PubMed

    Gurnett, D A; Kirchner, D L; Huff, R L; Morgan, D D; Persoon, A M; Averkamp, T F; Duru, F; Nielsen, E; Safaeinili, A; Plaut, J J; Picardi, G

    2005-12-23

    We report the first radar soundings of the ionosphere of Mars with the MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) instrument on board the orbiting Mars Express spacecraft. Several types of ionospheric echoes are observed, ranging from vertical echoes caused by specular reflection from the horizontally stratified ionosphere to a wide variety of oblique and diffuse echoes. The oblique echoes are believed to arise mainly from ionospheric structures associated with the complex crustal magnetic fields of Mars. Echoes at the electron plasma frequency and the cyclotron period also provide measurements of the local electron density and magnetic field strength.
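
The last sentence rests on two standard relations that can be written out: the electron plasma frequency gives the local density via f_p[Hz] ≈ 8980·sqrt(n_e[cm^-3]), and the electron cyclotron period gives the field strength via B = 2π·m_e/(e·T_c). A small sketch (the example inputs are illustrative, not MARSIS data):

```python
import math

def electron_density_cm3(fp_hz):
    """Electron density from the plasma frequency, f_p ~ 8980*sqrt(n_e)."""
    return (fp_hz / 8980.0) ** 2

def magnetic_field_nt(tc_s):
    """Field strength in nT from the electron cyclotron period T_c,
    B = 2*pi*m_e / (e * T_c)."""
    m_e, e = 9.109e-31, 1.602e-19
    return 2 * math.pi * m_e / (e * tc_s) * 1e9

n_e = electron_density_cm3(4.0e6)    # a 4 MHz plasma-frequency echo
b = magnetic_field_nt(3.57e-4)       # a 357 microsecond cyclotron period
```

These are the conversions that turn the locally observed echo frequency and period into the density and field-strength measurements mentioned in the abstract.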

  4. Humpback whale bioacoustics: From form to function

    NASA Astrophysics Data System (ADS)

    Mercado, Eduardo, III

    This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. 
New quantitative techniques are also presented that allow for more objective characterizations of the long term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily used as communicative signals, might be primarily used as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.

  5. Development and use of a spherical microphone array for measurement of spatial properties of reverberant sound fields

    NASA Astrophysics Data System (ADS)

    Gover, Bradford Noel

    The problem of hands-free speech pick-up is introduced, and it is identified how details of the spatial properties of the reverberant field may be useful for enhanced design of microphone arrays. From this motivation, a broadly-applicable measurement system has been developed for the analysis of the directional and spatial variations in reverberant sound fields. Two spherical, 32-element arrays of microphones are used to generate narrow beams over two different frequency ranges, together covering 300--3300 Hz. Using an omnidirectional loudspeaker as excitation in a room, the pressure impulse response in each of 60 steering directions is measured. Through analysis of these responses, the variation of arriving energy with direction is studied. The system was first validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively through numerical descriptors and qualitatively from plots of the arriving energy versus direction. The system was then used to measure the sound fields in several actual rooms. Through both qualitative and quantitative output, these sound fields were seen to be highly anisotropic, influenced greatly by the direct sound and early-arriving reflections. Furthermore, the rate of sound decay was not independent of direction, sound being absorbed more rapidly in some directions than in others. These results are discussed in the context of the original motivation, and methods for their application to enhanced speech pick-up using microphone arrays are proposed.
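
The per-direction decay analysis can be sketched with standard Schroeder backward integration: square the directional impulse response, integrate the energy from the tail, and fit the slope of the resulting dB curve. Synthetic exponential decays stand in for measured responses; the sample rate and decay times are assumed:

```python
import math

FS = 8000   # sample rate, Hz (assumed)

def decay_time_s(ir, db_drop=30.0):
    """Decay time of one steering direction via Schroeder backward
    integration: time to fall from -5 dB to -(5 + db_drop) dB,
    rescaled to the conventional 60 dB decay."""
    energy = [v * v for v in ir]
    total = sum(energy)
    run, curve = 0.0, []
    for e in reversed(energy):          # backward (Schroeder) integral
        run += e
        curve.append(10 * math.log10(run / total))
    curve.reverse()                     # curve[0] = 0 dB, monotone decreasing
    t5 = next(i for i, d in enumerate(curve) if d <= -5.0)
    t_end = next(i for i, d in enumerate(curve) if d <= -5.0 - db_drop)
    return (t_end - t5) / FS * (60.0 / db_drop)

# two synthetic directions; 6.9 ~ ln(1000), so the amplitude falls
# 60 dB over the nominal decay time (0.3 s and 0.6 s respectively)
ir_a = [math.exp(-6.9 * i / (0.3 * FS)) for i in range(FS)]
ir_b = [math.exp(-6.9 * i / (0.6 * FS)) for i in range(FS)]
```

Applying this to the response in each of the 60 steering directions yields a direction-dependent decay rate, which is how the anisotropic absorption described above can be quantified.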

  6. Applying cybernetic technology to diagnose human pulmonary sounds.

    PubMed

    Chen, Mei-Yung; Chou, Cheng-Han

    2014-06-01

    Chest auscultation is a crucial and efficient method for diagnosing lung disease; however, it is a subjective process that relies on physician experience and the ability to differentiate between various sound patterns. Because the physiological signals composed of heart sounds and pulmonary sounds (PSs) lie largely below 120 Hz, and the human ear is not sensitive to such low frequencies, successful diagnostic classification is difficult. To solve this problem, we constructed various PS recognition systems for classifying six PS classes: vesicular breath sounds, bronchial breath sounds, tracheal breath sounds, crackles, wheezes, and stridor sounds. First, we used a piezoelectric microphone and a data acquisition card to acquire PS signals and perform signal preprocessing. A wavelet transform was used for feature extraction, decomposing the PS signals into frequency subbands. Using a statistical method, we extracted 17 features that served as the input vectors of a neural network. We propose a two-stage classifier combining a back-propagation (BP) neural network and a learning vector quantization (LVQ) neural network, which improves classification accuracy compared with a single-stage neural network. The receiver operating characteristic (ROC) curve verifies the high performance of the neural network. To extend traditional auscultation methods, we constructed various PS diagnostic systems that can correctly classify the six common PSs. The proposed device overcomes the lack of human sensitivity to low-frequency sounds, and various PS waveforms, characteristic values, and spectral analysis charts are provided to elucidate the design of the human-machine interface.
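
The feature-extraction stage (wavelet decomposition into subbands, then a statistic per subband) can be sketched with a plain Haar transform; the level count and the energy statistic below are placeholders for the 17 features the authors actually used:

```python
import math

def haar_step(x):
    """One Haar DWT level: returns (approximation, detail) coefficients."""
    a = [(x[2*i] + x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def subband_energies(x, levels=3):
    """Energy in each detail subband plus the final approximation;
    per-subband statistics like these can feed a classifier."""
    feats = []
    for _ in range(levels):
        x, d = haar_step(x)
        feats.append(sum(v * v for v in d))
    feats.append(sum(v * v for v in x))
    return feats

# a low-frequency test tone concentrates its energy in the coarse band,
# as a low-pitched breath sound would
sig = [math.sin(2 * math.pi * 5 * i / 256) for i in range(256)]
feats = subband_energies(sig)
```

Because the Haar transform is orthonormal, the subband energies sum to the signal energy, so the feature vector is a true partition of the signal's power across frequency bands.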

  7. Where is the level of neutral buoyancy for deep convection?

    NASA Astrophysics Data System (ADS)

    Takahashi, Hanii; Luo, Zhengzhao

    2012-08-01

    This study revisits an old concept in meteorology - the level of neutral buoyancy (LNB). The classic definition of the LNB is derived from parcel theory and can be estimated from the ambient sounding (LNB_sounding) without having to observe any actual convective cloud development. In reality, however, convection interacts with the environment in complicated ways; it will eventually find its own effective LNB and manifest it through detraining masses and developing anvils (LNB_observation). This study conducts a near-global survey of LNB_observation for tropical deep convection using CloudSat data and makes a comparison with the corresponding LNB_sounding. The principal findings are as follows. First, although LNB_sounding provides a reasonable upper bound for convective development, the correlation between LNB_sounding and LNB_observation is low, suggesting that the ambient sounding contains limited information for accurately predicting the actual LNB. Second, the maximum mass outflow is located more than 3 km below LNB_sounding. Hence, from a convective transport perspective, LNB_sounding significantly overestimates the “destination” height of the detrained mass. Third, LNB_observation is consistently higher over land than over ocean, although LNB_sounding is similar between land and ocean. This difference is likely related to the contrasts in convective strength and environment between land and ocean. Finally, we estimate the bulk entrainment rates associated with the observed deep convection, which can serve as an observational basis for adjusting GCM cumulus parameterization.
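
The parcel-theory definition of LNB_sounding can be made concrete: lift a parcel, compare its temperature with the ambient sounding at each level, and take the highest level at which the parcel is still warmer. The profiles below are synthetic constant-lapse-rate curves, purely to illustrate the search:

```python
def lnb_from_profiles(z_m, t_env, t_parcel):
    """Level of neutral buoyancy: the highest level at which the lifted
    parcel is still warmer (hence positively buoyant) than its environment."""
    lnb = None
    for z, te, tp in zip(z_m, t_env, t_parcel):
        if tp > te:
            lnb = z
    return lnb

# synthetic sounding (hypothetical numbers, not CloudSat data)
z = [k * 500.0 for k in range(31)]              # 0 to 15 km every 500 m
t_env = [300.0 - 6.5e-3 * zi for zi in z]       # 6.5 K/km environment
t_parcel = [302.0 - 6.8e-3 * zi for zi in z]    # warmer parcel, steeper lapse
lnb = lnb_from_profiles(z, t_env, t_parcel)
```

With these profiles the parcel and environment curves cross near 6.7 km, so the highest buoyant grid level is 6.5 km; the study's point is that the observed detrainment height can sit well below this parcel-theory value.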

  8. Thermal and viscous effects on sound waves: revised classical theory.

    PubMed

    Davis, Anthony M J; Brenner, Howard

    2012-11-01

    In this paper the recently developed, bi-velocity model of fluid mechanics based on the principles of linear irreversible thermodynamics (LIT) is applied to sound propagation in gases taking account of first-order thermal and viscous dissipation effects. The results are compared and contrasted with the classical Navier-Stokes-Fourier results of Pierce for this same situation cited in his textbook. Comparisons are also made with the recent analyses of Dadzie and Reese, whose molecularly based sound propagation calculations furnish results virtually identical with the purely macroscopic LIT-based bi-velocity results below, as well as being well-supported by experimental data. Illustrative dissipative sound propagation examples involving application of the bi-velocity model to several elementary situations are also provided, showing the disjoint entropy mode and the additional, evanescent viscous mode.

  9. Improving the hospital 'soundscape': a framework to measure individual perceptual response to hospital sounds.

    PubMed

    Mackrill, J B; Jennings, P A; Cain, R

    2013-01-01

    Work on the perception of urban soundscapes has generated a number of perceptual models which are proposed as tools to test and evaluate soundscape interventions. However, despite the excessive sound levels and noise within hospital environments, perceptual models have not been developed for these spaces. To address this, a two-stage approach was developed by the authors to create such a model. First, semantics were obtained from listening evaluations which captured the feelings of individuals from hearing hospital sounds. Then, 30 participants rated a range of sound clips representative of a ward soundscape based on these semantics. Principal component analysis extracted a two-dimensional space representing an emotional-cognitive response. The framework enables soundscape interventions to be tested which may improve the perception of these hospital environments.
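
The extraction of a two-dimensional perceptual space is, mechanically, a principal component analysis of the participants' semantic ratings. A minimal sketch using power iteration with deflation; the rating matrix here is synthetic, standing in for the 30 participants' data:

```python
import math

def pca_2d(data):
    """Top two principal components of `data` (rows = observations)
    via power iteration with deflation; returns [(eigenvalue, vector), ...]."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    x = [[row[j] - means[j] for j in range(p)] for row in data]
    cov = [[sum(xi[a] * xi[b] for xi in x) / (n - 1)
            for b in range(p)] for a in range(p)]
    comps = []
    for _ in range(2):
        v = [1.0] * p
        for _ in range(200):                      # power iteration
            w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
            norm = math.sqrt(sum(c * c for c in w)) or 1.0
            v = [c / norm for c in w]
        lam = sum(v[a] * sum(cov[a][b] * v[b] for b in range(p))
                  for a in range(p))               # Rayleigh quotient
        comps.append((lam, v))
        cov = [[cov[a][b] - lam * v[a] * v[b]      # deflate the found component
                for b in range(p)] for a in range(p)]
    return comps

# synthetic ratings with dominant variation along the (1, 2, 0) direction
ratings = [[-2.0, -4.0, 0.1], [-1.0, -2.0, -0.1], [0.0, 0.0, 0.0],
           [1.0, 2.0, 0.1], [2.0, 4.0, -0.1]]
comps = pca_2d(ratings)
```

The two leading components span the reduced space onto which each sound clip's semantic ratings are projected, yielding the emotional-cognitive plane described above.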

  10. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can, in principle, be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that produce far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. Extending the computational domain to the far field, however, is not feasible. At the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. The large scales are more efficient than the small scales in radiating sound; the emphasis is thus on calculating the sound radiated by the large scales.

  11. Time dependent wave envelope finite difference analysis of sound propagation

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1984-01-01

    A transient finite difference wave envelope formulation is presented for sound propagation, without steady flow. Before the finite difference equations are formulated, the governing wave equation is first transformed to a form whose solution tends not to oscillate along the propagation direction. This transformation reduces the required number of grid points by an order of magnitude. Physically, the transformed pressure represents the amplitude of the conventional sound wave. The derivation for the wave envelope transient wave equation and appropriate boundary conditions are presented as well as the difference equations and stability requirements. To illustrate the method, example solutions are presented for sound propagation in a straight hard wall duct and in a two dimensional straight soft wall duct. The numerical results are in good agreement with exact analytical results.
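
The transformation can be illustrated in one line of algebra: writing p(x) = A(x)·e^{-ikx}, the envelope A is constant for a pure right-travelling wave and hence needs far fewer grid points than p itself. A toy frequency-domain check, with an assumed wavelength and grid spacing:

```python
import cmath
import math

K = 2 * math.pi / 0.1     # wavenumber for an assumed 0.1 m wavelength

def pressure(x):
    """Right-travelling plane wave, frequency domain."""
    return cmath.exp(-1j * K * x)

# the raw pressure oscillates on the wavelength scale, but the wave
# envelope A(x) = p(x) * exp(+i*K*x) is constant for this plane wave,
# so it is resolvable on a grid much coarser than the wavelength
xs = [i * 0.07 for i in range(10)]           # grid spacing > half wavelength
env = [pressure(x) * cmath.exp(1j * K * x) for x in xs]
```

The envelope samples are all (numerically) equal to one while the pressure samples swing over the full oscillation, which is the order-of-magnitude grid saving claimed above.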

  12. Musical Interfaces: Visualization and Reconstruction of Music with a Microfluidic Two-Phase Flow

    PubMed Central

    Mak, Sze Yi; Li, Zida; Frere, Arnaud; Chan, Tat Chuen; Shum, Ho Cheung

    2014-01-01

    Detection of sound waves in fluids has hardly been realized because of the lack of approaches to visualize the very minute sound-induced fluid motion. In this paper, we demonstrate the first direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interface responds robustly to sound of different frequencies and amplitudes, with sufficiently precise time resolution for the recording of musical notes and even subsequent reconstruction with high fidelity. Our work shows the possibility of sensing and transmitting vibrations as tiny as those induced by sound. This robust control of the interfacial dynamics enables a platform for investigating the mechanical properties of microstructures and for studying frequency-dependent phenomena, for example, in biological systems. PMID:25327509

  13. IFLA General Conference, 1987. Division on Bibliographic Control. Bibliography and Round Table on A/V Media Section. Papers.

    ERIC Educational Resources Information Center

    International Federation of Library Associations, The Hague (Netherlands).

    The two papers in this section focus on the bibliographic control of sound recordings, primarily phonograph records. In the first, "The National Discography for the United Kingdom," Christopher Roads discusses the problem of lack of easily accessible and up-to-date information about the growing collection of new sound recordings as they…

  14. Contributions of Morphological Awareness Skills to Word-Level Reading and Spelling in First-Grade Children with and without Speech Sound Disorder

    ERIC Educational Resources Information Center

    Apel, Kenn; Lawrence, Jessika

    2011-01-01

    Purpose: In this study, the authors compared the morphological awareness abilities of children with speech sound disorder (SSD) and children with typical speech skills and examined how morphological awareness ability predicted word-level reading and spelling performance above other known contributors to literacy development. Method: Eighty-eight…

  15. Communication Sciences Laboratory Quarterly Progress Report, Volume 9, Number 3: Research Programs of Some of the Newer Members of CSL.

    ERIC Educational Resources Information Center

    Feinstein, Stephen H.; And Others

    The research reported in these papers covers a variety of communication problems. The first paper covers research on sound navigation by the blind and involves echo perception research and relevant aspects of underwater sound localization. The second paper describes a research program in acoustic phonetics and concerns such related issues as…

  16. Playthings as Art Objects: Ideas and Resources. Kites and Sound Making Objects and Playing Cards and Dolls.

    ERIC Educational Resources Information Center

    City of Birmingham Polytechnic (England). Dept. of Art.

    Five booklets focusing on playthings as art objects draw together information about historical, ethnographic, and play traditions of various cultures of the world. The first booklet provides an overview of ideas and resources about kites, sound making objects, playing cards, and dolls. The second booklet on kites discusses the distribution and…

  17. Art and Sonic Mining in the Archives: Methods for Investigating the Wartime History of Birmingham School of Art

    ERIC Educational Resources Information Center

    Vaughan, Sian

    2018-01-01

    "Absconditi Viscus" (or "Hidden Entries") is a series of sound compositions based on the history of Birmingham School of Art during the First World War. Sound artist Justin Wiggan explored the concept of historical sonic information that although lost could still potentially permeate the archival record and the fabric of the…

  18. Deviant Processing of Letters and Speech Sounds as Proximate Cause of Reading Failure: A Functional Magnetic Resonance Imaging Study of Dyslexic Children

    ERIC Educational Resources Information Center

    Blau, Vera; Reithler, Joel; van Atteveldt, Nienke; Seitz, Jochen; Gerretsen, Patty; Goebel, Rainer; Blomert, Leo

    2010-01-01

    Learning to associate auditory information of speech sounds with visual information of letters is a first and critical step for becoming a skilled reader in alphabetic languages. Nevertheless, it remains largely unknown which brain areas subserve the learning and automation of such associations. Here, we employ functional magnetic resonance…

  19. Contrast of Hemispheric Lateralization for Oro-Facial Movements between Learned Attention-Getting Sounds and Species-Typical Vocalizations in Chimpanzees: Extension in a Second Colony

    ERIC Educational Resources Information Center

    Wallez, Catherine; Schaeffer, Jennifer; Meguerditchian, Adrien; Vauclair, Jacques; Schapiro, Steven J.; Hopkins, William D.

    2012-01-01

    Studies involving oro-facial asymmetries in nonhuman primates have largely demonstrated a right hemispheric dominance for communicative signals and conveyance of emotional information. A recent study on chimpanzee reported the first evidence of significant left-hemispheric dominance when using attention-getting sounds and rightward bias for…

  20. Solid phase stability of molybdenum under compression: Sound velocity measurements and first-principles calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiulu; Laboratory for Extreme Conditions Matter Properties, Southwest University of Science and Technology, 621010 Mianyang, Sichuan; Liu, Zhongli

    2015-02-07

    The high-pressure solid phase stability of molybdenum (Mo) has been the center of a long-standing controversy over its high-pressure melting. In this work, experimental and theoretical research was conducted to check its solid phase stability under compression. First, we performed sound velocity measurements from 38 to 160 GPa using a two-stage light gas gun and explosive loading in backward- and forward-impact geometries, along with high-precision velocity interferometry. From the sound velocities, we found no solid-solid phase transition in Mo before shock melting, which does not support the previous solid-solid phase transition conclusion inferred from the sharp drops of the longitudinal sound velocity [Hixson et al., Phys. Rev. Lett. 62, 637 (1989)]. Then, we searched its structures globally using the multi-algorithm collaborative crystal structure prediction technique combined with density functional theory. By comparing the enthalpies of the body-centered cubic (bcc) structure with those of the metastable structures, we found that bcc is the most stable structure in the range of 0–300 GPa. The present theoretical results, together with previous ones, strongly support our experimental conclusions.
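
The enthalpy comparison in the last step is a simple criterion: at pressure P the stable phase is the one minimizing H = E + P·V. A schematic sketch with made-up (E, V) pairs, not the paper's DFT values:

```python
def stable_phase(pressure_gpa, phases):
    """Phase with the lowest enthalpy H = E + P*V at the given pressure.
    E in eV/atom, V in Angstrom^3/atom; 1 GPa*A^3 = 0.0062415 eV."""
    to_ev = 0.0062415
    return min(phases, key=lambda s: phases[s][0]
               + pressure_gpa * phases[s][1] * to_ev)

# hypothetical candidate structures as (E, V) pairs; bcc has the lowest
# energy and the smallest volume here, so it wins at every pressure,
# mirroring the paper's conclusion for Mo
candidates = {"bcc": (-10.90, 15.5), "fcc": (-10.47, 15.9), "hcp": (-10.50, 15.8)}
winner = stable_phase(100.0, candidates)
```

In the actual study the (E, V) pairs come from DFT total-energy calculations on the structures found by the global search, evaluated across 0-300 GPa.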

  1. Gum chewing combined with oral intake of a semi-liquid diet in the postoperative care of patients after gynaecologic laparoscopic surgery.

    PubMed

    Pan, Yuping; Chen, Li; Zhong, Xiaorong; Feng, Suwen

    2017-10-01

    To evaluate the effects of gum chewing combined with a semi-liquid diet on patients after gynaecologic laparoscopic surgery. Previous studies suggested that chewing gum before traditional postoperative care promotes the postoperative recovery of bowel motility and function after open and laparoscopic surgery. However, gum chewing combined with a semi-liquid diet has not been reported in the postoperative care of patients following gynaecologic laparoscopic surgery. A prospective randomised study. A total of 234 patients were randomly assigned after elective gynaecologic laparoscopic surgery to a gum chewing and semi-liquid diet group, a semi-liquid only diet group or a liquid diet group. The gum chewing and semi-liquid diet group chewed sugar-free gum and began oral intake of a semi-liquid diet six hours postoperatively. The semi-liquid only diet and liquid diet groups received a semi-liquid diet or a liquid diet, respectively. The time to first bowel sounds, time to first regular postoperative bowel sounds, time to first passage of flatus, time to first defecation, serum gastrin and incidences of hunger, nausea, vomiting and abdominal distension were recorded. Hunger and gastrointestinal sensations were assessed using a four-point scale. Serum gastrin was assayed pre- and postoperatively using a gastrin radioimmunoassay kit. The gum chewing and semi-liquid diet group had first bowel sounds, first regular bowel sounds, first passage of flatus and first defecation earlier than the semi-liquid only and liquid groups. Increased serum gastrin was observed in the gum chewing and semi-liquid diet group. Incidences of nausea, vomiting and abdominal distention were not significantly different between the groups. Chewing gum combined with an oral intake of a semi-liquid diet is safe and accelerates the postoperative recovery of bowel function. It might be recommended as a better postoperative care regimen for patients after gynaecologic laparoscopic surgery.
This study developed a new postoperative diet regimen to improve the postoperative care of patients undergoing gynaecologic laparoscopic surgery. © 2016 John Wiley & Sons Ltd.

  2. Characterizing the 3-D atmosphere with NUCAPS sounding products from multiple platforms

    NASA Astrophysics Data System (ADS)

    Barnet, C. D.; Smith, N.; Gambacorta, A.; Wheeler, A. A.; Sjoberg, W.; Goldberg, M.

    2017-12-01

The JPSS Proving Ground and Risk Reduction (PGRR) Program launched the Sounding Initiative in 2014 to develop operational applications that use 3-D satellite soundings. These are near global daily swaths of vertical atmospheric profiles of temperature, moisture and trace gas species. When high vertical resolution satellite soundings first became available, their assimilation into user applications was slow: forecasters familiar with 2-D satellite imagery or 1-D radiosondes had neither the technical capability nor the product knowledge to readily ingest satellite soundings. Similarly, the satellite sounding developer community lacked the wherewithal to understand the many challenges forecasters face in their real-time decision-making. It took the PGRR Sounding Initiative to bring these two communities together and develop novel applications that now depend on NUCAPS soundings. NUCAPS - the NOAA Unique Combined Atmospheric Processing System - is platform agnostic and generates satellite soundings from measurements made by infrared and microwave sounder pairs on the MetOp (IASI/AMSU) and Suomi NPP (CrIS/ATMS) polar-orbiting platforms. We highlight here three new applications developed under the PGRR Sounding Initiative. They are: (i) aviation: NUCAPS identifies cold air "blobs" that cause jet fuel to freeze, (ii) severe weather: NUCAPS identifies areas of convective initiation, and (iii) air quality: NUCAPS identifies stratospheric intrusions and tracks long-range transport of biomass burning plumes. The value of NUCAPS being platform agnostic will become apparent with the JPSS-1 launch. NUCAPS soundings from Suomi NPP and JPSS-1, being 50 min apart, could capture fast-changing weather events and, together with NUCAPS soundings from the two MetOp platforms (about 4 hours earlier in the day than JPSS), could characterize diurnal cycles.
In this paper, we will summarize key accomplishments and assess whether NUCAPS maintains enough continuity in its sounding products from multiple platforms to sufficiently characterize atmospheric evolution at localized scales. With this we will address one of the primary data requirements that emerged in the Sounding Initiative, namely the need for a time sequence of satellite sounding products.

  3. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    PubMed

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive; that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  4. Development of sound measurement systems for auditory functional magnetic resonance imaging.

    PubMed

    Nam, Eui-Cheol; Kim, Sam Soo; Lee, Kang Uk; Kim, Sang Sik

    2008-06-01

Auditory functional magnetic resonance imaging (fMRI) requires quantification of sound stimuli in the magnetic environment and adequate isolation of background noise. We report the development of two novel sound measurement systems that accurately measure the sound intensity inside the ear while providing scanner-noise protection similar to or greater than that of earmuffs. First, we placed a 2.6 x 2.6-mm microphone in an insert phone that was connected to a headphone [microphone-integrated, foam-tipped insert-phone with a headphone (MIHP)]. This attenuated scanner noise by 37.8+/-4.6 dB, a level better than the reference amount obtained using earmuffs. Second, a nonmetallic optical microphone was integrated with a headphone [optical microphone in a headphone (OMHP)]; it effectively detected the change in sound intensity caused by variable compression on the cushions of the headphone. Wearing the OMHP reduced the noise by 28.5+/-5.9 dB and did not affect echoplanar magnetic resonance images. We also performed an auditory fMRI study using the MIHP system and demonstrated an increase in auditory cortical activation following a 10-dB increment in the intensity of sound stimulation. These two newly developed sound measurement systems achieved accurate quantification of sound stimuli while maintaining a level of noise protection similar to that of earmuffs in the auditory fMRI experiment.

  5. Sound production and mechanism in Heniochus chrysostomus (Chaetodontidae).

    PubMed

    Parmentier, Eric; Boyle, Kelly S; Berten, Laetitia; Brié, Christophe; Lecchini, David

    2011-08-15

The diversity in calls and sonic mechanisms appears to be important in Chaetodontidae. Calls in Chaetodon multicinctus seem to include tail slap, jump, pelvic fin flick and dorsal-anal fin erection behaviors. Pulsatile sounds are associated with dorsal elevation of the head, anterior extension of the ventral pectoral girdle and dorsal elevation of the caudal skeleton in Forcipiger flavissimus. In Hemitaurichthys polylepis, extrinsic swimbladder muscles could be involved in sounds originating from the swimbladder and correspond to the inward buckling of tissues situated dorsally in front of the swimbladder. These examples suggest that this mode of communication could be present in other members of the family. Sounds made by the pennant bannerfish (Heniochus chrysostomus) were recorded for the first time on coral reefs and when fish were hand held. In hand-held fishes, three types of calls were recorded: isolated pulses (51%), trains of four to 11 pulses (19%) and trains preceded by an isolated pulse (29%). Call frequencies were harmonic and had a fundamental frequency between 130 and 180 Hz. The fundamental frequency, sound amplitude and sound duration were not related to fish size. Data from morphology, sound analysis and electromyography recordings highlight that the calls are made by extrinsic sonic drumming muscles in association with the articulated bones of the ribcage. The pennant bannerfish system differs from other Chaetodontidae in terms of sound characteristics, associated body movements and, consequently, mechanism.

  6. Reverberation negatively impacts musical sound quality for cochlear implant users.

    PubMed

    Roy, Alexis T; Vigeant, Michelle; Munjal, Tina; Carver, Courtney; Jiradejvong, Patpong; Limb, Charles J

    2015-09-01

Satisfactory musical sound quality remains a challenge for many cochlear implant (CI) users. In particular, questionnaires completed by CI users suggest that reverberation due to room acoustics can negatively impact their music listening experience. The objective of this study was to more specifically characterize the effect of reverberation on musical sound quality in CI users, normal hearing (NH) non-musicians, and NH musicians using a previously designed assessment method, called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). In this method, listeners were randomly presented with an anechoic musical segment and five versions of this segment in which increasing amounts of reverberation were artificially added. Participants listened to the six versions and provided sound quality ratings between 0 (very poor) and 100 (excellent). Results demonstrated that on average CI users and NH non-musicians preferred the sound quality of anechoic versions to more reverberant versions. In comparison, NH musicians could be delineated into those who preferred the sound quality of anechoic pieces and those who preferred pieces with some reverberation. This is the first study, to our knowledge, to objectively compare the effects of reverberation on musical sound quality ratings in CI users. These results suggest that musical sound quality for CI users can be improved by non-reverberant listening conditions and musical stimuli in which reverberation is removed.

  7. Misconceptions About Sound Among Engineering Students

    NASA Astrophysics Data System (ADS)

    Pejuan, Arcadi; Bohigas, Xavier; Jaén, Xavier; Periago, Cristina

    2012-12-01

Our first objective was to detect misconceptions about the microscopic nature of sound among senior university students enrolled in different engineering programmes (from chemistry to telecommunications). We sought to determine how these misconceptions are expressed (qualitative aspect) and, only very secondarily, to gain a general idea of the extent to which they are held (quantitative aspect). Our second objective was to explore other misconceptions about wave aspects of sound. We have also considered the degree of consistency in the model of sound used by each student. Forty students answered a questionnaire including open-ended questions. Based on their free, spontaneous answers, the main results were as follows: a large majority of students answered most of the questions regarding the microscopic model of sound according to the scientifically accepted model; however, only a small number answered consistently. The main model misconception found was the notion that sound is propagated through the travelling of air particles, even in solids. Misconceptions and mental-model inconsistencies tended to depend on the engineering programme in which the student was enrolled. However, students in general were also inconsistent in applying their model of sound to individual sound properties. The main conclusion is that our students have not truly internalised the scientifically accepted model that they have allegedly learnt. This implies a need to design learning activities that take these findings into account in order to be truly effective.

  8. Time course of the influence of musical expertise on the processing of vocal and musical sounds.

    PubMed

    Rigoulot, S; Pell, M D; Armony, J L

    2015-04-02

Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known about the temporal course of the brain processes that decode the category of sounds and how expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of neural dynamics of auditory processing and reveal how it is impacted by the stimulus category and the expertise of participants. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Geometric and frequency EMI sounding of estuarine earthen flood defence embankments in Ireland using 1D inversion models

    NASA Astrophysics Data System (ADS)

    Viganotti, Matteo; Jackson, Ruth; Krahn, Hartmut; Dyer, Mark

    2013-05-01

Earthen flood defence embankments are linear structures, raised above the flood plain, that are commonly used as flood defences in rural settings; these are often relatively old structures constructed from locally garnered material, about which little is known in terms of design and construction. Alarmingly, urban development is widely reported to have expanded into previously rural areas; hence, acquiring knowledge about the flood defences protecting these areas has risen significantly on the agendas of basin and asset managers. Through two case studies, this paper focuses on electromagnetic induction (EMI) methods that can efficiently complement routine visual inspections and represent a first step towards more detailed investigations. The results are evaluated by comparison with ERT profiles and intrusive investigation data. The EM data, acquired using a GEM-2 apparatus for frequency sounding and an EM-31 apparatus for geometrical sounding, were processed using the prototype eGMS software tool being developed by the eGMS international research consortium; the depth sounding data interpretation was assisted by 1D inversions obtained with the EM1DFM software developed by the University of British Columbia. Although both sounding methods showed some limitations, the models obtained were consistent with ERT models, and the techniques proved useful as screening methods for identifying areas of interest, such as material interfaces or potential seepage areas, within the embankment structure. 1D modelling improved the rapid assessment of earthen flood defence embankments in an estuarine environment, providing evidence that EMI sounding could play an important role as a monitoring tool or as a first step towards more detailed investigations.

  10. Development of a hybrid wave based-transfer matrix model for sound transmission analysis.

    PubMed

    Dijckmans, A; Vermeir, G

    2013-04-01

In this paper, a hybrid wave based-transfer matrix model is presented that allows for the investigation of the sound transmission through finite multilayered structures placed between two reverberant rooms. The multilayered structure may consist of an arbitrary configuration of fluid, elastic, or poro-elastic layers. The field variables (structural displacements and sound pressures) are expanded in terms of structural and acoustic wave functions. The boundary and continuity conditions in the rooms determine the participation factors in the pressure expansions. The displacement of the multilayered structure is determined by the mechanical impedance matrix, which gives a relation between the pressures and transverse displacements at both sides of the structure. The elements of this matrix are calculated with the transfer matrix method. First, the hybrid model is numerically validated. Next, a comparison is made with sound transmission loss measurements of a hollow brick wall and a sandwich panel. Finally, numerical simulations show the influence of structural damping, room dimensions and plate dimensions on the sound transmission loss of multilayered structures.
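    The transfer matrix step can be illustrated in its simplest textbook form. The sketch below is not the paper's hybrid wave based-transfer matrix model: it computes only the normal-incidence transmission loss of a single fluid layer, and the "masonry-like" material values are hypothetical stand-ins (real masonry is elastic rather than fluid-like):

    ```python
    import numpy as np

    def fluid_layer_matrix(rho, c, d, f):
        """2x2 acoustic transfer matrix of a fluid layer at normal incidence
        (textbook form; layers are chained by matrix multiplication)."""
        k = 2 * np.pi * f / c          # wavenumber in the layer
        Z = rho * c                    # characteristic impedance of the layer
        return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                         [1j * np.sin(k * d) / Z, np.cos(k * d)]])

    def transmission_loss(T, Z0):
        """Normal-incidence transmission loss (dB) for a structure with
        total transfer matrix T placed between two fluids of impedance Z0."""
        tau_inv = (T[0, 0] + T[0, 1] / Z0 + Z0 * T[1, 0] + T[1, 1]) / 2
        return 20 * np.log10(abs(tau_inv))

    Z0 = 1.21 * 343.0  # characteristic impedance of air (rho * c)
    # sanity check: a layer identical to the surrounding air transmits fully
    tl_air = transmission_loss(fluid_layer_matrix(1.21, 343.0, 0.1, 1000.0), Z0)
    # heavy 10-cm layer with hypothetical, masonry-like fluid properties
    tl_wall = transmission_loss(fluid_layer_matrix(1800.0, 3000.0, 0.1, 1000.0), Z0)
    ```

    A multilayered structure is handled by multiplying the per-layer matrices, e.g. `T = T1 @ T2`, before evaluating the transmission loss; the layer matrix itself is what the paper's mechanical impedance matrix is built from.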

  11. Location, location, location: finding a suitable home among the noise

    PubMed Central

    Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.

    2012-01-01

    While sound is a useful cue for guiding the onshore orientation of larvae because it travels long distances underwater, it also has the potential to convey valuable information about the quality and type of the habitat at the source. Here, we provide, to our knowledge, the first evidence that settlement-stage coastal crab species can interpret and show a strong settlement and metamorphosis response to habitat-related differences in natural underwater sound. Laboratory- and field-based experiments demonstrated that time to metamorphosis in the settlement-stage larvae of common coastal crab species varied in response to different underwater sound signatures produced by different habitat types. The megalopae of five species of both temperate and tropical crabs showed a significant decrease in time to metamorphosis, when exposed to sound from their optimal settlement habitat type compared with other habitat types. These results indicate that sounds emanating from specific underwater habitats may play a major role in determining spatial patterns of recruitment in coastal crab species. PMID:22673354

  12. Callback response of dugongs to conspecific chirp playbacks.

    PubMed

    Ichikawa, Kotaro; Akamatsu, Tomonari; Shinke, Tomio; Adulyanukosol, Kanjana; Arai, Nobuaki

    2011-06-01

Dugongs (Dugong dugon) produce bird-like calls such as chirps and trills. The vocal responses of dugongs to playbacks of several acoustic stimuli were investigated. Animals were exposed to four different playback stimuli: a recorded chirp from a wild dugong, a synthesized down-sweep sound, a synthesized constant-frequency sound, and silence. Wild dugongs vocalized more frequently after playback of broadcast chirps than after constant-frequency sounds or silence. The down-sweep sound also elicited more vocal responses than did silence. No significant difference was found between the broadcast chirps and the down-sweep sound. The ratio of wild dugong chirps to all calls and the dominant frequencies of the wild dugong calls were significantly higher during playbacks of broadcast chirps, down-sweep sounds, and constant-frequency sounds than during those of silence. The source level and duration of dugong chirps increased significantly as signaling distance increased. No significant correlation was found between signaling distance and the source level of trills. These results show that dugongs vocalize in response to playbacks of frequency-modulated signals and suggest that the source level of dugong chirps may be manipulated to compensate for transmission loss between the source and receiver. This study provides the first behavioral observations revealing the function of dugong chirps. © 2011 Acoustical Society of America.

  13. Choking - infant under 1 year

    MedlinePlus

... coughing Soft or high-pitched sounds while inhaling First Aid Do NOT perform these steps if the infant ... see it. DO NOT perform choking first aid if the infant is coughing forcefully, has a ...

  14. Use of ultrasonography in the diagnosis of osteomalacia: preliminary results on experimental osteomalacia in the rat.

    PubMed

    Luisetto, G; Camozzi, V; De Terlizzi, F; Moschini, G; Ballanti, P

    1999-03-01

This study was performed to investigate the ability of ultrasonographic technique to distinguish osteomalacia from normal bone with the same mineral content. Ten rats with experimentally induced osteomalacia (group A) and 12 control rats having similar body size and weight (group B) were studied. Histomorphometric analysis confirmed the presence of osteomalacia in two rats from group A and showed normally mineralized bone in two rats from group B. Whole body bone mineral density, measured by dual-energy x-ray absorptiometry, was similar in the two groups (86 +/- 6 mg/cm2 in group A and 89 +/- 4 mg/cm2 in group B). The velocity of the ultrasound beam in bone was measured by a densitometer at the first caudal vertebra of each rat. The velocity was measured when the first peak of the waveform reached a predetermined minimum amplitude value (amplitude-dependent speed of sound) as well as at the lowest point of this curve before it reached the predetermined minimum amplitude (first minimum speed of sound). Although the amplitude-dependent speed of sound was similar in the two groups (1381.9 +/- 11.8 m/s in group A and 1390.9 +/- 17.8 m/s in group B), the first minimum speed of sound was clearly different (1446.1 +/- 8.9 m/s in group A and 1503.3 +/- 10.9 m/s in group B; P < 0.001). This study shows that ultrasonography could be used to identify alterations in bone quality, such as osteomalacia, but further studies need to be carried out before this method can be introduced into clinical practice.

  15. Possibilities of psychoacoustics to determine sound quality

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

For some years, acoustic engineers have become increasingly aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with regard to designing sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation needed to describe subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratory listening tests under free-field conditions with a single sound source and simple signals. Therefore, the results achieved cannot be transferred without difficulty to complex sound situations with several spatially distributed sound sources. Owing to the directionality and selectivity of human hearing, individual sound events can be singled out from among many. As early as the late 1970s, a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals with regard to physical and psychoacoustic procedures.
Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domains so that the signal components responsible for noise annoyance can be found. Especially in complex sound situations with several spatially distributed sound sources, standard one-channel measurement methods cannot adequately determine the sound quality, acoustic comfort, or annoyance of sound events.

  16. Sex-Specific Differences in Agonistic Behaviour, Sound Production and Auditory Sensitivity in the Callichthyid Armoured Catfish Megalechis thoracata

    PubMed Central

    Hadjiaghai, Oliwia; Ladich, Friedrich

    2015-01-01

Background: Data on sex-specific differences in sound production, acoustic behaviour and hearing abilities in fishes are rare. Representatives of numerous catfish families are known to produce sounds in agonistic contexts (intraspecific aggression and interspecific disturbance situations) using their pectoral fins. The present study investigates differences in agonistic behaviour, sound production and hearing abilities in males and females of a callichthyid catfish. Methodology/Principal Findings: Eight males and nine females of the armoured catfish Megalechis thoracata were investigated. Agonistic behaviour displayed during male-male and female-female dyadic contests and the sounds emitted were recorded, sound characteristics analysed and hearing thresholds measured using the auditory evoked potential (AEP) recording technique. Male pectoral spines were on average 1.7-fold longer than those of same-sized females. Visual and acoustic threat displays differed between sexes. Males produced low-frequency harmonic barks at longer distances and thumps at close distances, whereas females emitted broad-band pulsed crackles when close to each other. Female aggressive sounds were significantly shorter than those of males (167 ms versus 219 to 240 ms) and of higher dominant frequency (562 Hz versus 132 to 403 Hz). Sound duration and sound level were positively correlated with body and pectoral spine length, but dominant frequency was inversely correlated only with spine length. Both sexes showed a similar U-shaped hearing curve with lowest thresholds between 0.2 and 1 kHz and a drop in sensitivity above 1 kHz. The main energies of sounds were located at the most sensitive frequencies. Conclusions/Significance: Current data demonstrate that both male and female M. thoracata produce aggressive sounds, but the behavioural contexts and sound characteristics differ between sexes. Sexes do not differ in hearing, but it remains to be clarified if this is a general pattern among fish.
This is the first study to describe sex-specific differences in agonistic behaviour in fishes. PMID:25775458

  17. Postnatal development of echolocation abilities in a bottlenose dolphin (Tursiops truncatus): temporal organization.

    PubMed

    Favaro, Livio; Gnone, Guido; Pessani, Daniela

    2013-03-01

In spite of all the information available on adult bottlenose dolphin (Tursiops truncatus) biosonar, the ontogeny of its echolocation abilities has been investigated very little. Earlier studies have reported that neonatal dolphins can produce both whistles and burst-pulsed sounds just after birth and that early-pulsed sounds are probably a precursor of echolocation click trains. The aim of this research is to investigate the development of echolocation signals in a captive calf, born in the facilities of the Acquario di Genova. A set of 81 impulsive sounds was collected from birth to the seventh postnatal week, and six additional echolocation click trains were recorded when the dolphin was 1 year old. Moreover, behavioral observations, concurrent with sound production, were carried out by means of a video camera. For each sound we measured five acoustic parameters: click train duration (CTD), number of clicks per train, and minimum, maximum, and mean click repetition rate (CRR). CTD and number of clicks per train were found to increase with age. Maximum and mean CRR followed a decreasing trend with dolphin growth starting from the second postnatal week. The calf's first head scanning movement was recorded 21 days after birth. Our data suggest that in the bottlenose dolphin the early postnatal weeks are essential for the development of echolocation abilities and that the temporal features of the echolocation click trains remain relatively stable from the seventh postnatal week up to the first year of life. © 2013 Wiley Periodicals, Inc.
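    The temporal parameters listed above (CTD, number of clicks per train, and minimum/maximum/mean CRR) follow directly from click timestamps. A minimal sketch using hypothetical click times, not data from the study:

    ```python
    import numpy as np

    def click_train_stats(click_times_s):
        """Compute the temporal parameters of an echolocation click train:
        click train duration (CTD), click count, and min/max/mean click
        repetition rate (CRR, in clicks per second)."""
        t = np.sort(np.asarray(click_times_s, dtype=float))
        ici = np.diff(t)              # inter-click intervals, seconds
        rates = 1.0 / ici             # instantaneous repetition rates
        return {
            "ctd_s": t[-1] - t[0],    # click train duration
            "n_clicks": len(t),
            "crr_min": rates.min(),
            "crr_max": rates.max(),
            "crr_mean": rates.mean(),
        }

    # hypothetical train: 6 clicks with intervals shortening from 20 ms to 10 ms
    clicks = [0.0, 0.020, 0.038, 0.053, 0.065, 0.075]
    stats = click_train_stats(clicks)
    ```

    With these six clicks the CTD is 75 ms and the repetition rate rises from 50 to 100 clicks/s, the kind of within-train acceleration the averaged CRR parameters summarize.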

  18. Efficient method for events detection in phonocardiographic signals

    NASA Astrophysics Data System (ADS)

    Martinez-Alajarin, Juan; Ruiz-Merino, Ramon

    2005-06-01

The auscultation of the heart is still the first basic analysis tool used to evaluate the functional state of the heart, as well as the first indicator used to refer the patient to a cardiologist. In order to improve the diagnostic capabilities of auscultation, signal processing algorithms are currently being developed to assist the physician at primary care centers for adult and pediatric populations. A basic task in diagnosis from the phonocardiogram is to detect the events (main and additional sounds, murmurs and clicks) present in the cardiac cycle. This is usually done by applying a threshold and detecting the events that exceed it. However, this method often fails to detect the main sounds when additional sounds and murmurs exist, or it may join several events into a single one. In this paper we present a reliable method to detect the events present in the phonocardiogram, even in the presence of heart murmurs or additional sounds. The method detects relative maxima in the amplitude envelope of the phonocardiogram and computes a set of parameters associated with each event. Finally, a set of characteristics is extracted from each event to aid in its identification. In addition, the morphology of the murmurs is detected, which aids in differentiating diseases that can occur at the same temporal location. The algorithms have been applied to real normal heart sounds and murmurs, achieving satisfactory results.
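    The envelope-maxima strategy described above can be sketched in a few lines. The Hilbert envelope, smoothing window, threshold, and synthetic two-burst test signal below are illustrative assumptions, not the authors' parameter choices:

    ```python
    import numpy as np
    from scipy.signal import hilbert, find_peaks

    def detect_pcg_events(signal, fs, min_separation_s=0.05, rel_height=0.2):
        """Locate candidate events as relative maxima of the phonocardiogram's
        amplitude envelope that exceed a fraction of the global maximum."""
        envelope = np.abs(hilbert(signal))           # amplitude envelope
        win = max(1, int(0.01 * fs))                 # ~10 ms moving average
        envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
        peaks, _ = find_peaks(envelope,
                              height=rel_height * envelope.max(),
                              distance=int(min_separation_s * fs))
        return peaks / fs                            # event times in seconds

    # synthetic test: two Gaussian-windowed 150 Hz bursts mimicking S1 and S2
    fs = 2000
    t = np.arange(0, 1.0, 1 / fs)
    sig = np.zeros_like(t)
    for t0 in (0.1, 0.4):
        sig += np.exp(-((t - t0) ** 2) / (2 * 0.01 ** 2)) * np.sin(2 * np.pi * 150 * t)
    events = detect_pcg_events(sig, fs)              # two events expected
    ```

    Each detected peak would then be characterized further (duration, energy, spectral content) to separate main sounds from murmurs and clicks, as the abstract describes.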

  19. Least-squares Legendre spectral element solutions to sound propagation problems.

    PubMed

    Lin, W H

    2001-02-01

    This paper presents a novel algorithm and numerical results of sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, Siam 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995a] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flows, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform flow of Mach 0.5 in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem was to demonstrate the capability of the algorithm in treating sound propagation in a flow. Comparisons were made of the computed results with analytical results and results obtained by other methods. It is shown that all results computed by the present method are in good agreement with the analytical solutions and results of the first problem agree very well with those predicted by other schemes.

  20. Two-dimensional adaptation in the auditory forebrain

    PubMed Central

    Nagel, Katherine I.; Doupe, Allison J.

    2011-01-01

    Sensory neurons exhibit two universal properties: sensitivity to multiple stimulus dimensions, and adaptation to stimulus statistics. How adaptation affects encoding along primary dimensions is well characterized for most sensory pathways, but if and how it affects secondary dimensions is less clear. We studied these effects for neurons in the avian equivalent of primary auditory cortex, responding to temporally modulated sounds. We showed that the firing rate of single neurons in field L was affected by at least two components of the time-varying sound log-amplitude. When overall sound amplitude was low, neural responses were based on nonlinear combinations of the mean log-amplitude and its rate of change (first time differential). At high mean sound amplitude, the two relevant stimulus features became the first and second time derivatives of the sound log-amplitude. Thus a strikingly systematic relationship between dimensions was conserved across changes in stimulus intensity, whereby one of the relevant dimensions approximated the time differential of the other dimension. In contrast to stimulus mean, increases in stimulus variance did not change relevant dimensions, but selectively increased the contribution of the second dimension to neural firing, illustrating a new adaptive behavior enabled by multidimensional encoding. Finally, we demonstrated theoretically that inclusion of time differentials as additional stimulus features, as seen so prominently in the single-neuron responses studied here, is a useful strategy for encoding naturalistic stimuli, because it can lower the necessary sampling rate while maintaining the robustness of stimulus reconstruction to correlated noise. PMID:21753019
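    The first and second time differentials that serve as stimulus dimensions here are straightforward to compute from a log-amplitude trace. A sketch on a synthetic envelope (illustrative only; the study inferred the relevant dimensions from neural responses, not by direct differentiation):

    ```python
    import numpy as np

    fs = 1000.0                                # sample rate, Hz
    t = np.arange(0.0, 1.0, 1.0 / fs)
    log_amp = np.sin(2 * np.pi * 2.0 * t)      # toy log-amplitude trace, 2 Hz

    d1 = np.gradient(log_amp, 1.0 / fs)        # first time differential
    d2 = np.gradient(d1, 1.0 / fs)             # second time differential
    ```

    For this sinusoidal trace each differential is again a sinusoid, scaled by the angular frequency and shifted by 90 degrees relative to the previous one, which makes the three candidate features (log-amplitude and its first and second differentials) mutually complementary descriptions of the same modulation.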

  1. Emergence of band-pass filtering through adaptive spiking in the owl's cochlear nucleus

    PubMed Central

    MacLeod, Katrina M.; Lubejko, Susan T.; Steinberg, Louisa J.; Köppl, Christine; Peña, Jose L.

    2014-01-01

    In the visual, auditory, and electrosensory modalities, stimuli are defined by first- and second-order attributes. The fast time-pressure signal of a sound, a first-order attribute, is important, for instance, in sound localization and pitch perception, while its slow amplitude-modulated envelope, a second-order attribute, can be used for sound recognition. Ascending the auditory pathway from ear to midbrain, neurons increasingly show a preference for the envelope and are most sensitive to particular envelope modulation frequencies, a tuning considered important for encoding sound identity. The level at which this tuning property emerges along the pathway varies across species, and the mechanism of how this occurs is a matter of debate. In this paper, we target the transition between auditory nerve fibers and the cochlear nucleus angularis (NA). While the owl's auditory nerve fibers simultaneously encode the fast and slow attributes of a sound, one synapse further, NA neurons encode the envelope more efficiently than the auditory nerve. Using in vivo and in vitro electrophysiology and computational analysis, we show that a single-cell mechanism inducing spike threshold adaptation can explain the difference in neural filtering between the two areas. We show that spike threshold adaptation can explain the increased selectivity to modulation frequency, as input level increases in NA. These results demonstrate that a spike generation nonlinearity can modulate the tuning to second-order stimulus features, without invoking network or synaptic mechanisms. PMID:24790170
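The single-cell mechanism, spike threshold adaptation, can be sketched with a generic leaky integrator whose threshold jumps after each spike and then relaxes back. All time constants and amplitudes below are hypothetical, not the authors' fitted values:

```python
import numpy as np

def adaptive_threshold_spikes(stim, dt=1e-3, tau_m=0.01, tau_th=0.05,
                              th0=1.0, dth=0.5):
    """Leaky integrator with a spike-triggered adaptive threshold:
    each spike raises the threshold by dth, and the threshold then
    decays back toward its resting value th0."""
    v, th = 0.0, th0
    spikes = []
    for i, s in enumerate(stim):
        v += dt / tau_m * (s - v)        # membrane integration
        th += dt / tau_th * (th0 - th)   # threshold relaxation
        if v >= th:
            spikes.append(i)
            th += dth                    # threshold adaptation
            v = 0.0                      # reset after the spike
    return spikes

# A step stimulus: adaptation emphasizes the onset and lowers
# the sustained firing rate.
stim = np.concatenate([np.zeros(100), 3.0 * np.ones(900)])
spikes = adaptive_threshold_spikes(stim)
```

Comparing with `dth=0` (no adaptation) shows the effect directly: the adaptive neuron fires fewer sustained spikes for the same input, illustrating how a spike-generation nonlinearity alone can reshape the response to a stimulus envelope.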

  2. Bubble dynamics in drinks

    NASA Astrophysics Data System (ADS)

    Broučková, Zuzana; Trávníček, Zdeněk; Šafařík, Pavel

    2014-03-01

    This study introduces two physical effects known from beverages: the effect of sinking bubbles and the hot chocolate sound effect. The paper presents two simple "kitchen" experiments. The first and second effects are indicated by means of a flow visualization and microphone measurement, respectively. To quantify the second (acoustic) effect, sound records are analyzed using time-frequency signal processing, and the obtained power spectra and spectrograms are discussed.
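The time-frequency analysis used for the acoustic effect can be sketched with a minimal short-time power spectrogram; the chirp below stands in for the rising resonance pitch heard in the hot chocolate effect (all parameters are illustrative):

```python
import numpy as np

def spectrogram(x, fs, win=256, hop=128):
    """Minimal short-time power spectrogram with a Hann window."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
    S = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power per frame
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    times = (np.arange(len(frames)) * hop + win / 2) / fs
    return freqs, times, S.T   # rows = frequency, columns = time

fs = 8000
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * (400.0 + 600.0 * t) * t)   # pitch rises over time
freqs, times, S = spectrogram(x, fs)
```

Reading off the frequency of the strongest bin in each column tracks the rising pitch over time, which is exactly how the spectrograms in the study expose the effect.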

  3. Wavelet-Based Adaptive Denoising of Phonocardiographic Records

    DTIC Science & Technology

    2001-10-25

    phonocardiography, including the recording of fetal heart sounds on the maternal abdominal surface. Keywords - phonocardiography, wavelets, denoising, signal... fetal heart rate monitoring [2], [7], [8]. Unfortunately, heart sound records are very often disturbed by various factors, which can prohibit their... recorded the acoustic signals. The first microphone was inserted into the focus of a stethoscope and it recorded the acoustic signals of the heart (heart...

  4. High-Frequency Sound Interaction in Ocean Sediments

    DTIC Science & Technology

    2002-09-30

    sediment attenuation (10-300 kHz) and sound speed (10-300 kHz) and determine constraints imposed on sediment acoustic models, such as poroelastic (Biot... by poroelastic seafloors: First-order theory," accepted for publication in J. Acoust. Soc. Am. 5. K. L. Williams, "An effective density fluid model... poroelastic sediment models, the appropriateness of stochastic descriptions of sediment heterogeneities, the importance of single versus multiple

  5. Cabin acoustical noise

    NASA Astrophysics Data System (ADS)

    Homick, J. L.

    1981-12-01

    Using a hand-held sound pressure level meter, the crew made one-octave-band and A-weighted sound level measurements at four locations in the Orbiter on Mission Day 1. The data were voice recorded and transmitted to the ground prior to the first inflight sleep period. The data obtained are summarized. From a physiological point of view, the noise levels measured on STS-1 were not hazardous to the crewmen's hearing.

  6. Hoeren unter Wasser: Absolute Reizschwellen und Richtungswahrnehnumg (Underwater Hearing: Absolute Thresholds and Sound Localization),

    DTIC Science & Technology

    The article deals first with the theoretical foundations of underwater hearing and the effects of the acoustical characteristics of water on hearing... lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability probably vanishes in the middle range of frequencies. (Author)

  7. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography

    PubMed Central

    2013-01-01

    Background It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Methods Ten and nine healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory tasks, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but they did not perform the conditioning session. MEG was not recorded in the control experiment. Results The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session. Conclusions We demonstrated that metronome sounds can cause mental fatigue sensation as a result of repeated pairings of the sounds with mental fatigue, and that the insular cortex is involved in the neural substrates of this phenomenon. PMID:23764106

  8. Rising tones and rustling noises: Metaphors in gestural depictions of sounds

    PubMed Central

    Scurto, Hugo; Françoise, Jules; Bevilacqua, Frédéric; Houix, Olivier; Susini, Patrick

    2017-01-01

    Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested a different role for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of the hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g. rain drops, rustling leaves) pantomimed and embodied by the participants’ gestures. PMID:28750071

  9. [Sound improves distinction of low intensities of light in the visual cortex of a rabbit].

    PubMed

    Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V

    2011-01-01

    Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6 times as compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). Also, the addition of the sound led to an arrangement of intensities in an ascending order. At the same time, the sound narrowed the space of larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) by a factor of 1.33. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. Sensory spaces revealed by complex stimuli were two-dimensional. This fact can be a consequence of integration of sound and light into a unified complex at simultaneous stimulation.

  10. Apparatus and method for suppressing sound in a gas turbine engine powerplant

    NASA Technical Reports Server (NTRS)

    Wynosky, Thomas A. (Inventor); Mischke, Robert J. (Inventor)

    1992-01-01

    A method and apparatus for suppressing jet noise in a gas turbine engine powerplant 10 is disclosed. Various construction details are developed for providing sound suppression at sea level take-off operative conditions and not providing sound suppression at cruise operative conditions. In one embodiment, the powerplant 10 has a lobed mixer 152 between a primary flowpath 44 and a second flowpath 46, a diffusion region downstream of the lobed mixer region (first mixing region 76), and a deployable ejector/mixer 176 in the diffusion region which forms a second mixing region 78 having a diffusion flowpath 72 downstream of the ejector/mixer and sound absorbing structure 18 bounding the flowpath throughout the diffusion region. The method includes deploying the ejector/mixer 176 at take-off and stowing the ejector/mixer at cruise.

  11. Metaporous layer to overcome the thickness constraint for broadband sound absorption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jieun; Lee, Joong Seok; Kim, Yoon Young, E-mail: yykim@snu.ac.kr

    The sound absorption of a porous layer is affected by its thickness, especially in the low-frequency range. If a hard-backed porous layer contains periodic arrangements of rigid partitions oriented parallel and perpendicular to the direction of incoming sound waves, the lower bound of effective sound absorption can be pushed much lower and the overall absorption performance enhanced. The consequence of rigid partitioning in a porous layer is to make the first thickness resonance mode in the layer appear at much lower frequencies compared to that in the original homogeneous porous layer of the same thickness. Moreover, appropriate partitioning yields multiple thickness resonances with higher absorption peaks through impedance matching. The physics of the partitioned porous layer, or the metaporous layer, is theoretically investigated in this study.
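For a plain hard-backed layer, the first thickness resonance is the quarter-wavelength mode, f1 = c / (4d): low-frequency absorption normally demands a thick layer, which is the constraint the partitions circumvent. A small numeric check with illustrative thickness values (not taken from the paper):

```python
# Quarter-wavelength thickness resonance of a hard-backed layer:
# f1 = c / (4 d). Thickness values below are illustrative.
c = 343.0                      # speed of sound in air, m/s
thicknesses = [0.05, 0.10]     # layer thickness, m
f1 = [c / (4 * d) for d in thicknesses]   # ~1715 Hz and ~858 Hz
```

Doubling the thickness halves the first resonance frequency, so without partitioning, broadband low-frequency absorption scales directly with bulk.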

  12. Confinement of gigahertz sound and light in Tamm plasmon resonators

    NASA Astrophysics Data System (ADS)

    Villafañe, V.; Bruchhausen, A. E.; Jusserand, B.; Senellart, P.; Lemaître, A.; Fainstein, A.

    2015-10-01

    We demonstrate theoretically and by pump-probe picosecond acoustics experiments the simultaneous confinement of light and gigahertz sound in Tamm plasmon resonators, formed by depositing a thin layer of Au onto a GaAs/AlGaAs Bragg reflector. The cavity has InGaAs quantum dots (QDs) embedded at the maximum of the confined optical field in the first GaAs layer. The different sound generation and detection mechanisms are theoretically analyzed. It is shown that the Au layer absorption and the resonant excitation of the QDs are the more efficient light-sound transducers for the coupling of near-infrared light with the confined acoustic modes, while the displacement of the interfaces is the main back-action mechanism at these energies. The prospects for the compact realization of optomechanical resonators based on Tamm plasmon cavities are discussed.

  13. Abrupt uplift within the past 1700 years at Southern Puget Sound, Washington

    USGS Publications Warehouse

    Bucknam, R.C.; Hemphill-Haley, E.; Leopold, E.B.

    1992-01-01

    Shorelines rose as much as 7 meters along southern Puget Sound and Hood Canal between 500 and 1700 years ago. Evidence for this uplift consists of elevated wave-cut shore platforms near Seattle and emerged, peat-covered tidal flats as much as 60 kilometers to the southwest. The uplift was too rapid for waves to leave intermediate shorelines on even the best preserved platform. The tidal flats also emerged abruptly; they changed into freshwater swamps and meadows without first becoming tidal marshes. Where uplift was greatest, it adjoined an inferred fault that crosses Puget Sound at Seattle and it probably accompanied reverse slip on that fault 1000 to 1100 years ago. The uplift and probable fault slip show that the crust of the North America plate contains potential sources of damaging earthquakes in the Puget Sound region.

  14. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
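An ITD-based localization cue of the kind the robot adapts can be sketched by cross-correlating the two ear signals and reading off the lag of the peak. This is a generic signal-level sketch with invented parameters; the robot's silicon cochleae operate on spikes rather than raw waveforms:

```python
import numpy as np

def itd_crosscorr(left, right, fs):
    """Estimate the interaural time difference from the peak of the
    cross-correlation; a negative value means the right-ear signal
    is delayed, i.e. the source is off to the left."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    return lag / fs

fs = 16000
rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)
delay = 8                                      # samples, i.e. 0.5 ms
left = noise
right = np.concatenate([np.zeros(delay), noise[:-delay]])
itd = itd_crosscorr(left, right, fs)           # ~ -0.5 ms
```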

  15. The extreme ultraviolet spectrograph: A radial groove grating, sounding rocket-borne, astronomical instrument

    NASA Technical Reports Server (NTRS)

    Wilkinson, Erik; Green, James C.; Cash, Webster

    1993-01-01

    The design, calibration, and sounding rocket flight performance of a novel spectrograph suitable for moderate-resolution EUV spectroscopy are presented. The sounding rocket-borne instrument uses a radial groove grating to maintain a high system efficiency while controlling the aberrations induced when doing spectroscopy in a converging beam. The instrument has a resolution of approximately 2 A across the 200-330 A bandpass with an average effective area of 2 sq cm. The instrument, called the Extreme Ultraviolet Spectrograph, acquired the first EUV spectra in this wavelength region of the hot white dwarf G191-B2B and the late-type star Capella.

  16. Anticipated Effectiveness of Active Noise Control in Propeller Aircraft Interiors as Determined by Sound Quality Tests

    NASA Technical Reports Server (NTRS)

    Powell, Clemans A.; Sullivan, Brenda M.

    2004-01-01

    Two experiments were conducted, using sound quality engineering practices, to determine the subjective effectiveness of hypothetical active noise control systems in a range of propeller aircraft. The two tests differed by the type of judgments made by the subjects: pair comparisons in the first test and numerical category scaling in the second. Although the results of the two tests were in general agreement that the hypothetical active control measures improved the interior noise environments, the pair comparison method appears to be more sensitive to subtle changes in the characteristics of the sounds which are related to passenger preference.

  17. Model-based auralizations of violin sound trends accompanying plate-bridge tuning or holding.

    PubMed

    Bissinger, George; Mores, Robert

    2015-04-01

    To expose systematic trends in violin sound accompanying "tuning" only the plates or only the bridge, the first structural acoustics-based model auralizations of violin sound were created by passing a bowed-string driving force measured at the bridge of a solid body violin through the dynamic filter (DF) model radiativity profile "filter" RDF(f) (frequency-dependent pressure per unit driving force, free-free suspension, anechoic chamber). DF model auralizations for the more realistic case of a violin held/played in a reverberant auditorium reveal that holding the violin greatly diminishes its low frequency response, an effect only weakly compensated for by auditorium reverberation.

  18. Low Voltage MEMS Digital Loudspeaker Array Based on Thin-film PZT Actuators

    NASA Astrophysics Data System (ADS)

    Fanget, S.; Casset, F.; Dejaeger, R.; Maire, F.; Desloges, B.; Deutzer, J.; Morisson, R.; Bohard, Y.; Laroche, B.; Escato, J.; Leclere, Q.

    This paper reports on the development of a Digital Loudspeaker Array (DLA) solution based on Pb(Zr0.52,Ti0.48)O3 (PZT) thin-film actuated membranes. These membranes, called speaklets, are arranged in a matrix and operate in a binary manner by emitting short pulses of sound pressure. Using the principle of additivity of pressures in air, it is possible to reconstruct audible sounds. For the first time, electromechanical and acoustic characterizations are reported for a 256-MEMS-membrane DLA. Sounds audible as far as several meters from the loudspeaker have been generated using a low voltage (8 V).

  19. Statistics of natural reverberation enable perceptual separation of sound and space

    PubMed Central

    Traer, James; McDermott, Josh H.

    2016-01-01

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730

  20. Statistics of natural reverberation enable perceptual separation of sound and space.

    PubMed

    Traer, James; McDermott, Josh H

    2016-11-29

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
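The frequency-dependent exponential decay that the authors measured is commonly summarized by a reverberation time (RT60). A minimal sketch using Schroeder backward integration on a synthetic exponentially decaying noise IR; the study's IRs were measured in real spaces and analyzed per frequency band, so this is only an illustration of the decay-rate estimate:

```python
import numpy as np

def rt60_schroeder(ir, fs):
    """Estimate RT60 via Schroeder backward integration: fit the
    -5 to -25 dB span of the energy-decay curve and extrapolate
    the fitted slope to -60 dB."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]        # energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    t = np.arange(len(ir)) / fs
    mask = (edc_db <= -5) & (edc_db >= -25)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)   # dB per second
    return -60.0 / slope

fs = 8000
t = np.arange(0.0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
ir = rng.standard_normal(len(t)) * np.exp(-t / 0.1)  # synthetic decaying IR
rt60 = rt60_schroeder(ir, fs)                        # ~0.69 s
```

For an amplitude envelope exp(-t/tau), the energy decays at about 8.69/tau dB per second, so tau = 0.1 s corresponds to an RT60 near 0.69 s; running the estimator per frequency band would reproduce the frequency-dependent decay rates described above.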

  1. Design and evaluation of a parametric model for cardiac sounds.

    PubMed

    Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador

    2017-10-01

    Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one deterministic and the other stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called the residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average percentage root-mean-square difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test we conducted a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model evaluated as rigorously as the one proposed here. Copyright © 2017 Elsevier Ltd. All rights reserved.
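The deterministic stage of the model uses Matching Pursuit, which greedily approximates a signal as a linear combination of dictionary atoms. A minimal sketch with a toy Gabor-like dictionary; the atom parameters are hypothetical, not the paper's dictionary:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy MP: repeatedly pick the unit-norm atom most correlated
    with the residual and subtract its projection."""
    residual = signal.copy()
    approx = np.zeros_like(signal)
    for _ in range(n_atoms):
        corr = dictionary @ residual        # correlation with each atom
        k = np.argmax(np.abs(corr))
        approx += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
    return approx, residual

# Toy dictionary of unit-norm Gabor-like atoms.
n = 256
t = np.arange(n)
atoms = []
for f in (4, 8, 16, 32):
    for ctr in (64, 128, 192):
        a = np.exp(-0.5 * ((t - ctr) / 20.0) ** 2) * np.cos(2 * np.pi * f * t / n)
        atoms.append(a / np.linalg.norm(a))
D = np.array(atoms)

sig = 2.0 * D[5] + 0.5 * D[9]               # a two-atom test signal
approx, res = matching_pursuit(sig, D, 5)
```

As in the paper's pipeline, the leftover `res` is the stochastic part, which would then be modeled as an autoregressive process via LPC.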

  2. Superposition-Based Analysis of First-Order Probabilistic Timed Automata

    NASA Astrophysics Data System (ADS)

    Fietzke, Arnaud; Hermanns, Holger; Weidenbach, Christoph

    This paper discusses the analysis of first-order probabilistic timed automata (FPTA) by a combination of hierarchic first-order superposition-based theorem proving and probabilistic model checking. We develop the overall semantics of FPTAs and prove soundness and completeness of our method for reachability properties. Basically, we decompose FPTAs into their time plus first-order logic aspects on the one hand, and their probabilistic aspects on the other hand. Then we exploit the time plus first-order behavior by hierarchic superposition over linear arithmetic. The result of this analysis is the basis for the construction of a reachability equivalent (to the original FPTA) probabilistic timed automaton to which probabilistic model checking is finally applied. The hierarchic superposition calculus required for the analysis is sound and complete on the first-order formulas generated from FPTAs. It even works well in practice. We illustrate the potential behind it with a real-life DHCP protocol example, which we analyze by means of tool chain support.

  3. 15. Interior view of first floor aisle in 1904 middle ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Interior view of first floor aisle in 1904 middle section. Camera pointed south from near juncture with 1922 north section. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA

  4. Underwater sound from vessel traffic reduces the effective communication range in Atlantic cod and haddock.

    PubMed

    Stanley, Jenni A; Van Parijs, Sofie M; Hatch, Leila T

    2017-11-07

    Stellwagen Bank National Marine Sanctuary is located in Massachusetts Bay off the densely populated northeast coast of the United States; consequently, the marine inhabitants of the area are exposed to elevated levels of anthropogenic underwater sound, particularly due to commercial shipping. The current study investigated the alteration of estimated effective communication spaces at three spawning locations for populations of the commercially and ecologically important fishes Atlantic cod (Gadus morhua) and haddock (Melanogrammus aeglefinus). Both the ambient sound pressure levels and the effective vocalization radii, estimated through spherical spreading models, fluctuated dramatically during the three-month recording periods. Increases in sound pressure level appeared to be largely driven by large-vessel activity and accordingly exhibited a significant positive correlation with the number of Automatic Identification System tracked vessels at two of the three sites. The near-constant high levels of low-frequency sound, and the consequent reduction in communication space observed at these recording sites during times of high vocalization activity, raise significant concerns that communication between conspecifics may be compromised during critical biological periods. This study takes the first steps in evaluating these animals' communication spaces and the alteration of these spaces due to anthropogenic underwater sound.
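Under the spherical spreading model used for these estimates, transmission loss grows as TL = 20 log10(r), so the effective vocalization radius is set by the signal excess over the noise floor, and a 20 dB rise in ambient noise shrinks the radius tenfold. A sketch with illustrative levels (not the study's measured values):

```python
def effective_radius(source_level, noise_level, detection_threshold=0.0):
    """Range (m) at which a call received under spherical spreading,
    RL = SL - 20*log10(r), falls to the noise floor plus a detection
    threshold. All levels in dB re 1 uPa (illustrative values)."""
    excess = source_level - noise_level - detection_threshold
    return 10 ** (excess / 20.0)

quiet = effective_radius(130.0, 90.0)      # 40 dB excess -> 100 m
shipping = effective_radius(130.0, 110.0)  # 20 dB excess -> 10 m
```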

  5. Whale contribution to long time series of low-frequency oceanic ambient sound

    NASA Astrophysics Data System (ADS)

    Andrew, Rex K.; Howe, Bruce M.; Mercer, James A.

    2002-05-01

    It has long been known that baleen (mainly blue and fin) whale vocalizations are a component of oceanic ambient sound. Urick reports that the famous "20-cycle pulses" were observed even from the first Navy hydrophone installations in the early 1950's. As part of the Acoustic Thermometry of Ocean Climate (ATOC) and the North Pacific Acoustic Laboratory (NPAL) programs, more than 6 years of nearly continuous ambient sound data have been collected from Sound Surveillance System (SOSUS) sites in the northeast Pacific. These records now show that the average level of the ambient sound has risen by as much as 10 dB since the 1960's. Although much of this increase is probably attributable to manmade sources, the whale call component is still prominent. The data also show that the whale signal is clearly seasonal: in coherent averages of year-long records, the whale call signal is the only feature that stands out, making strong and repeatable patterns as the whale population migrates past the hydrophone systems. This prominent and sometimes dominant component of ambient sound has perhaps not been fully appreciated in current ambient noise models. [Work supported by ONR.]
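The coherent averaging described above works because the seasonal whale signal repeats with the calendar while other fluctuations do not: stacking N year-long records leaves the seasonal component intact while incoherent noise shrinks by roughly sqrt(N). A synthetic illustration with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(365)
# A repeatable "migration season" bump near day 300 (synthetic).
seasonal = 4.0 * np.exp(-0.5 * ((days - 300) / 20.0) ** 2)
# Six years of the same seasonal signal buried in independent noise.
years = [seasonal + 3.0 * rng.standard_normal(365) for _ in range(6)]
stacked = np.mean(years, axis=0)   # coherent average across years
```

In a single year the bump is comparable to the noise; in the six-year stack the residual noise level drops by about a factor of 2.4, letting the seasonal feature stand out.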

  6. Comparison of 3 bonded lingual appliances by auditive analysis and subjective assessment.

    PubMed

    Hohoff, Ariane; Stamm, Thomas; Goder, Gerhard; Sauerland, Cristina; Ehmer, Ulrike; Seifert, Eberhard

    2003-12-01

    The aim of this prospective study was to compare for the first time the influences of lingual appliances of different dimensions on sound performance and oral comfort. The study group comprised 12 subjects (10 women, 2 men; mean age, 33.96 years). Their sound production was recorded by means of a digital audio tape recorder before, 10 minutes after, and 24 hours after placement of the different appliances for semiobjective assessment by 3 blinded speech professionals. This was followed by supplementary subjective ratings of sound performance and oral comfort by the patients. All lingual appliances induced significant impairment in sound performance and oral comfort. However, they varied significantly with respect to the degree of impairment. The smaller the appliance, the less pronounced the impairments it induced. The smallest changes were induced by a bonded canine-to-canine retainer, followed by customized lingual brackets and prefabricated lingual brackets. By using lower-profile customized brackets, the orthodontist can significantly enhance patient comfort and significantly reduce impairments of sound performance in comparison with prefabricated brackets with larger dimensions. Before placing a lingual appliance, however, patients should be briefed on possible effects such as impaired sound production and decreased oral comfort.

  7. Repeated imitation makes human vocalizations more word-like.

    PubMed

    Edmiston, Pierce; Perlman, Marcus; Lupyan, Gary

    2018-03-14

    People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words. © 2018 The Author(s).

  8. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    PubMed Central

    Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe

    2017-01-01

    It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770

  9. Adjustment of interaural time difference in head related transfer functions based on listeners' anthropometry and its effect on sound localization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi

    2005-04-01

    Because the transfer functions governing subjective sound localization (HRTFs) show strong individuality, sound localization systems based on synthesis of HRTFs require suitable HRTFs for individual listeners. However, it is impractical to obtain HRTFs for all listeners based on measurements. Improving sound localization by adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry might be a practical method. This study first developed a new method to estimate interaural time differences (ITDs) using HRTFs. Then correlations between ITDs and anthropometric parameters were analyzed using the canonical correlation method. Results indicated that parameters relating to head size, and shoulder and ear positions are significant. Consequently, we attempted to express ITDs in terms of a listener's anthropometric data. In this process, the change of ITDs as a function of azimuth angle was parameterized as a sum of sine functions. Then the parameters were analyzed using multiple regression analysis, in which the anthropometric parameters were used as explanatory variables. The predicted (individualized) ITDs were installed in the non-individualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
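
    The sum-of-sines parameterization of ITD versus azimuth described above can be sketched with an ordinary least-squares fit. This is an illustrative reconstruction, not the authors' code: the number of sine terms, the synthetic spherical-head ITD curve, and all function names are assumptions.

```python
import numpy as np

def fit_itd_sines(azimuth_deg, itd_us, n_terms=3):
    """Fit ITD(azimuth) as a sum of sine harmonics by least squares.

    Illustrative sketch of the parameterization in the abstract;
    the number of terms (n_terms) is an assumption.
    """
    theta = np.radians(azimuth_deg)
    # Design matrix with columns sin(k*theta), k = 1..n_terms
    A = np.column_stack([np.sin(k * theta) for k in range(1, n_terms + 1)])
    coeffs, *_ = np.linalg.lstsq(A, itd_us, rcond=None)
    return coeffs

def eval_itd_sines(coeffs, azimuth_deg):
    """Evaluate the fitted sum of sines at the given azimuth(s)."""
    theta = np.radians(azimuth_deg)
    return sum(c * np.sin((k + 1) * theta) for k, c in enumerate(coeffs))

# Synthetic example: a spherical-head-like ITD curve with an assumed
# 700-microsecond maximum (roughly the human range).
az = np.linspace(-90, 90, 37)
itd = 700.0 * np.sin(np.radians(az))
coeffs = fit_itd_sines(az, itd)
```

    In the study itself it is the fitted sine coefficients, rather than raw ITD samples, that would then be regressed on the anthropometric parameters.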

  10. [Echolocation calls of free-flying Himalayan swiftlets (Aerodramus brevirostris)].

    PubMed

    Wang, Bin; Ma, Jian-Zhang; Chen, Yi; Tan, Liang-Jing; Liu, Qi; Shen, Qi-Qi; Liao, Qing-Yi; Zhang, Li-Biao

    2013-02-01

    Here, we present our findings on the free-flying echolocation calls of Himalayan swiftlets (Aerodramus brevirostris), which were recorded in Shenjing Cave, Hupingshan National Reserve, Shimen County, Hunan Province in June 2012, using an Avisoft-UltraSoundGate 116(e). We noted that after foraging at dusk, the Himalayan swiftlets flew fast into the cave without clicks, and then slowed down in the dark area of the cave, emitting sounds. The echolocation sounds of Himalayan swiftlets are broadband, double noise-burst clicks separated by a short pause. The inter-pulse intervals between double clicks (99.3±3.86 ms) were longer than those within double clicks (6.6±0.42 ms) (P<0.01). With the exception of peak frequency (6.2±0.08 kHz vs. 6.2±0.10 kHz, P>0.05) and pulse duration (2.9±0.12 ms vs. 3.2±0.17 ms, P>0.05), the other parameters (maximum frequency, minimum frequency, frequency bandwidth, and power) differed significantly between the first and second clicks. The maximum frequency of the first pulse (20.1±1.10 kHz) was higher than that of the second (15.4±0.98 kHz) (P<0.01), while the minimum frequency of the first pulse (3.7±0.12 kHz) was lower than that of the second (4.0±0.09 kHz) (P<0.05); as a result, the frequency bandwidth of the first pulse (16.5±1.17 kHz) was broader than that of the second (11.4±1.01 kHz) (P<0.01). The power of the first pulse (-32.5±0.60 dB) was higher than that of the second (-35.2±0.94 dB) (P<0.05). More importantly, we found that Himalayan swiftlets emitted echolocation pulses containing ultrasonic sound, with a maximum frequency reaching 33.2 kHz.

  11. Nonlinear frequency compression: effects on sound quality ratings of speech and music.

    PubMed

    Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-03-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant of increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an audiologist in clinical NFC hearing aid fittings in achieving a balance between high frequency audibility and sound quality.
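
    The two NFC parameters varied in the first study define a monotonic input-to-output frequency map. The following is a minimal sketch assuming one common formulation, in which frequencies above the cutoff are compressed on a log-frequency axis; the abstract does not specify the exact algorithm, and the parameter defaults here are illustrative only.

```python
def nfc_map(f_in_hz, cutoff_hz=2000.0, ratio=2.0):
    """Map an input frequency to its frequency-compressed output.

    Below the cutoff the frequency passes unchanged; above it, the
    log-frequency distance from the cutoff is divided by the
    compression ratio. Parameter defaults are illustrative only.
    """
    if f_in_hz <= cutoff_hz:
        return f_in_hz
    return cutoff_hz * (f_in_hz / cutoff_hz) ** (1.0 / ratio)

# With a 2 kHz cutoff and a 2:1 ratio, an 8 kHz component lands at
# 2000 * (8000 / 2000) ** 0.5 = 4000 Hz.
```

    Under this formulation the cutoff sets where spectral remapping begins, which is consistent with the finding that the cutoff frequency dominates perceived quality.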

  12. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    PubMed

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  13. History of animal bioacoustics

    NASA Astrophysics Data System (ADS)

    Popper, Arthur N.; Dooling, Robert J.

    2002-11-01

    The earliest studies on animal bioacoustics dealt largely with descriptions of sounds. Only later did they address issues of detection, discrimination, and categorization of complex communication sounds. This literature grew substantially over the last century. Using the Journal of the Acoustical Society of America as an example, the number of papers that fall broadly within the realm of animal sound production, communication, and hearing rose from two in the partial first decade of the journal in the 1930's, to 20 in the 1970's, to 92 in the first 2 years of this millennium. During this time there has been a great increase in the diversity of species studied, the sophistication of the methods used, and the complexity of the questions addressed. As an example, the first papers in JASA focused on a guinea pig and a bird. In contrast, since the year 2000 studies are often highly comparative and include fish, birds, dolphins, dogs, ants, crickets, and snapping shrimp. This paper on the history of animal bioacoustics will consider trends in work over the decades and discuss the formative work of a number of investigators who have spurred the field by making critical theoretical and experimental observations.

  14. Influences on infant speech processing: toward a new synthesis.

    PubMed

    Werker, J F; Tees, R C

    1999-01-01

    To comprehend and produce language, we must be able to recognize the sound patterns of our language and the rules for how these sounds "map on" to meaning. Human infants are born with a remarkable array of perceptual sensitivities that allow them to detect the basic properties that are common to the world's languages. During the first year of life, these sensitivities undergo modification reflecting an exquisite tuning to just that phonological information that is needed to map sound to meaning in the native language. We review this transition from language-general to language-specific perceptual sensitivity that occurs during the first year of life and consider whether the changes propel the child into word learning. To account for the broad-based initial sensitivities and subsequent reorganizations, we offer an integrated transactional framework based on the notion of a specialized perceptual-motor system that has evolved to serve human speech, but which functions in concert with other developing abilities. In so doing, we highlight the links between infant speech perception, babbling, and word learning.

  15. Observation of the solar eclipse of 20 March 2015 at the Pruhonice station

    NASA Astrophysics Data System (ADS)

    Mošna, Zbyšek; Boška, Josef; Knížová, Petra Koucká; Šindelářová, Tereza; Kouba, Daniel; Chum, Jaroslav; Rejfek, Luboš; Potužníková, Kateřina; Arikan, Feza; Toker, Cenk

    2018-06-01

    The response of the atmosphere to the solar eclipse of 20 March 2015 is described for a mid-latitude region of the Czech Republic. For the first time, we show a joint analysis using Digisonde vertical sounding, manually processed Digisonde drift measurements, and Continuous Doppler Sounding for a solar eclipse study. The critical frequencies foE, foF1, and foF2 show changes with different time offsets connected to the solar eclipse. The Digisonde drift measurements show significant vertical plasma drifts in the F2 region deviating from the daily mean course, with amplitudes reaching 15-20 m/s, corresponding to the time of the solar eclipse. Continuous Doppler Sounding shows propagation of waves in the NE direction with velocities between 70 and 100 m/s, with a peak 30 min after first contact. We observed increased and persistent wave activity at heights between 150 and 250 km about 20-40 min after the beginning of the solar eclipse, with a central period of 65 min.

  16. Earth Observing System (EOS) Advanced Microwave Sounding Unit-A (AMSU-A): Instrumentation interface control document

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This Interface Control Document (ICD) defines the specific details of the complete accommodation information between the Earth Observing System (EOS) PM Spacecraft and the Advanced Microwave Sounding Unit (AMSU-A) Instrument. This is the first submittal of the ICD; it will be updated periodically throughout the life of the program. The next update is planned prior to the Critical Design Review (CDR).

  17. Fluid Flow and Sound Generation at Hydrothermal Vent Fields

    DTIC Science & Technology

    1988-04-01

    Pacific Rise The first evidence of vent sound generation came from data collected near hydrothermal vents at 21° N on the EPR where an array of ocean...associated with hydrothermal centers, one at 21° N on the East Pacific Rise (EPR) (Reidesel et al., 1982) and one on the Juan de Fuca Ridge (Bibee and Jacobson... East Pacific Rise at 21° N: the volcanic, tectonic and hydrothermal processes at

  18. The Hmong Language: Sounds and Alphabets. General Information Series, No. 14. Indochinese Refugee Education Guides.

    ERIC Educational Resources Information Center

    Center for Applied Linguistics, Arlington, VA.

    The purpose of this guide is to provide Americans working with the Hmongs with: (1) some practical information on the Hmongs, their origins and language; (2) a detailed description of the sounds of the Hmong language; and (3) a discussion on Hmong as an unwritten language. This is the first of three guides to be published on the Hmongs, a people…

  19. Issues and Strategies for Improving Constructibility.

    DTIC Science & Technology

    1988-09-01

    materials. First, the roof design called for the use of an asphalt coated roof felt layer below an EPDM membrane. The asphalt coated felt is not needed when a...being prepared by people trained in subjects foreign to construction. As designers, we were in fact contractually and professionally isolated from...specially constructed for sound isolation. The architect correctly specified special sound seals around the doors between the rooms in this area, but

  20. Imitation of novel conspecific and human speech sounds in the killer whale (Orcinus orca).

    PubMed

    Abramson, José Z; Hernández-Lloreda, Mª Victoria; García, Lino; Colmenares, Fernando; Aboitiz, Francisco; Call, Josep

    2018-01-31

    Vocal imitation is a hallmark of human spoken language, which, along with other advanced cognitive skills, has fuelled the evolution of human culture. Comparative evidence has revealed that although the ability to copy sounds from conspecifics is mostly uniquely human among primates, a few distantly related taxa of birds and mammals have also independently evolved this capacity. Remarkably, field observations of killer whales have documented the existence of group-differentiated vocal dialects that are often referred to as traditions or cultures and are hypothesized to be acquired non-genetically. Here we use a do-as-I-do paradigm to study the abilities of a killer whale to imitate novel sounds uttered by conspecific (vocal imitative learning) and human models (vocal mimicry). We found that the subject made recognizable copies of all familiar and novel conspecific and human sounds tested and did so relatively quickly (most during the first 10 trials and three in the first attempt). Our results lend support to the hypothesis that the vocal variants observed in natural populations of this species can be socially learned by imitation. The capacity for vocal imitation shown in this study may scaffold the natural vocal traditions of killer whales in the wild. © 2018 The Author(s).

  1. Design and qualification of an UHV system for operation on sounding rockets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grosse, Jens, E-mail: jens.grosse@dlr.de; Braxmaier, Claus; Seidel, Stephan Tobias

    The sounding rocket mission MAIUS-1 has the objective to create the first Bose–Einstein condensate in space; therefore, its scientific payload is a complete cold atom experiment built to be launched on a VSB-30 sounding rocket. An essential part of the setup is an ultrahigh vacuum system needed in order to sufficiently suppress interactions of the cooled atoms with the residual background gas. Contrary to vacuum systems on missions aboard satellites or the International Space Station, the required vacuum environment has to be reached within 47 s after motor burn-out. This paper contains a detailed description of the MAIUS-1 vacuum system, as well as a description of its qualification process for operation under vibrational loads of up to 8.1 g_RMS (where RMS is root mean square). Even though a pressure rise dependent on the level of vibration was observed, the design presented herein is capable of regaining a pressure of below 5 × 10^-10 mbar in less than 40 s when tested at 5.4 g_RMS. To the authors' best knowledge, it is the first UHV system qualified for operation on a sounding rocket.

  2. 23. Interior view of SE corner of first floor of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    23. Interior view of SE corner of first floor of 1896 south section of building, showing windows and column. Camera pointed SE. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA

  3. Food words distract the hungry: Evidence of involuntary semantic processing of task-irrelevant but biologically-relevant unexpected auditory words.

    PubMed

    Parmentier, Fabrice B R; Pacheco-Unguetti, Antonia P; Valero, Sara

    2018-01-01

    Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task by triggering shifts of attention to and from the deviant sound (deviance distraction). Evidence indicates that the involuntary orientation of attention to unexpected sounds is followed by their semantic processing. However, past demonstrations relied on tasks in which the meaning of the deviant sounds overlapped with features of the primary task. Here we examine whether such processing is observed when no such overlap is present but sounds carry some relevance to the participants' biological need to eat when hungry. We report the results of an experiment in which hungry and satiated participants partook in a cross-modal oddball task in which they categorized visual digits (odd/even) while ignoring task-irrelevant sounds. On most trials the irrelevant sound was a sinewave tone (standard sound). On the remaining trials, deviant sounds consisted of spoken words related to food (food deviants) or control words (control deviants). Questionnaire data confirmed state (but not trait) differences between the two groups with respect to food craving, as well as a greater desire to eat the food corresponding to the food-related words in the hungry relative to the satiated participants. The results of the oddball task revealed that food deviants produced greater distraction (longer response times) than control deviants in hungry participants while the reverse effect was observed in satiated participants. This effect was observed in the first block of trials but disappeared thereafter, reflecting semantic saturation. Our results suggest that (1) the semantic content of deviant sounds is involuntarily processed even when sharing no feature with the primary task; and that (2) distraction by deviant sounds can be modulated by the participants' biological needs.

  5. Effects of Temperature on Sound Production and Auditory Abilities in the Striped Raphael Catfish Platydoras armatulus (Family Doradidae)

    PubMed Central

    Papes, Sandra; Ladich, Friedrich

    2011-01-01

    Background: Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus. Methodology/Principal Findings: Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds by an elastic spring mechanism which vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°C, then to 30°C, and again to 22°C. Sounds were recorded in distress situations when fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period, and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C, and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, hearing thresholds were tested at six frequencies from 0.1 to 4 kHz. Temporal resolution was determined by analyzing the minimum resolvable click period (0.3-5 ms). Hearing sensitivity was higher at the higher temperature, and differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double clicks did not change. Conclusions/Significance: These data indicate that sound characteristics as well as hearing abilities are affected by temperature in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated for even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in the neotropical catfish P. armatulus. PMID:22022618

  6. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus)

    PubMed Central

    Flaherty, Mary; Dent, Micheal L.; Sawusch, James R.

    2017-01-01

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with “d” or “t” and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal. PMID:28562597

  8. Sound production by dusky grouper Epinephelus marginatus at spawning aggregation sites.

    PubMed

    Bertucci, F; Lejeune, P; Payrot, J; Parmentier, E

    2015-08-01

    Sound production by the dusky grouper Epinephelus marginatus was monitored both in captivity and at two Mediterranean spawning sites during the summers of 2012 and 2013. The results of long-term passive acoustic recordings provide for the first time a description of the sounds produced by E. marginatus. Two types of sounds were mainly recorded and consisted of low-frequency booms that can be produced singly or in series with dominant frequencies below 100 Hz. Recordings in captivity validated these sounds as belonging to E. marginatus and suggested that they may be associated with reproductive displays usually performed during early stages of courtship behaviour. This study also allowed the identification of a third, low-frequency growl-like type of sound typically found in other grouper species. These growls were, however, not recorded in tanks and it is cautiously proposed that they are produced by E. marginatus. Acoustic signals attributed to E. marginatus were produced throughout the spawning season, with a diel pattern showing an increase before dusk, i.e., from 1900 to 2200 hours, before decreasing until the morning. The occurrence of sounds during the spawning season of this species suggests that they are probably involved in social activity occurring close to aggregation sites. Passive acoustics offer a helpful tool to monitor aggregation sites of this emblematic species in order to improve conservation efforts. © 2015 The Fisheries Society of the British Isles.

  9. Basilar membrane vibration is not involved in the reverse propagation of otoacoustic emissions

    PubMed Central

    He, W.; Ren, T.

    2013-01-01

    To understand how the inner ear-generated sound, i.e., otoacoustic emission, exits the cochlea, we created a sound source electrically in the second turn and measured basilar membrane vibrations at two longitudinal locations in the first turn in living gerbil cochleae using a laser interferometer. For a given longitudinal location, electrically evoked basilar membrane vibrations showed the same tuning and phase lag as those induced by sounds. For a given frequency, the phase measured at a basal location led that at a more apical location, indicating that either an electrical or an acoustical stimulus evoked a forward travelling wave. Under postmortem conditions, the electrically evoked emissions showed no significant change while the basilar membrane vibration nearly disappeared. The current data indicate that basilar membrane vibration was not involved in the backward propagation of otoacoustic emissions and that sounds exit the cochlea probably through alternative media, such as cochlear fluids. PMID:23695199

  10. Unsupervised Feature Learning for Heart Sounds Classification Using Autoencoder

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Lv, Jiancheng; Liu, Dongbo; Chen, Yao

    2018-04-01

    Cardiovascular disease seriously threatens the health of many people. It is usually diagnosed during cardiac auscultation, which is a fast and efficient method of cardiovascular disease diagnosis. In recent years, deep learning approaches based on unsupervised learning have made significant breakthroughs in many fields. However, to our knowledge, deep learning has not yet been used for heart sound classification. In this paper, we first use the average Shannon energy to extract the envelope of the heart sounds, then find the highest point of S1 to extract the cardiac cycle. We convert the time-domain signals of the cardiac cycle into spectrograms and apply principal component analysis (PCA) whitening to reduce the dimensionality of the spectrograms. Finally, we apply a two-layer autoencoder to extract features from the spectrograms. The experimental results demonstrate that the features from the autoencoder are suitable for heart sound classification.
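    The first preprocessing step described above, the average Shannon energy envelope, can be sketched as follows. This is an illustrative implementation only; the window and hop lengths are assumptions, not values reported in the paper.

```python
import numpy as np

def shannon_envelope(x, fs, win_ms=20.0, hop_ms=10.0):
    """Average Shannon energy envelope of a heart-sound signal.

    Each window contributes E = -(1/N) * sum(s^2 * log(s^2)) for the
    amplitude-normalized samples s; the envelope is then standardized.
    """
    x = np.asarray(x, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)        # normalize to [-1, 1]
    win = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    env = []
    for start in range(0, len(x) - win + 1, hop):
        p = x[start:start + win] ** 2
        # zero samples contribute 0 (the limit of s^2 * log(s^2) as s -> 0)
        env.append(-np.mean(np.where(p > 0, p * np.log(p + 1e-20), 0.0)))
    env = np.asarray(env)
    return (env - env.mean()) / (env.std() + 1e-12)  # standardized envelope
```

    Peaks of this envelope mark candidate S1/S2 events; a segmentation along the paper's lines would then anchor the cardiac cycle at the highest S1 peak.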

  11. Good vibrations: Controlling light with sound (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Eggleton, Benjamin J.; Choudhary, Amol

    2016-10-01

    One of the surprises of nonlinear optics is that light may interact strongly with sound. Intense laser light literally "shakes" the glass in optical fibres, exciting acoustic waves (sound) in the fibre. Under the right conditions, this leads to a positive feedback loop between light and sound termed "Stimulated Brillouin Scattering," or simply SBS. This nonlinear interaction can amplify or filter light waves with extreme precision in frequency, which makes it uniquely suited to solving key problems in the fields of defence, biomedicine, wireless communications, spectroscopy and imaging. We have achieved the first demonstration of SBS in compact chip-scale structures, carefully designed so that the optical and acoustic fields are simultaneously confined and guided. This new platform has opened up a range of new functionalities that are being applied in communications and defence with breathtaking performance and compactness. My talk will introduce this new field and review our progress and achievements, including a silicon-based optical phononic processor.

  12. Slip-stick excitation and travelling waves excite silo honking

    NASA Astrophysics Data System (ADS)

    Warburton, Katarzyna; Porte, Elze; Vriend, Nathalie

    2017-06-01

    Silo honking is the harmonic sound generated by the discharge of a silo filled with a granular material. In industrial storage silos, the acoustic emission during discharge of PET-particles forms a nuisance for the environment and may ultimately result in structural failure. This work investigates the phenomenon experimentally using a laboratory-scale silo, and successfully correlates the frequency of the emitted sound with the periodicity of the mechanical motion of the grains. The key driver is the slip-stick interaction between the wall and the particles, characterized as a wave moving upwards through the silo. A quantitative correlation is established for the first time between the frequency of the sound, measured with an electret microphone, and the slip-frequency, measured with a high-speed camera. In the lower regions of the tube, both the slip-stick motion and the honking sound disappear.

  13. Active/Passive Control of Sound Radiation from Panels using Constrained Layer Damping

    NASA Technical Reports Server (NTRS)

    Gibbs, Gary P.; Cabell, Randolph H.

    2003-01-01

    A hybrid passive/active noise control system utilizing constrained layer damping and model predictive feedback control is presented. This system is used to control the sound radiation of panels due to broadband disturbances. To facilitate the hybrid system design, a methodology for placement of constrained layer damping which targets selected modes based on their relative radiated sound power is developed. The placement methodology is utilized to determine two constrained layer damping configurations for experimental evaluation of a hybrid system. The first configuration targets the (4,1) panel mode which is not controllable by the piezoelectric control actuator, and the (2,3) and (5,2) panel modes. The second configuration targets the (1,1) and (3,1) modes. The experimental results demonstrate the improved reduction of radiated sound power using the hybrid passive/active control system as compared to the active control system alone.

  14. A lightweight low-frequency sound insulation membrane-type acoustic metamaterial

    NASA Astrophysics Data System (ADS)

    Lu, Kuan; Wu, Jiu Hui; Guan, Dong; Gao, Nansha; Jing, Li

    2016-02-01

    A novel membrane-type acoustic metamaterial with a high sound transmission loss (STL) at low frequencies (≤ 500 Hz) was designed, and its mechanisms were investigated using negative-mass-density theory. The metamaterial has a sandwich-like structure: a thin (0.25 mm), lightweight, flexible rubber membrane between two layers of honeycomb cell plates. Negative mass density was demonstrated at frequencies below the first natural frequency, which results in the excellent low-frequency sound insulation. The effects of different structural parameters of the membrane on the low-frequency sound insulation performance were investigated using the finite element method (FEM). The numerical results show that the STL can be tuned to higher values by changing structural parameters such as the membrane surface density, the unit cell film shape, and the membrane tension. The acoustic metamaterial proposed in this study could find application in low-frequency noise insulation.
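    The link between a negative effective mass density below the first natural frequency and a high low-frequency STL can be illustrated with a textbook normal-incidence mass-law estimate. This is a deliberate simplification of the paper's FEM analysis: the single-resonance effective-density form and all parameter values below are assumptions for illustration only.

```python
import math

def effective_surface_density(f, m0, f1):
    """Illustrative dynamic surface density of a tensioned-membrane unit
    cell: m_eff = m0 * (1 - (f1/f)^2), which is negative below the first
    natural frequency f1 (a single-resonance toy model, not the paper's FEM)."""
    return m0 * (1.0 - (f1 / f) ** 2)

def stl_mass_law(f, m_surface, rho0=1.21, c0=343.0):
    """Normal-incidence mass-law transmission loss (dB) in air for a wall
    whose surface density has magnitude |m_surface|."""
    w = 2.0 * math.pi * f
    r = w * abs(m_surface) / (2.0 * rho0 * c0)
    return 10.0 * math.log10(1.0 + r * r)

# Well below resonance the magnitude of the (negative) effective density
# grows, so the estimated STL at 100 Hz far exceeds that just under f1:
tl_low = stl_mass_law(100.0, effective_surface_density(100.0, 1.0, 300.0))
tl_near = stl_mass_law(290.0, effective_surface_density(290.0, 1.0, 300.0))
```

    In this toy model the STL collapses near the resonance and rises steeply below it, qualitatively matching the negative-density insulation behaviour the abstract describes.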

  15. New Research on MEMS Acoustic Vector Sensors Used in Pipeline Ground Markers

    PubMed Central

    Song, Xiaopeng; Jian, Zeming; Zhang, Guojun; Liu, Mengran; Guo, Nan; Zhang, Wendong

    2015-01-01

    According to the demands of current pipeline detection systems, the above-ground marker (AGM) system based on the sound detection principle has become a major development trend in pipeline technology. A novel MEMS acoustic vector sensor for AGM systems is put forward, with the advantages of high sensitivity, high signal-to-noise ratio (SNR), and good low-frequency performance. Firstly, it is shown that the energy of the detected sound signal is concentrated in a low frequency range, where sound attenuation in soil is relatively low. Secondly, the structure and basic principles of the MEMS acoustic vector sensor are introduced. Finally, experimental tests show that, in the range of 0°–90° and at a distance of r = 5 m, the proposed MEMS acoustic vector sensor can effectively detect sound signals in soil. The measurement errors at all angles are less than 5°. PMID:25609046

  16. Amplification, attenuation, and dispersion of sound in inhomogeneous flows

    NASA Technical Reports Server (NTRS)

    Kentzer, C. P.

    1975-01-01

    First-order effects of gradients in nonuniform potential flows of a compressible gas are included in a dispersion relation for sound waves. Three nondimensional numbers play a role in separating the effects of flow gradients into isotropic and anisotropic parts: the ratio of the change in kinetic energy over one wavelength to the thermal energy of the gas, the ratio of the change in total energy over one wavelength to the thermal energy, and the ratio of the dilatation frequency (the rate of expansion per unit volume) to the acoustic frequency. Dispersion and attenuation (or amplification) of sound are found to be proportional to the wavelength for small wavelengths and to depend on the direction of wave propagation relative to the flow gradients. A modification of ray acoustics to account for flow gradients is suggested, and conditions for amplification and attenuation of sound are discussed.

  17. Influence of current input-output and age of first exposure on phonological acquisition in early bilingual Spanish-English-speaking kindergarteners.

    PubMed

    Ruiz-Felter, Roxanna; Cooperson, Solaman J; Bedore, Lisa M; Peña, Elizabeth D

    2016-07-01

    Although some investigations of phonological development have found that segmental accuracy is comparable in monolingual children and their bilingual peers, there is evidence that language use affects segmental accuracy in both languages. This study investigated the influence of age of first exposure to English and the amount of current input-output on phonological accuracy in English and Spanish in early bilingual Spanish-English kindergarteners, and asked whether parent and teacher ratings of the children's intelligibility are correlated with phonological accuracy and with the amount of experience with each language. Data for 91 kindergarteners (mean age = 5;6 years) were selected from a larger dataset focusing on Spanish-English bilingual language development. All children were from Central Texas, spoke a Mexican Spanish dialect and were learning American English. Children completed a single-word phonological assessment with separate forms for English and Spanish. The assessment was analyzed for segmental accuracy: the percentage of consonants and vowels correct and the percentage of early-, middle- and late-developing (EML) sounds correct were calculated. Children were more accurate on vowel production than consonant production and showed a decrease in accuracy from early to middle to late sounds. The amount of current input-output explained more of the variance in phonological accuracy than age of first English exposure. Although greater current input-output of a language was associated with greater accuracy in that language, English-dominant children were only significantly more accurate in English than Spanish on late sounds, whereas Spanish-dominant children were only significantly more accurate in Spanish than English on early sounds. Higher parent and teacher ratings of intelligibility in Spanish were correlated with greater consonant accuracy in Spanish, but the same did not hold for English. Higher intelligibility ratings in English were correlated with greater current English input-output, and the same held for Spanish. Current input-output appears to be a better predictor of phonological accuracy than age of first English exposure for early bilinguals, consistent with findings on the effect of language experience on performance in other language domains in bilingual children. Although greater current input-output in a language predicts higher accuracy in that language, this interacts with sound complexity. The results highlight the utility of the EML classification in assessing bilingual children's phonology. The relationships of intelligibility ratings with current input-output and sound accuracy can shed light on the process of referral of bilingual children for speech and language services. © 2016 Royal College of Speech and Language Therapists.
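    The segmental accuracy measures used in this study reduce to simple percent-correct arithmetic over a scored probe. A minimal sketch follows; the EML consonant sets below are illustrative placeholders, not the study's actual classification.

```python
# Hypothetical early/middle/late (EML) consonant sets, for illustration only;
# the study uses an established developmental classification.
EARLY = {"m", "b", "n", "w", "p"}
MIDDLE = {"t", "k", "g", "f", "d"}
LATE = {"s", "z", "l", "r", "ch"}

def accuracy_summary(scored):
    """scored: list of (target_consonant, produced_correctly) pairs from a
    single-word probe. Returns percent correct overall and per EML class."""
    def pct(pairs):
        pairs = list(pairs)
        if not pairs:
            return None  # no targets of this class in the probe
        return 100.0 * sum(ok for _, ok in pairs) / len(pairs)
    return {
        "PCC": pct(scored),
        "early": pct(p for p in scored if p[0] in EARLY),
        "middle": pct(p for p in scored if p[0] in MIDDLE),
        "late": pct(p for p in scored if p[0] in LATE),
    }
```

    The same computation applied separately to the English and Spanish probe forms yields the per-language accuracy scores the study correlates with input-output and intelligibility ratings.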

  18. Xinyinqin: a computer-based heart sound simulator.

    PubMed

    Zhan, X X; Pei, J H; Xiao, Y H

    1995-01-01

    "Xinyinqin" is the Chinese phoneticized name of the Heart Sound Simulator (HSS). The "qin" in "Xinyinqin" is the Chinese name of a category of musical instruments, which means that the operation of HSS is very convenient--like playing an electric piano with the keys. HSS is connected to the GAME I/O of an Apple microcomputer. The generation of sound is controlled by a program. Xinyinqin is used as a teaching aid of Diagnostics. It has been applied in teaching for three years. In this demonstration we will introduce the following functions of HSS: 1) The main program has two modules. The first one is the heart auscultation training module. HSS can output a heart sound selected by the student. Another program module is used to test the student's learning condition. The computer can randomly simulate a certain heart sound and ask the student to name it. The computer gives the student's answer an assessment: "correct" or "incorrect." When the answer is incorrect, the computer will output that heart sound again for the student to listen to; this process is repeated until she correctly identifies it. 2) The program is convenient to use and easy to control. By pressing the S key, it is able to output a slow heart rate until the student can clearly identify the rhythm. The heart rate, like the actual rate of a patient, can then be restored by hitting any key. By pressing the SPACE BAR, the heart sound output can be stopped to allow the teacher to explain something to the student. The teacher can resume playing the heart sound again by hitting any key; she can also change the content of the training by hitting RETURN key. In the future, we plan to simulate more heart sounds and incorporate relevant graphs.

  19. Metagenomic profiling of microbial composition and antibiotic resistance determinants in Puget Sound.

    PubMed

    Port, Jesse A; Wallace, James C; Griffith, William C; Faustman, Elaine M

    2012-01-01

    Human-health-relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single-species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluating these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface-water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected and pyrosequenced from six sites in Puget Sound and from one wastewater treatment plant (WWTP) that discharges into the Sound. A total of ~550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes, while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to the marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next-generation sequencing technologies. In addition, these genomic signals of potential human impact can be used to guide initial public health monitoring as well as more targeted and functionally based investigations.

  20. A numerical study of fundamental shock noise mechanisms. Ph.D. Thesis - Cornell Univ.

    NASA Technical Reports Server (NTRS)

    Meadows, Kristine R.

    1995-01-01

    The results of this thesis demonstrate that direct numerical simulation can predict sound generation in unsteady aerodynamic flows containing shock waves. Shock waves can be significant sources of sound in high-speed jet flows, on helicopter blades, and in supersonic combustion inlets. Direct computation of sound permits the prediction of noise levels in the preliminary design stage and can be used as a tool to focus experimental studies, thereby reducing cost and increasing the probability of delivering a successful, quiet product in less time. This thesis reveals and investigates two mechanisms fundamental to sound generation by shocked flows: shock motion and shock deformation. Shock motion is modeled by the interaction of a sound wave with a shock. During the interaction, the shock wave begins to move, and the sound pressure is amplified as the wave passes through the shock. The numerical approach presented in this thesis is validated by comparing results obtained in a quasi-one-dimensional simulation with linear theory. Analysis of the perturbation energy demonstrated for the first time that acoustic energy is generated by the interaction. Shock deformation is investigated by the numerical simulation of a ring vortex interacting with a shock. This interaction models the passage of turbulent structures through the shock wave. The simulation demonstrates that both acoustic waves and contact surfaces are generated downstream during the interaction. Analysis demonstrates that the acoustic wave spreads cylindrically, that the sound intensity is highly directional, and that the sound pressure level increases significantly with increasing shock strength. The effect of shock strength on sound pressure level is consistent with experimental observations of shock noise, indicating that the interaction of a ring vortex with a shock wave correctly models a dominant mechanism of shock noise generation.

  1. Perception of Water-Based Masking Sounds-Long-Term Experiment in an Open-Plan Office.

    PubMed

    Hongisto, Valtteri; Varjo, Johanna; Oliva, David; Haapakangas, Annu; Benway, Evan

    2017-01-01

    A certain level of masking sound is necessary to control the disturbance caused by speech sounds in open-plan offices. The sound is usually provided with evenly distributed loudspeakers. Pseudo-random noise is often used as a source of artificial sound masking (PRMS). A recent laboratory experiment suggested that water-based masking sound (WBMS) could be more favorable than PRMS. The purpose of our study was to determine how the employees perceived different WBMSs compared to PRMS. The experiment was conducted in an open-plan office of 77 employees who had been accustomed to work under PRMS (44 dB LAeq). The experiment consisted of five masking conditions: the original PRMS, four different WBMSs and return to the original PRMS. The exposure time of each condition was 3 weeks. The noise level was nearly equal between the conditions (43–45 dB LAeq) but the spectra and the nature of the sounds were very different. A questionnaire was completed at the end of each condition. Acoustic satisfaction was worse during the WBMSs than during the PRMS. The disturbance caused by three out of four WBMSs was larger than that of PRMS. Several attributes describing the sound quality itself were in favor of PRMS. Colleagues' speech sounds disturbed more during WBMSs. None of the WBMSs produced better subjective ratings than PRMS. Although the first WBMS was equal with the PRMS for several variables, the overall results cannot be seen to support the use of WBMSs in office workplaces. Because the experiment suffered from some methodological weaknesses, conclusions about the adequacy of WBMSs cannot yet be drawn.

  2. Action sounds update the mental representation of arm dimension: contributions of kinaesthesia and agency

    PubMed Central

    Tajadura-Jiménez, Ana; Tsakiris, Manos; Marquardt, Torsten; Bianchi-Berthouze, Nadia

    2015-01-01

    Auditory feedback accompanies almost all our actions, but its contribution to body-representation is understudied. Recently it has been shown that the auditory distance of action sounds recalibrates perceived tactile distances on one’s arm, suggesting that action sounds can change the mental representation of arm length. However, the question remains open of what factors play a role in this recalibration. In this study we investigate two of these factors, kinaesthesia, and sense of agency. Across two experiments, we asked participants to tap with their arm on a surface while extending their arm. We manipulated the tapping sounds to originate at double the distance to the tapping locations, as well as their synchrony to the action, which is known to affect feelings of agency over the sounds. Kinaesthetic cues were manipulated by having additional conditions in which participants did not displace their arm but kept tapping either close (Experiment 1) or far (Experiment 2) from their body torso. Results show that both the feelings of agency over the action sounds and kinaesthetic cues signaling arm displacement when displacement of the sound source occurs are necessary to observe changes in perceived tactile distance on the arm. In particular, these cues resulted in the perceived tactile distances on the arm being felt smaller, as compared to distances on a reference location. Moreover, our results provide the first evidence of consciously perceived changes in arm-representation evoked by action sounds and suggest that the observed changes in perceived tactile distance relate to experienced arm elongation. We discuss the observed effects in the context of forward internal models of sensorimotor integration. Our results add to these models by showing that predictions related to action sounds must fit with kinaesthetic cues in order for auditory inputs to change body-representation. PMID:26074843

  3. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
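    The recursive Extended Kalman Filter idea in the fourth experiment can be sketched for the simplest case: a static source azimuth estimated from interaural time differences (ITDs) gathered as the chair rotates. The measurement model, noise levels and head dimensions below are assumptions for illustration, not the thesis's actual formulation.

```python
import math, random

def ekf_source_azimuth(itds, head_angles, d=0.18, c=343.0,
                       theta0=0.0, p0=1.0, q=1e-8, r_var=(20e-6) ** 2):
    """Recursive EKF estimate of a static source azimuth (rad, world frame)
    from ITDs measured at known head orientations.

    Assumed measurement model (free-field, spherical-head-free):
        itd = (d / c) * sin(theta - head_angle)
    """
    theta, P = theta0, p0
    for z, phi in zip(itds, head_angles):
        P += q                                   # predict: source is static
        h = (d / c) * math.sin(theta - phi)      # predicted ITD
        H = (d / c) * math.cos(theta - phi)      # Jacobian dh/dtheta
        S = H * P * H + r_var                    # innovation covariance
        K = P * H / S                            # Kalman gain
        theta += K * (z - h)                     # state update
        P *= (1.0 - K * H)                       # covariance update
    return theta

# Simulated rotating-chair run: the head sweeps while a source sits at
# 0.8 rad; ITDs are corrupted with 20-microsecond Gaussian noise.
random.seed(1)
true_theta, d, c = 0.8, 0.18, 343.0
phis = [i * 0.05 for i in range(120)]
itds = [(d / c) * math.sin(true_theta - p) + random.gauss(0.0, 20e-6)
        for p in phis]
estimate = ekf_source_azimuth(itds, phis)
```

    Because the head angle varies across measurements, the filter accumulates information from many geometries, which is what makes the recursive formulation attractive for a moving listener or robot.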

  4. Dual-task interference effects on cross-modal numerical order and sound intensity judgments: the more the louder?

    PubMed

    Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C

    2017-09-01

    In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.

  5. Influence of Gestational Age and Postnatal Age on Speech Sound Processing in NICU infants

    PubMed Central

    Key, Alexandra P.F.; Lambert, E. Warren; Aschner, Judy L.; Maitre, Nathalie L.

    2012-01-01

    The study examined the effects of gestational age (GA) and postnatal age (PNA) on speech sound perception in infants. Auditory ERPs were recorded in response to speech sounds (CV syllables) in 50 infant NICU patients (born at 24–40 weeks gestation) prior to discharge. Efficiency of speech perception was quantified as the absolute difference in mean ERP amplitudes in response to vowel (/a/–/u/) and consonant (/b/–/g/, /d/–/g/) contrasts within 150–250, 250–400 and 400–700 ms after stimulus onset. Results indicated that both GA and PNA affected speech sound processing. These effects were more pronounced for consonant than for vowel contrasts. Increasing PNA was associated with greater sound discrimination in infants born at or after 30 weeks GA, while minimal PNA-related changes were observed for infants with GA less than 30 weeks. Our findings suggest that a certain level of brain maturity at birth is necessary to benefit from postnatal experience in the first 4 months of life, and that both gestational and postnatal ages need to be considered when evaluating infant brain responses. PMID:22332725
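    The discrimination measure described above, the absolute difference of mean ERP amplitudes per time window, is simple windowed arithmetic. A minimal sketch, assuming epochs sampled from stimulus onset at a known rate (the sampling rate is an assumption, not a value from the paper):

```python
import numpy as np

def erp_contrast(erp_a, erp_b, fs,
                 windows=((150, 250), (250, 400), (400, 700))):
    """Absolute difference of mean ERP amplitudes between two stimulus
    conditions within post-stimulus windows given in ms.

    erp_a, erp_b: averaged ERP waveforms (epochs start at 0 ms).
    fs: sampling rate in Hz.
    """
    erp_a = np.asarray(erp_a, dtype=float)
    erp_b = np.asarray(erp_b, dtype=float)
    out = []
    for t0, t1 in windows:
        i0, i1 = int(t0 * fs / 1000), int(t1 * fs / 1000)
        out.append(abs(erp_a[i0:i1].mean() - erp_b[i0:i1].mean()))
    return out
```

    A larger value in a window indicates a stronger differential response to the contrast, i.e. better discrimination of the two speech sounds in that latency range.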

  6. Sound field simulation and acoustic animation in urban squares

    NASA Astrophysics Data System (ADS)

    Kang, Jian; Meng, Yan

    2005-04-01

    Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for their soundscape design and improvement, acoustic animation tools would be of great importance for designers as well as for the public participation process, given that below a certain sound level the soundscape evaluation depends mainly on the type of sounds rather than on their loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulations. It then presents an acoustic animation tool currently under development. Urban squares contain multiple dynamic sound sources, so computation time becomes a main concern. Nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.]

  7. On the improvement for charging large-scale flexible electrostatic actuators

    NASA Astrophysics Data System (ADS)

    Liao, Hsu-Ching; Chen, Han-Long; Su, Yu-Hao; Chen, Yu-Chi; Ko, Wen-Ching; Liou, Chang-Ho; Wu, Wen-Jong; Lee, Chih-Kung

    2011-04-01

    Recently, the development of flexible electret-based electrostatic actuators has been widely discussed. The device was shown to offer high sound quality and energy savings, and its flexible structure can be cut to any shape. However, achieving a uniform charge on the electret diaphragm is one of the most critical processes needed to make the speaker ready for large-scale production. In this paper, corona discharge equipment containing multiple corona probes and a grid bias was set up to inject space charge into the electret diaphragm. The multi-corona-probe system was tuned to achieve a uniform charge distribution on the electret diaphragm; the processing conditions included the distance between the corona probes and the voltages of the corona probes and grid bias. We first assembled the flexible electret loudspeakers and then measured their sound pressure and beam patterns. The uniform charge distribution within the electret diaphragm gave us the opportunity to shape the loudspeaker arbitrarily and to tailor the sound distribution to specification. Potential futuristic applications of this device, such as sound posters, smart clothes and sound wallpaper, are discussed as well.

  8. Acoustic design by topology optimization

    NASA Astrophysics Data System (ADS)

    Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole

    2008-11-01

    Bringing down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling, or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to the design of outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and of almost 30 dB when using two barriers, is achieved compared with conventional sound barriers.
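    The core idea, distributing material over a design domain to minimize the squared sound pressure somewhere, can be illustrated in a drastically reduced 1D setting. The sketch below uses acoustic transfer matrices and exhaustive search over binary layouts as a toy stand-in for the paper's gradient-based optimization with continuous interpolation; the material properties and layer thickness are assumptions.

```python
import cmath, itertools

RHO_AIR, C_AIR = 1.21, 343.0
RHO_MAT, C_MAT = 1200.0, 2700.0      # assumed solid-like barrier material
Z_AIR = RHO_AIR * C_AIR

def layer_matrix(f, rho, c, thickness):
    """Pressure/velocity transfer matrix of one homogeneous acoustic layer."""
    k, z = 2.0 * cmath.pi * f / c, rho * c
    kl = k * thickness
    return ((cmath.cos(kl), 1j * z * cmath.sin(kl)),
            (1j * cmath.sin(kl) / z, cmath.cos(kl)))

def transmitted(f, design, thickness=0.02):
    """|t| through a stack of layers; design[i] = 1 means material, 0 air."""
    A, B, C, D = 1.0, 0.0, 0.0, 1.0
    for g in design:
        (a, b), (c_, d) = layer_matrix(f, RHO_MAT if g else RHO_AIR,
                                       C_MAT if g else C_AIR, thickness)
        A, B, C, D = (A * a + B * c_, A * b + B * d,
                      C * a + D * c_, C * b + D * d)
    # transmission coefficient between identical air half-spaces
    t = 2.0 / (A + B / Z_AIR + C * Z_AIR + D)
    return abs(t)

# Exhaustive "optimization" over all 2^6 binary layouts at 500 Hz,
# minimizing transmitted pressure (the 1D analogue of |p|^2 behind a barrier):
best = min(itertools.product((0, 1), repeat=6),
           key=lambda d: transmitted(500.0, d))
```

    In the paper the binary material indicator is relaxed to a continuous design variable interpolating density and bulk modulus, and a gradient-based optimizer replaces the brute-force search, which is what makes 2D and 3D problems tractable.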

  9. Geographic patterns of fishes and jellyfish in Puget Sound surface waters

    USGS Publications Warehouse

    Rice, Casimir A.; Duda, Jeffrey J.; Greene, Correigh M.; Karr, James R.

    2012-01-01

    We explored patterns of small pelagic fish assemblages and biomass of gelatinous zooplankton (jellyfish) in surface waters across four oceanographic subbasins of greater Puget Sound. Our study is the first to collect data documenting biomass of small pelagic fishes and jellyfish throughout Puget Sound; sampling was conducted opportunistically as part of a juvenile salmon survey of daytime monthly surface trawls at 52 sites during May–August 2003. Biomass composition differed spatially and temporally, but spatial differences were more distinct. Fish dominated in the two northern basins of Puget Sound, whereas jellyfish dominated in the two southern basins. Absolute and relative abundance of jellyfish, hatchery Chinook salmon Oncorhynchus tshawytscha, and chum salmon O. keta decreased with increasing latitude, whereas the absolute and relative abundance of most fish species and the average fish species richness increased with latitude. The abiotic factors with the strongest relationship to biomass composition were latitude, water clarity, and sampling date. Further study is needed to understand the spatial and temporal heterogeneity in the taxonomic composition we observed in Puget Sound surface waters, especially as they relate to natural and anthropogenic influences.

  10. Evidence for size-selective mortality after the first summer of ocean growth by pink salmon

    USGS Publications Warehouse

    Moss, J.H.; Beauchamp, D.A.; Cross, A.D.; Myers, K.W.; Farley, Edward V.; Murphy, J.M.; Helle, J.H.

    2005-01-01

    Pink salmon Oncorhynchus gorbuscha with identifiable thermal otolith marks from Prince William Sound hatchery release groups during 2001 were used to test the hypothesis that faster-growing fish during their first summer in the ocean had higher survival rates than slower-growing fish. Marked juvenile pink salmon were sampled monthly in Prince William Sound and the Gulf of Alaska, and adults that survived to maturity were recovered at hatchery release sites the following year. Surviving fish exhibited significantly wider circuli spacing on the region of the scale formed during early marine residence than did juveniles collected at sea during their first ocean summer, indicating that marine survival after the first growing season was related to increases in early marine growth. At the same circuli, a significantly larger average scale radius for returning adults than for juveniles from the same hatchery suggests that larger, faster-growing juveniles had a higher survival rate and that significant size-selective mortality occurred after the juveniles were sampled. Growth patterns inferred from intercirculi spacing on scales varied among hatchery release groups, suggesting that density-dependent processes differed among release groups and occurred across Prince William Sound and the coastal Gulf of Alaska. These observations support other studies that have found that larger, faster-growing fish are more likely to survive until maturity. © Copyright 2005 by the American Fisheries Society.

  11. Nonlocal description of sound propagation through an array of Helmholtz resonators

    NASA Astrophysics Data System (ADS)

    Nemati, Navid; Kumar, Anshuman; Lafarge, Denis; Fang, Nicholas X.

    2015-12-01

    A generalized macroscopic nonlocal theory of sound propagation in rigid-framed porous media saturated with a viscothermal fluid has been recently proposed, which takes into account both temporal and spatial dispersion. Here, we consider applying this theory, which enables the description of resonance effects, to the case of sound propagation through an array of Helmholtz resonators whose unusual metamaterial properties, such as negative bulk moduli, have been experimentally demonstrated. Three different calculations are performed, validating the results of the nonlocal theory, related to the frequency-dependent Bloch wavenumber and bulk modulus of the first normal mode, for 1D propagation in 2D or 3D periodic structures.

  12. Influences of pressure on methyl group, elasticity, sound velocity and sensitivity of solid nitromethane

    NASA Astrophysics Data System (ADS)

    Zhong, Mi; Liu, Qi-Jun; Qin, Han; Jiao, Zhen; Zhao, Feng; Shang, Hai-Lin; Liu, Fu-Sheng; Liu, Zheng-Tang

    2017-06-01

    First-principles calculations were employed to investigate the influences of pressure on the methyl group, elasticity, sound velocity, and sensitivity of solid nitromethane (NM). The obtained structural parameters based on the GGA-PBE+G calculations are in good agreement with theoretical and experimental data. Rotation of the methyl group appears under pressure, which influences the mechanical properties, thermal properties, and sensitivity of solid NM. The anisotropy of the elasticity, sound velocity, and Debye temperature under pressure has been shown; these are related to the thermal properties of solid NM. The enhanced sensitivity with increasing pressure has been discussed, and the change of the most likely transition path is associated with the methyl group.
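    The step from computed elastic moduli to sound velocities, and to the average velocity that enters the Debye temperature, follows textbook isotropic relations. The sketch below writes them out; the moduli and density used in the example are invented placeholders, not values from this study.

```python
import math

def sound_velocities(B, G, rho):
    """Longitudinal, shear, and average sound velocities from the bulk
    modulus B (Pa), shear modulus G (Pa), and density rho (kg/m^3).

    Standard isotropic relations; the average velocity v_m is the one
    that enters the Debye temperature.
    """
    v_l = math.sqrt((B + 4.0 * G / 3.0) / rho)   # longitudinal
    v_s = math.sqrt(G / rho)                      # shear
    v_m = ((2.0 / v_s**3 + 1.0 / v_l**3) / 3.0) ** (-1.0 / 3.0)
    return v_l, v_s, v_m

# Invented placeholder values, roughly in the range of a soft molecular solid.
v_l, v_s, v_m = sound_velocities(B=10e9, G=5e9, rho=1130.0)
```

    The shear branch dominates the average, so v_m always lies between v_s and v_l, closer to v_s.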

  13. Automatic moment segmentation and peak detection analysis of heart sound pattern via short-time modified Hilbert transform.

    PubMed

    Sun, Shuping; Jiang, Zhongwei; Wang, Haibin; Fang, Yu

    2014-05-01

    This paper proposes a novel automatic method for the moment segmentation and peak detection analysis of heart sound (HS) pattern, with special attention to the characteristics of the envelopes of HS and considering the properties of the Hilbert transform (HT). The moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (E(T)) of the HS signal is obtained, with an emphasis on the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of E(T) and the properties of the HT of convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points for the HS by the zero crossing points of the STMHT. A fast algorithm for calculating the STMHT of E(T) can be expressed by multiplying E(T) by an equivalent window (W(E)). According to the range of heart beats and based on the numerical experiments and the important parameters of the STMHT, a moving window width of N=1 s is validated for locating the moment segmentation and peak points for HS. The proposed moment segmentation and peak location procedure is validated by sounds from the Michigan HS database and sounds from clinical heart diseases, such as a ventricular septal defect (VSD), an atrial septal defect (ASD), Tetralogy of Fallot (TOF), rheumatic heart disease (RHD), and so on. As a result, for the sounds where S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP₁), the peak of S2 (AP₂), the moment segmentation points from S1 to S2 (AT₁₂), and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36%, and 97.37%, respectively. For the sounds where S1 cannot be separated from S2, the average accuracies achieved for the peak of S1 and S2 (AP₁₂) and the cardiac cycle (ACC) are 100% and 96.69%, respectively. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
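    The envelope-then-peak-picking idea can be illustrated with a simplified stand-in: a synthetic record with two sounds per cardiac cycle, a rectify-and-smooth envelope in place of the Viola integral, and plain local-maximum peak detection rather than the STMHT itself. All signal parameters (sampling rate, burst timing, amplitudes) are invented for illustration.

```python
import numpy as np

def synthetic_heart_sound(fs=1000, cycles=3):
    """Three cardiac cycles: larger S1 bursts and smaller S2 bursts."""
    t = np.arange(0, cycles, 1.0 / fs)
    sig = np.zeros_like(t)
    for c in range(cycles):
        for center, amp in ((c + 0.1, 1.0), (c + 0.4, 0.7)):  # S1, S2
            burst = np.exp(-((t - center) ** 2) / (2 * 0.02 ** 2))
            sig += amp * burst * np.sin(2 * np.pi * 50 * (t - center))
    return t, sig

def envelope(sig, win=50):
    """Rectify-and-smooth envelope (a stand-in for the Viola integral)."""
    return np.convolve(np.abs(sig), np.ones(win) / win, mode="same")

def find_peaks(env, rel_thresh=0.3, min_dist=150):
    """Local maxima above a relative threshold, greedily de-duplicated."""
    idx = np.where((env[1:-1] > env[:-2]) & (env[1:-1] >= env[2:])
                   & (env[1:-1] > rel_thresh * env.max()))[0] + 1
    kept = []
    for i in idx[np.argsort(env[idx])[::-1]]:   # tallest first
        if all(abs(i - k) >= min_dist for k in kept):
            kept.append(i)
    return sorted(kept)
```

    On this synthetic record the procedure recovers six peaks (S1 and S2 in each of three cycles), with an S1-to-S2 interval of about 0.3 s; the STMHT's zero-crossing formulation additionally yields the segmentation points between the sounds.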

  14. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.

  15. Sound production in Japanese medaka (Oryzias latipes) and its alteration by exposure to aldicarb and copper sulfate.

    PubMed

    Kang, Ik Joon; Qiu, Xuchun; Moroishi, Junya; Oshima, Yuji

    2017-08-01

    This study is the first to report sound production in Japanese medaka (Oryzias latipes). Sound production was affected by exposure to a carbamate insecticide (aldicarb) and a heavy-metal compound (copper sulfate). Medaka were exposed at four concentrations (aldicarb: 0, 0.25, 0.5, and 1 mg L⁻¹; copper sulfate: 0, 0.5, 1, and 2 mg L⁻¹), and sound characteristics were monitored for 5 h after exposure. We observed constant average interpulse intervals (approx. 0.2 s) in all test groups before exposure, and in the control groups throughout the experiment. The average interpulse interval became significantly longer during the recording periods after 50 min of exposure to aldicarb, and reached a length of more than 0.3 s during the recording periods after 120 min of exposure. Most medaka stopped producing sound after 50 min of exposure to copper sulfate at 1 and 2 mg L⁻¹, resulting in a significantly declined number of sound pulses and pulse groups. Relatively shortened interpulse intervals were occasionally observed in medaka exposed to 0.5 mg L⁻¹ copper sulfate. These alterations in sound characteristics due to toxicant exposure suggest that these toxicants might impair the acoustic communication of medaka, which may be important for their reproduction and survival. Our results suggest that monitoring acoustic changes in medaka has potential for detecting sudden water pollution events, such as intentional poisoning or accidental leakage of industrial waste. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2011-10-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect of naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously; Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that sufficient processing time is needed for the auditory stimulus to access its associated meaning and modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.

  17. Geometric and boundary element method simulations of acoustic reflections from rough, finite, or non-planar surfaces

    NASA Astrophysics Data System (ADS)

    Rathsam, Jonathan

    This dissertation seeks to advance the current state of computer-based sound field simulations for room acoustics. The first part of the dissertation assesses the reliability of geometric sound-field simulations, which are approximate in nature. The second part of the dissertation uses the rigorous boundary element method (BEM) to learn more about reflections from finite reflectors: planar and non-planar. Acoustical designers commonly use geometric simulations to predict sound fields quickly. Geometric simulation of reflections from rough surfaces is still under refinement. The first project in this dissertation investigates the scattering coefficient, which quantifies the degree of diffuse reflection from rough surfaces. The main result is that predicted reverberation time varies inversely with scattering coefficient if the sound field is nondiffuse. Additional results include a flow chart that enables acoustical designers to gauge how sensitive predicted results are to their choice of scattering coefficient. Geometric acoustics is a high-frequency approximation to wave acoustics. At low frequencies, more pronounced wave phenomena cause deviations between real-world values and geometric predictions. Acoustical designers encounter the limits of geometric acoustics in particular when simulating the low frequency response from finite suspended reflector panels. This dissertation uses the rigorous BEM to develop an improved low-frequency radiation model for smooth, finite reflectors. The improved low frequency model is suggested in two forms for implementation in geometric models. Although BEM simulations require more computation time than geometric simulations, BEM results are highly accurate. The final section of this dissertation uses the BEM to investigate the sound field around non-planar reflectors. 
The author has added convex edges rounded away from the source side of finite, smooth reflectors to minimize coloration of reflections caused by interference from boundary waves. Although the coloration could not be fully eliminated, the convex edge increases the sound energy reflected into previously nonspecular zones. This excess reflected energy is marginally audible using a standard of 20 dB below direct sound energy. The convex-edged panel is recommended for use when designers want to extend reflected energy spatially beyond the specular reflection zone of a planar panel.

  18. Long-Term Trajectories of the Development of Speech Sound Production in Pediatric Cochlear Implant Recipients

    PubMed Central

    Tomblin, J. Bruce; Peng, Shu-Chen; Spencer, Linda J.; Lu, Nelson

    2011-01-01

    Purpose This study characterized the development of speech sound production in prelingually deaf children with a minimum of 8 years of cochlear implant (CI) experience. Method Twenty-seven pediatric CI recipients' spontaneous speech samples from annual evaluation sessions were phonemically transcribed. Accuracy for these speech samples was evaluated in piecewise regression models. Results As a group, pediatric CI recipients showed steady improvement in speech sound production following implantation, but the improvement rate declined after 6 years of device experience. Piecewise regression models indicated that the slope estimating the participants' improvement rate was statistically greater than 0 during the first 6 years postimplantation, but not after 6 years. The group of pediatric CI recipients' accuracy of speech sound production after 4 years of device experience reasonably predicts their speech sound production after 5–10 years of device experience. Conclusions The development of speech sound production in prelingually deaf children stabilizes after 6 years of device experience, and typically approaches a plateau by 8 years of device use. Early growth in speech before 4 years of device experience did not predict later rates of growth or levels of achievement. However, good predictions could be made after 4 years of device use. PMID:18695018
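    A piecewise (segmented) regression with a known breakpoint, of the kind described above, reduces to ordinary least squares on a hinge basis. The sketch below fits a continuous two-segment line with the breakpoint fixed at 6 years; the synthetic data are invented and merely mimic a growth curve that plateaus, not this study's transcription accuracies.

```python
import numpy as np

def piecewise_linear_fit(t, y, breakpoint=6.0):
    """Least-squares fit of y ~ a + b*t + c*max(t - breakpoint, 0).

    b is the slope before the breakpoint; b + c is the slope after it,
    so c is the change in slope at the breakpoint.
    """
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - breakpoint, 0.0)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # (intercept, slope_before, slope_change)

# Invented example: accuracy grows 5 points/year for 6 years, then plateaus.
t = np.arange(0.0, 11.0)
y = 10.0 + 5.0 * np.minimum(t, 6.0)
intercept, slope_before, slope_change = piecewise_linear_fit(t, y)
```

    With real data one would test whether each segment's slope differs significantly from zero, which is how the study concluded that improvement stops after year 6.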

  19. Contingent sounds change the mental representation of one's finger length.

    PubMed

    Tajadura-Jiménez, Ana; Vakali, Maria; Fairhurst, Merle T; Mandrigin, Alisa; Bianchi-Berthouze, Nadia; Deroy, Ophelia

    2017-07-18

    Mental body-representations are highly plastic and can be modified after brief exposure to unexpected sensory feedback. While the role of vision, touch and proprioception in shaping body-representations has been highlighted by many studies, the auditory influences on mental body-representations remain poorly understood. Changes in body-representations by the manipulation of natural sounds produced when one's body impacts on surfaces have recently been evidenced. But will these changes also occur with non-naturalistic sounds, which provide no information about the impact produced by or on the body? Drawing on the well-documented capacity of dynamic changes in pitch to elicit impressions of motion along the vertical plane and of changes in object size, we asked participants to pull on their right index fingertip with their left hand while they were presented with brief sounds of rising, falling or constant pitches, and in the absence of visual information of their hands. Results show an "auditory Pinocchio" effect, with participants feeling and estimating their finger to be longer after the rising pitch condition. These results provide the first evidence that sounds that are not indicative of veridical movement, such as non-naturalistic sounds, can induce a Pinocchio-like change in body-representation when arbitrarily paired with a bodily action.

  20. Visual motion disambiguation by a subliminal sound.

    PubMed

    Dufour, Andre; Touzalin, Pascale; Moessinger, Michèle; Brochard, Renaud; Després, Olivier

    2008-09-01

    There is growing interest in the effect of sound on visual motion perception. One model involves the illusion created when two identical objects moving towards each other on a two-dimensional visual display can be seen to either bounce off or stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the sound bounce-inducing effect is due to a perceptual binding process or merely to an explicit inference resulting from the transient auditory stimulus resembling a physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving towards each other. The results showed that there was an increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.

  1. The reduction of gunshot noise and auditory risk through the use of firearm suppressors and low-velocity ammunition.

    PubMed

    Murphy, William J; Flamme, Gregory A; Campbell, Adam R; Zechmann, Edward L; Tasko, Stephen M; Lankford, James E; Meinke, Deanna K; Finan, Donald S; Stewart, Michael

    2018-02-01

    This research assessed the reduction of peak levels, equivalent energy, and sound power of firearm suppressors. The first study evaluated the effect of three suppressors at four microphone positions around four firearms. The second study assessed the suppressor-related reduction of sound power with a 3 m hemispherical microphone array for two firearms. The suppressors reduced exposures at the ear by between 17 and 24 dB peak sound pressure level and reduced the 8 h equivalent A-weighted energy by between 9 and 21 dB, depending upon the firearm and ammunition. Noise reductions observed for the instructor's position, about a metre behind the shooter, were between 20 and 28 dB peak sound pressure level and between 11 and 26 dB LAeq,8h. Firearm suppressors reduced the measured sound power levels by between 2 and 23 dB. Sound power reductions were greater for the low-velocity ammunition than for the same firearms fired with high-velocity ammunition, due to the effect of N-waves produced by a supersonic bullet. Firearm suppressors may reduce noise exposure, but the cumulative exposure from suppressed firearms can still present a significant hearing risk. Therefore, firearm users should always wear hearing protection whenever target shooting or hunting.
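    The 8 h equivalent level quoted above relates to per-shot exposure through a standard energy-summation identity: LAeq,8h = LAE + 10 log10(n) - 10 log10(28 800 s / 1 s), where LAE is the A-weighted sound exposure level of a single shot and n is the number of shots. The shot level and suppressor reduction in the sketch below are invented, not values from the study.

```python
import math

T_8H = 8 * 3600          # 8-hour reference period, in seconds
T_0 = 1.0                # LAE reference duration, 1 s

def laeq_8h(lae_per_shot_db, n_shots):
    """8-h equivalent continuous level from n identical impulses.

    lae_per_shot_db: A-weighted sound exposure level (LAE) of one shot, dB.
    Standard energy-summation relation; the inputs here are invented.
    """
    return lae_per_shot_db + 10 * math.log10(n_shots) - 10 * math.log10(T_8H / T_0)

# Hypothetical example: 100 unsuppressed shots at LAE = 120 dB,
# versus the same shots with a suppressor giving a 15 dB reduction.
unsuppressed = laeq_8h(120.0, 100)
suppressed = laeq_8h(120.0 - 15.0, 100)
```

    Even the suppressed case in this invented example (about 80 dB LAeq,8h) sits near common occupational action levels, in line with the study's caution that suppressed firearms can still pose a hearing risk.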

  2. BIAS: A Regional Management of Underwater Sound in the Baltic Sea.

    PubMed

    Sigray, Peter; Andersson, Mathias; Pajala, Jukka; Laanearu, Janek; Klauson, Aleksander; Tegowski, Jaroslaw; Boethling, Maria; Fischer, Jens; Tougaard, Jakob; Wahlberg, Magnus; Nikolopoulos, Anna; Folegot, Thomas; Matuschek, Rainer; Verfuss, Ursula

    2016-01-01

    Management of the impact of underwater sound is an emerging concern worldwide. Several countries are in the process of implementing regulatory legislation. In Europe, the Marine Strategy Framework Directive was launched in 2008. This framework addresses noise impacts, and the recommendation is to deal with them on a regional level. The Baltic Sea is a semi-enclosed area with nine states bordering the sea. The number of ships there is among the highest in Europe and is estimated to double by 2030. Undoubtedly, due to the unbounded character of noise, efficient management of sound in the Baltic Sea must be done on a regional scale. In line with the European Union directive, the Baltic Sea Information on the Acoustic Soundscape (BIAS) project was established to implement Descriptor 11 of the Marine Strategy Framework Directive in the Baltic Sea region. BIAS will develop tools, standards, and methodologies that will allow for cross-border handling of data and results, measure sound in 40 locations for 1 year, establish a seasonal soundscape map by combining measured sound with advanced three-dimensional modeling, and, finally, establish standards for measuring continuous sound. Results from the first phase of BIAS are presented here, with an emphasis on standards and soundscape mapping, as well as the challenges related to regional handling.

  3. Structure-borne sound and vibration from building-mounted wind turbines

    NASA Astrophysics Data System (ADS)

    Moorhouse, Andy; Elliott, Andy; Eastwick, Graham; Evans, Tomos; Ryan, Andy; von Hunerbein, Sabine; le Bescond, Valentin; Waddington, David

    2011-07-01

    Noise continues to be a significant factor in the development of wind energy resources. In the case of building-mounted wind turbines (BMWTs), in addition to the usual airborne sound, there is the potential for occupants to be affected by structure-borne sound and vibration transmitted through the building structure. Usual methods for prediction and evaluation of noise from large and small WTs are not applicable to noise of this type. This letter describes an investigation aiming to derive a methodology for prediction of structure-borne sound and vibration inside attached dwellings. Jointly funded by three UK government departments, the work was motivated by a desire to stimulate renewable energy generation by the removal of planning restrictions where possible. A method for characterizing BMWTs as sources of structure-borne sound was first developed during a field survey of two small wind turbines under variable wind conditions. The 'source strength' was characterized as a function of rotor speed, although a general relationship to wind speed could not be established. The influence of turbulence was also investigated. The prediction methodology, which also accounts for the sound transmission properties of the mast and supporting building, was verified in a field survey of existing installations. Significant differences in behavior and subjective character were noted between the airborne and structure-borne noise from BMWTs.

  4. The Influence of refractoriness upon comprehension of non-verbal auditory stimuli.

    PubMed

    Crutch, Sebastian J; Warrington, Elizabeth K

    2008-01-01

    An investigation of non-verbal auditory comprehension in two patients with global aphasia following stroke is reported. The primary aim of the investigation was to establish whether refractory access disorders can affect non-verbal input modalities. All previous reports of refractoriness, a cognitive syndrome characterized by response inconsistency, sensitivity to temporal factors and insensitivity to item frequency, have involved comprehension tasks which have a verbal component. Two main experiments are described. The first consists of a novel sound-to-picture and sound-to-word matching task in which comprehension of environmental sounds is probed under conditions of semantic relatedness and semantic unrelatedness. In addition to the two stroke patients, the performance of a group of 10 control patients with non-vascular pathology is reported, along with evidence of semantic relatedness effects in sound comprehension. The second experiment examines environmental sound comprehension within a repetitive probing paradigm which affords assessment of the effects of semantic relatedness, response consistency and presentation rate. It is demonstrated that the two stroke patients show a significant increase in error rate across multiple probes of the same set of sound stimuli, indicating the presence of refractoriness within this non-verbal domain. The implications of the results are discussed with reference to our current understanding of the mechanisms of refractoriness.

  5. Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation

    PubMed Central

    Oliva, Aude

    2017-01-01

    Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals. PMID:28451630

  6. Audio Restoration

    NASA Astrophysics Data System (ADS)

    Esquef, Paulo A. A.

    The first reproducible recording of human voice was made in 1877 on a tinfoil cylinder phonograph devised by Thomas A. Edison. Since then, much effort has been expended to find better ways to record and reproduce sounds. By the mid-1920s, the first electrical recordings appeared and gradually took over from purely acoustic recordings. The development of electronic computers, in conjunction with the ability to record data onto magnetic or optical media, culminated in the standardization of the compact disc format in 1980. Nowadays, digital technology is applied to several audio applications, not only to improve the quality of modern and old recording/reproduction techniques, but also to trade off sound quality against storage space and transmission capacity requirements.

  7. Science & Technology Review September 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearinger, J P

    This month's issue has the following articles: (1) Remembering the Laboratory's First Director - Commentary by Harold Brown; (2) Herbert F. York (1921-2009): A Life of Firsts, an Ambassador for Peace - The Laboratory's first director, who died on May 19, 2009, used his expertise in science and technology to advance arms control and prevent nuclear war; (3) Searching for Life in Extreme Environments - DNA will help researchers discover new marine species and prepare to search for life on other planets; (4) Energy Goes with the Flow - Lawrence Livermore is one of the few organizations that distills the big picture about energy resources and use into a concise diagram; and (5) The Radiant Side of Sound - An experimental method that converts sound waves into light may lead to new technologies for scientific and industrial applications.

  8. Water quality and bed sediment quality in the Albemarle Sound, North Carolina, 2012–14

    USGS Publications Warehouse

    Moorman, Michelle C.; Fitzgerald, Sharon A.; Gurley, Laura N.; Rhoni-Aref, Ahmed; Loftin, Keith A.

    2017-01-23

    The Albemarle Sound region was selected in 2012 as one of two demonstration sites in the Nation to test and improve the design of the National Water Quality Monitoring Council’s National Monitoring Network (NMN) for U.S. Coastal Waters and Tributaries. The goal of the NMN for U.S. Coastal Waters and Tributaries is to provide information about the health of our oceans, coastal ecosystems, and inland influences on coastal waters for improved resource management. The NMN is an integrated, multidisciplinary, and multi-organizational program using multiple sources of data and information to augment current monitoring programs. This report presents and summarizes selected water-quality and bed sediment-quality data collected as part of the demonstration project, which was conducted in two phases. The first phase was an occurrence and distribution study to assess nutrients, metals, pesticides, cyanotoxins, and phytoplankton communities in the Albemarle Sound during the summer of 2012 at 34 sites in Albemarle Sound, nearby sounds, and various tributaries. The second phase consisted of monthly sampling over a year (March 2013 through February 2014) to assess seasonality in a more limited set of constituents, including nutrients, cyanotoxins, and phytoplankton communities, at a subset (eight) of the sites sampled in the first phase. During the summer of 2012, few constituent concentrations exceeded published water-quality thresholds; however, elevated levels of chlorophyll a and pH were observed in the northern embayments and in Currituck Sound. Chlorophyll a and metals (copper, iron, and zinc) were detected above water-quality thresholds. The World Health Organization provisional guideline based on cyanobacterial density for high recreational risk was exceeded in approximately 50 percent of water samples collected during the summer of 2012. Cyanobacteria capable of producing toxins were present, but only low levels of cyanotoxins, below human health benchmarks, were detected.
    Finally, 12 metals in surficial bed sediments were detected at levels above a published sediment-quality threshold. These metals included chromium, mercury, copper, lead, arsenic, nickel, and cadmium. Sites with several metal concentrations above the respective thresholds had relatively high concentrations of organic carbon or fine sediment (silt plus clay), or both, and were predominantly located in the western and northwestern parts of the Albemarle Sound. Results from the second phase were generally similar to those of the first in that relatively few constituents exceeded a water-quality threshold, both pH and chlorophyll a were detected above the respective water-quality thresholds, and many of these elevated concentrations occurred in the northern embayments and in Currituck Sound. In contrast to the results from phase one, the cyanotoxin microcystin was detected at more than 10 times the water-quality threshold during a phytoplankton bloom on the Chowan River at Mount Gould, North Carolina, in August 2013. This was the only cyanotoxin concentration measured during the entire study that exceeded a respective water-quality threshold. The information presented in this report can be used to improve understanding of water-quality conditions in the Albemarle Sound, particularly when evaluating causal and response variables that are indicators of eutrophication. In particular, this information can be used by State agencies to help develop water-quality criteria for nutrients, and to understand factors, such as cyanotoxins, that may affect fisheries and recreation in the Albemarle Sound region.

  9. Spectral Characteristics of Wake Vortex Sound During Roll-Up

    NASA Technical Reports Server (NTRS)

    Booth, Earl R., Jr. (Technical Monitor); Zhang, Yan; Wang, Frank Y.; Hardin, Jay C.

    2003-01-01

    This report presents an analysis of the sound spectra generated by a trailing aircraft vortex during its roll-up process. The study demonstrates that a rolling-up vortex can produce low-frequency (less than 100 Hz) sound with very high intensity (60 dB above the threshold of human hearing) at a distance of 200 ft from the vortex core; the spectrum then drops off rapidly thereafter. A rigorous analytical approach has been adopted in this report to derive the spectrum of vortex sound. First, the sound pressure was solved from an alternative treatment of Lighthill's acoustic analogy approach [1]. After the application of the Green's function for free space, a tensor analysis was applied to permit the removal of the source-term singularity of the wave equation in the far field. Consequently, the sound pressure is expressed in terms of the retarded time, which captures the time history and spatial distribution of the sound source. The Fourier transform is then applied to the sound pressure to compute its spectrum. The Fourier transform greatly simplifies the expression of the vortex sound pressure involving the retarded time, so that numerical computation can be carried out with ease for axisymmetric line vortices during the roll-up process. The vortex model assumes that the vortex circulation is proportional to time and that the core radius is constant. In addition, the velocity profile is assumed to be self-similar along the aircraft flight path, so that a benchmark vortex velocity profile can be devised to obtain a closed-form solution, which is then used to validate the numerical calculations for other, more realistic vortex profiles for which no closed-form solutions are available. The study suggests that acoustic sensors operating in the low-frequency band could be profitably deployed for detecting vortex sound during the roll-up process.
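    The chain of steps in the abstract (acoustic analogy, free-space Green's function, retarded time, Fourier transform) follows the standard form of Lighthill's theory; a hedged sketch in conventional textbook notation (not the report's exact equations) is:

```latex
% Lighthill's acoustic analogy: density fluctuations driven by the
% Lighthill stress tensor T_{ij} (standard form; notation assumed).
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho'
    = \frac{\partial^2 T_{ij}}{\partial x_i \, \partial x_j}

% Free-space Green's function solution, with the source evaluated at the
% retarded time t - |\mathbf{x}-\mathbf{y}|/c_0:
p'(\mathbf{x},t) = \frac{1}{4\pi}\,
    \frac{\partial^2}{\partial x_i \, \partial x_j}
    \int \frac{T_{ij}\!\left(\mathbf{y},\; t - |\mathbf{x}-\mathbf{y}|/c_0\right)}
              {|\mathbf{x}-\mathbf{y}|}\, d^3\mathbf{y}

% The spectrum then follows by Fourier transforming the pressure in time:
\hat{p}(\mathbf{x},\omega) = \int_{-\infty}^{\infty} p'(\mathbf{x},t)\,
    e^{-i\omega t}\, dt
```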

  10. Characterization and Generation of Male Courtship Song in Cotesia congregata (Hymenoptera: Braconidae)

    PubMed Central

    Bredlau, Justin P.; Mohajer, Yasha J.; Cameron, Timothy M.; Kester, Karen M.; Fine, Michael L.

    2013-01-01

    Background: Male parasitic wasps attract females with a courtship song produced by rapid wing fanning. Songs have been described for several parasitic wasp species; however, beyond association with wing fanning, the mechanism of sound generation has not been examined. We characterized the male courtship song of Cotesia congregata (Hymenoptera: Braconidae) and investigated the biomechanics of sound production. Methods and Principal Findings: Courtship songs were recorded using high-speed videography (2,000 fps) and audio recordings. The song consists of a long duration amplitude-modulated “buzz” followed by a series of pulsatile higher amplitude “boings,” each decaying into a terminal buzz followed by a short inter-boing pause while wings are stationary. Boings have higher amplitude and lower frequency than buzz components. The lower frequency of the boing sound is due to greater wing displacement. The power spectrum is a harmonic series dominated by wing repetition rate ∼220 Hz, but the sound waveform indicates a higher frequency resonance ∼5 kHz. Sound is not generated by the wings contacting each other, the substrate, or the abdomen. The abdomen is elevated during the first several wing cycles of the boing, but its position is unrelated to sound amplitude. Unlike most sounds generated by volume velocity, the boing is generated at the termination of the wing down stroke when displacement is maximal and wing velocity is zero. Calculation indicates a low Reynolds number of ∼1000. Conclusions and Significance: Acoustic pressure is proportional to velocity for typical sound sources. Our finding that the boing sound was generated at maximal wing displacement coincident with cessation of wing motion indicates that it is caused by acceleration of the wing tips, consistent with a dipole source. The low Reynolds number requires a high wing flap rate for flight and predisposes wings of small insects for sound production. PMID:23630622
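    The Reynolds number cited above can be estimated from the wing kinematics. As a minimal sketch: the flap rate (∼220 Hz) is taken from the abstract, but the displacement amplitude, wing length, and air properties below are illustrative assumptions, not measurements from the study.

```python
# Illustrative wing-tip Reynolds number estimate for a small wasp.
# Only the 220 Hz repetition rate comes from the abstract; the other
# values are assumed for illustration.
import math

rho_air = 1.2       # air density, kg/m^3 (assumed)
mu_air = 1.8e-5     # dynamic viscosity of air, Pa*s (assumed)

f = 220.0           # wing repetition rate, Hz (from the abstract)
amplitude = 2.75e-3 # assumed peak wing-tip displacement, m
length = 4e-3       # assumed characteristic wing length, m

# Peak tip speed for sinusoidal motion: v = 2*pi*f*A
v_tip = 2 * math.pi * f * amplitude

# Re = rho * v * L / mu
reynolds = rho_air * v_tip * length / mu_air
print(f"peak tip speed ~ {v_tip:.2f} m/s, Re ~ {reynolds:.0f}")
```

    With these assumed values the estimate lands near the Re ∼1000 order of magnitude reported in the abstract.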

  11. Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception.

    PubMed

    Ross, Bernhard; Barat, Masihullah; Fujioka, Takako

    2017-06-14

    Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. 
SIGNIFICANCE STATEMENT While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training.

  12. SCORE - Sounding-rocket Coronagraphic Experiment

    NASA Astrophysics Data System (ADS)

    Fineschi, Silvano; Moses, Dan; Romoli, Marco

    The Sounding-rocket Coronagraphic Experiment - SCORE - is a coronagraph for multi-wavelength imaging of the coronal Lyman-alpha lines, HeII 30.4 nm and HI 121.6 nm, and for the broadband visible-light emission of the polarized K-corona. SCORE flew successfully in 2009, acquiring the first images of the HeII line emission from the extended corona. The simultaneous observation of the coronal Lyman-alpha HI 121.6 nm line allowed the first determination of the absolute helium abundance in the extended corona. This presentation will describe the lessons learned from the first flight and will illustrate the preparations and the science perspectives for the re-flight approved by NASA and scheduled for 2016. The SCORE optical design is flexible enough to accommodate different experimental configurations with minor modifications. This presentation will describe one such configuration, which could include a polarimeter for observing the expected Hanle effect in the coronal Lyman-alpha HI line. The linear polarization by resonance scattering of coronal permitted line emission in the ultraviolet (UV) can be modified by magnetic fields through the Hanle effect. Thus, space-based UV spectro-polarimetry would provide an additional new tool for the diagnostics of coronal magnetism.

  13. Effect of a chamber orchestra on direct sound and early reflections for performers on stage: A boundary element method study.

    PubMed

    Panton, Lilyan; Holloway, Damien; Cabrera, Densil

    2017-04-01

    Early reflections are known to be important to musicians performing on stage, but acoustic measurements are usually made on empty stages. This work investigates how a chamber orchestra setup on stage affects early reflections from the stage enclosure. A boundary element method (BEM) model of a chamber orchestra is validated against full scale measurements with seated and standing subjects in an anechoic chamber and against auditorium measurements, demonstrating that the BEM simulation gives realistic results. Using the validated BEM model, an investigation of how a chamber orchestra attenuates and scatters both the direct sound and the first-order reflections is presented for two different sized "shoe-box" stage enclosures. The first-order reflections from the stage are investigated individually: at and above the 250 Hz band, horizontal reflections from stage walls are attenuated to varying degrees, while the ceiling reflection is relatively unaffected. Considering the overall effect of the chamber orchestra on the direct sound and first-order reflections, differences of 2-5 dB occur in the 1000 Hz octave band when the ceiling reflection is excluded (slightly reduced when including the unobstructed ceiling reflection). A tilted side wall case showed the orchestra has a reduced effect with a small elevation of the lateral reflections.

  14. Development and Current Status of the “Cambridge” Loudness Models

    PubMed Central

    2014-01-01

    This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness—the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and have been applied to the development of methods for fitting multichannel compression hearing aids. PMID:25315375

  15. Energy Flux in the Cochlea: Evidence Against Power Amplification of the Traveling Wave.

    PubMed

    van der Heijden, Marcel; Versteegh, Corstiaen P C

    2015-10-01

    Traveling waves in the inner ear exhibit an amplitude peak that shifts with frequency. The peaking is commonly believed to rely on motile processes that amplify the wave by inserting energy. We recorded the vibrations at adjacent positions on the basilar membrane in sensitive gerbil cochleae and tested the putative power amplification in two ways. First, we determined the energy flux of the traveling wave at its peak and compared it to the acoustic power entering the ear, thereby obtaining the net cochlear power gain. For soft sounds, the energy flux at the peak was 1 ± 0.6 dB less than the middle ear input power. For more intense sounds, increasingly smaller fractions of the acoustic power actually reached the peak region. Thus, we found no net power amplification of soft sounds and a strong net attenuation of intense sounds. Second, we analyzed local wave propagation on the basilar membrane. We found that the waves slowed down abruptly when approaching their peak, causing an energy densification that quantitatively matched the amplitude peaking, similar to the growth of sea waves approaching the beach. Thus, we found no local power amplification of soft sounds and strong local attenuation of intense sounds. The most parsimonious interpretation of these findings is that cochlear sensitivity is not realized by amplifying acoustic energy, but by spatially focusing it, and that dynamic compression is realized by adjusting the amount of dissipation to sound intensity.

  16. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields

    PubMed Central

    Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.

    2016-01-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
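    The two precision measures named above can be illustrated with simple, generic estimators. As a hedged sketch: the definitions below (standard deviation of first-spike times for jitter, pairwise coincidence fraction for reliability) are common simplifications, not the paper's exact estimators, and the spike times are invented for illustration.

```python
# Generic spike-timing precision metrics in the spirit of "jitter" and
# "reliability" above. These simplified estimators and the example data
# are illustrative assumptions, not the study's definitions or results.
import statistics
from itertools import combinations

def jitter(trials):
    """SD of the first spike time across trials, in seconds."""
    first_spikes = [min(t) for t in trials if t]
    return statistics.stdev(first_spikes)

def reliability(trials, window=0.002):
    """Fraction of trial pairs whose first spikes fall within `window` s."""
    first_spikes = [min(t) for t in trials if t]
    pairs = list(combinations(first_spikes, 2))
    close = sum(1 for a, b in pairs if abs(a - b) <= window)
    return close / len(pairs)

# Five trials of spike times (s) relative to sound onset (invented data)
trials = [[0.0102, 0.031], [0.0108, 0.028], [0.0099], [0.0111], [0.0105]]
print(jitter(trials), reliability(trials))
```

    With this toy data the first spikes cluster within about a millisecond, so jitter is sub-millisecond and pairwise reliability is 1.0.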

  17. Differential pathologies resulting from sound exposure: Tinnitus vs hearing loss

    NASA Astrophysics Data System (ADS)

    Longenecker, Ryan James

    The first step in identifying the mechanism(s) responsible for tinnitus development would be to discover a neural correlate that is differentially expressed in tinnitus-positive compared to tinnitus-negative animals. Previous research has identified several neural correlates of tinnitus in animals that have tested positive for tinnitus. However, it is unknown whether all or some of these correlates are linked to tinnitus or whether they are a byproduct of hearing loss, a common outcome of tinnitus induction. Abnormally high spontaneous activity has frequently been linked to tinnitus. However, while some studies demonstrate that hyperactivity positively correlates with behavioral evidence of tinnitus, others show that although all animals develop hyperactivity after sound exposure, not all exposed animals show evidence of tinnitus. My working hypothesis is that certain aspects of hyperactivity are linked to tinnitus while other aspects are linked to hearing loss. The first specific aim utilized gap-induced prepulse inhibition of the acoustic startle reflex (GIPAS) to monitor the development of tinnitus in CBA/CaJ mice during one year following sound exposure. Immediately after sound exposure, GIPAS testing revealed widespread gap-detection deficits across all frequencies, likely due to temporary threshold shifts. However, three months after sound exposure these deficits were limited to a narrow frequency band and were consistently detected up to one year after exposure. This suggests the development of chronic tinnitus is a long-lasting and highly dynamic process. The second specific aim assessed hearing loss in sound-exposed mice using several techniques. Auditory brainstem responses (ABRs) recorded immediately after sound exposure revealed large threshold deficits in all exposed mice. However, by the three-month time point, thresholds had returned to control levels in all mice, suggesting that ABRs are not a reliable tool for assessing permanent hearing loss. 
Input/output functions of the acoustic startle reflex show that after sound exposure the magnitude of startle responses decreased in most mice, to varying degrees. Lastly, PPI audiometry was able to detect specific behavioral threshold deficits for each mouse after sound exposure. These deficits persist past the initial threshold shifts, allowing detection of frequency-specific permanent threshold shifts. The third specific aim examined hyperactivity and increased bursting activity in the inferior colliculus after sound exposure in relation to tinnitus and hearing loss. Spontaneous firing rates were increased in all mice after sound exposure regardless of behavioral evidence of tinnitus. However, abnormally increased bursting activity was not found in the animals identified with tinnitus but was exhibited in a mouse with broadband severe threshold deficits. CBA/CaJ mice are a good model for both tinnitus development and noise-induced hearing loss studies. Hyperactivity, which was evident in all exposed animals, does not seem to be well correlated with behavioral evidence of tinnitus and is more likely a general result of acoustic overexposure. Data from one animal strongly suggest that widespread severe threshold deficits are linked to an elevation of bursting activity predominantly ipsilateral to the side of sound exposure. This result is intriguing and should be followed up in further studies. Data obtained in this study provide new insights into the underlying neural pathologies following sound exposure and have possible clinical applications for the development of effective treatments and diagnostic tools for tinnitus and hearing loss.

  18. Real-time Integration of Biological, Optical and Physical Oceanographic Data from Multiple Vessels and Nearshore Sites using a Wireless Network

    DTIC Science & Technology

    1997-09-30

    field experiments in Puget Sound. Each research vessel will use multi-sensor profiling instrument packages which obtain high-resolution physical...field deployment of the wireless network is planned for May-July, 1998, at Orcas Island, WA. IMPACT: We expect that wireless communication systems will...East Sound project to be a first step toward continental shelf and open ocean deployments with the next generation of wireless and satellite

  19. Effects of daily noise on fetuses and cerebral hemisphere specialization in children

    NASA Astrophysics Data System (ADS)

    Ando, Y.

    1988-12-01

    This paper first provides an overview of work by the author and colleagues on the effects of noise on fetuses, demonstrating growth inhibition. As a second issue, the effects of daily noise on the mental abilities of children are discussed in relation to task specialization of the cerebral hemispheres. Two different types of mental tasks were given to a total of 1286 children (7-10 years old) living either in a noisy area around an international airport or in a neighboring quiet area, under conditions of no sound, a jet-plane noise stimulus, and a music stimulus. In the quiet neighborhood, the results may support a model in which noise and calculation tasks are processed separately in the right and left cerebral hemispheres, respectively. Music perception and calculation are considered to be processed one after the other in the left hemisphere. In the pattern-search task used as the right-hemispheric task, no significant differences appeared under either stimulus sound, with the exception of a slight interference observed in the noise group. In the noisy living area, however, the effects of temporary sound on mental tasks appeared to be quite different from the results just described. These facts suggest that daily noise affects the development of cerebral specialization in growing children. Because little is known about the effects of noise on growing children, it is recommended that international cooperation be initiated to establish the need for, and conditions of, healthy sound environments.

  20. The sound of arousal in music is context-dependent

    PubMed Central

    Blumstein, Daniel T.; Bryant, Gregory A.; Kaye, Peter

    2012-01-01

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus. PMID:22696288

  1. An experimental study of transmission, reflection and scattering of sound in a free jet flight simulation facility and comparison with theory

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.; Tanna, H. K.; Tester, B. J.

    1981-01-01

    When a free jet (or open jet) is used as a wind tunnel to simulate the effects of flight on model noise sources, it is necessary to calibrate out the effects of the free jet shear layer on the transmitted sound, since the shear layer is absent in the real flight case. In this paper, a theoretical calibration procedure for this purpose is first summarized; following this, the results of an experimental program, designed to test the validity of the various components of the calibration procedure, are described. The experiments are conducted by using a point sound source located at various axial positions within the free jet potential core. By using broadband excitation and cross-correlation methods, the angle changes associated with ray paths across the shear layer are first established. Measurements are then made simultaneously inside and outside the free jet along the proper ray paths to determine the amplitude changes across the shear layer. It is shown that both the angle and amplitude changes can be predicted accurately by theory. It is also found that internal reflection at the shear layer is significant only for large ray angles in the forward quadrant where total internal reflection occurs. Finally, the effects of sound absorption and scattering by the shear layer turbulence are also examined experimentally.

  2. The sound of arousal in music is context-dependent.

    PubMed

    Blumstein, Daniel T; Bryant, Gregory A; Kaye, Peter

    2012-10-23

    Humans, and many non-human animals, produce and respond to harsh, unpredictable, nonlinear sounds when alarmed, possibly because these are produced when acoustic production systems (vocal cords and syrinxes) are overblown in stressful, dangerous situations. Humans can simulate nonlinearities in music and soundtracks through the use of technological manipulations. Recent work found that film soundtracks from different genres differentially contain such sounds. We designed two experiments to determine specifically how simulated nonlinearities in soundtracks influence perceptions of arousal and valence. Subjects were presented with emotionally neutral musical exemplars that had neither noise nor abrupt frequency transitions, or versions of these musical exemplars that had noise or abrupt frequency upshifts or downshifts experimentally added. In a second experiment, these acoustic exemplars were paired with benign videos. Judgements of both arousal and valence were altered by the addition of these simulated nonlinearities in the first, music-only, experiment. In the second, multi-modal, experiment, valence (but not arousal) decreased with the addition of noise or frequency downshifts. Thus, the presence of a video image suppressed the ability of simulated nonlinearities to modify arousal. This is the first study examining how nonlinear simulations in music affect emotional judgements. These results demonstrate that the perception of potentially fearful or arousing sounds is influenced by the perceptual context and that the addition of a visual modality can antagonistically suppress the response to an acoustic stimulus.

  3. Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.

    2008-01-01

    In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National Skew-T/Hodograph Analysis and Research Program (NSHARP) software. The NWS MLB requested that the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format so that the soundings could be displayed in AWIPS. NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/) and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. 
One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings to AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so that old soundings would not be purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
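The CDL-to-NetCDF round trip described above can be sketched concretely. The variable names and dimensions below are generic placeholders, not the actual AWIPS sounding template, which is not reproduced in the text.

```python
# Sketch of the CDL -> NetCDF workflow described above. The file name,
# dimensions, and variables are generic placeholders, not the AWIPS
# sounding template.

cdl = """netcdf composite_sounding {
dimensions:
    level = 40 ;
variables:
    float pressure(level) ;
        pressure:units = "hPa" ;
    float temperature(level) ;
        temperature:units = "K" ;
    float dewpoint(level) ;
        dewpoint:units = "K" ;
    float windSpeed(level) ;
        windSpeed:units = "m/s" ;
    float windDir(level) ;
        windDir:units = "degrees" ;
}
"""

# Write the human-readable CDL text file
with open("composite_sounding.cdl", "w") as f:
    f.write(cdl)

# The ncgen utility compiles CDL into a binary NetCDF file, and ncdump
# round-trips it back to CDL for inspection:
#   ncgen -o composite_sounding.nc composite_sounding.cdl
#   ncdump composite_sounding.nc
print(cdl.splitlines()[0])
```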

  4. Sound Diffraction Modeling of Rotorcraft Noise Around Terrain

    NASA Technical Reports Server (NTRS)

    Stephenson, James H.; Sim, Ben W.; Chitta, Subhashini; Steinhoff, John

    2017-01-01

    A new computational technique, Wave Confinement (WC), is extended here to account for sound diffraction around arbitrary terrain. While diffraction around elementary scattering objects, such as a knife edge, single slit, disc, or sphere, has been studied for several decades, realistic environments still pose significant problems. The new technique is first validated against Sommerfeld's classical problem of diffraction by a knife edge. This is followed by comparisons with diffraction over three-dimensional smooth obstacles, such as a disc and a Gaussian hill. Finally, comparisons with flight-test acoustic data measured behind a hill are also shown. Comparison between the experiment and the Wave Confinement prediction demonstrates that a Poisson spot occurred behind the isolated hill, resulting in significantly increased sound intensity near the center of the shadowed region.
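    For rough intuition about the knife-edge benchmark mentioned above, the widely used single knife-edge approximation to the Fresnel-Kirchhoff result gives an excess diffraction loss behind the edge. This is a simple stand-in for illustration, not the Wave Confinement method, and the geometry values are assumptions.

```python
# Illustrative single knife-edge diffraction loss using the standard
# ITU-style approximation. A stand-in sketch, not the Wave Confinement
# method; all geometry values below are assumed.
import math

def knife_edge_loss_db(h, d1, d2, wavelength):
    """Approximate excess diffraction loss (dB) behind a knife edge.

    h: edge height above the source-receiver line (m, > 0 means obstructed)
    d1, d2: distances from source and receiver to the edge (m)
    """
    # Fresnel-Kirchhoff diffraction parameter
    v = h * math.sqrt(2.0 / wavelength * (1.0 / d1 + 1.0 / d2))
    if v <= -0.78:
        return 0.0  # effectively unobstructed
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Assumed example: 100 Hz rotor harmonic (wavelength ~3.43 m at 343 m/s),
# hill crest 30 m above the line, source 500 m and receiver 200 m away
loss = knife_edge_loss_db(h=30.0, d1=500.0, d2=200.0, wavelength=343.0 / 100.0)
print(f"excess diffraction loss ~ {loss:.1f} dB")
```

    The approximation captures the deep shadow behind an obstacle but not constructive features like the Poisson spot, which is why a full wave method is needed for the scenarios in the paper.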

  5. Old World frog and bird vocalizations contain prominent ultrasonic harmonics

    NASA Astrophysics Data System (ADS)

    Narins, Peter M.; Feng, Albert S.; Lin, Wenyu; Schnitzler, Hans-Ulrich; Denzinger, Annette; Suthers, Roderick A.; Xu, Chunhe

    2004-02-01

    Several groups of mammals such as bats, dolphins and whales are known to produce ultrasonic signals which are used for navigation and hunting by means of echolocation, as well as for communication. In contrast, frogs and birds produce sounds during night- and day-time hours that are audible to humans; their sounds are so pervasive that together with those of insects, they are considered the primary sounds of nature. Here we show that an Old World frog (Amolops tormotus) and an oscine songbird (Abroscopus albogularis) living near noisy streams reliably produce acoustic signals that contain prominent ultrasonic harmonics. Our findings provide the first evidence that anurans and passerines are capable of generating tonal ultrasonic call components and should stimulate the quest for additional ultrasonic species.

  6. Analysis of speech sounds is left-hemisphere predominant at 100-150ms after sound onset.

    PubMed

    Rinne, T; Alho, K; Alku, P; Holi, M; Sinkkonen, J; Virtanen, J; Bertrand, O; Näätänen, R

    1999-04-06

    Hemispheric specialization of human speech processing has been found in brain imaging studies using fMRI and PET. Due to the restricted time resolution, these methods cannot, however, determine the stage of auditory processing at which this specialization first emerges. We used a dense electrode array covering the whole scalp to record the mismatch negativity (MMN), an event-related brain potential (ERP) automatically elicited by occasional changes in sounds, which ranged from non-phonetic (tones) to phonetic (vowels). MMN can be used to probe auditory central processing on a millisecond scale with no attention-dependent task requirements. Our results indicate that speech processing occurs predominantly in the left hemisphere at the early, pre-attentive level of auditory analysis.

  7. Sound velocity and compressibility for lunar rocks 17 and 46 and for glass spheres from the lunar soil.

    PubMed

    Schreiber, E; Anderson, O L; Sogat, N; Warren, N; Scholz, C

    1970-01-30

    Four experiments on lunar materials are reported: (i) resonance on glass spheres from the soil; (ii) compressibility of rock 10017; (iii) sound velocities of rocks 10046 and 10017; (iv) sound velocity of the lunar fines. The data overlap and are mutually consistent. The glass beads and rock 10017 have mechanical properties which correspond to terrestrial materials. Results of (iv) are consistent with low seismic travel times in the lunar maria. Results of analysis of the microbreccia (10046) agreed with the soil during the first pressure cycle, but after overpressure the rock changed, and it then resembled rock 10017. Three models of the lunar surface were constructed giving density and velocity profiles.

  8. Prediction of drilling site-specific interaction of industrial acoustic stimuli and endangered whales: Beaufort Sea (1985). Final report, July 1985-March 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miles, P.R.; Malme, C.I.; Shepard, G.W.

    1986-10-01

    Research was performed during the first year (1985) of the two-year project investigating potential responsiveness of bowhead and gray whales to underwater sounds associated with offshore oil-drilling sites in the Alaskan Beaufort Sea. The underwater acoustic environment and sound propagation characteristics of five offshore sites were determined. Estimates of industrial noise levels versus distance from those sites are provided. LGL Ltd. (bowhead) and BBN (gray whale) jointly present zones of responsiveness of these whales to typical underwater sounds (drillship, dredge, tugs, drilling at gravel island). An annotated bibliography regarding the potential effects of offshore industrial noise on bowhead whales in the Beaufort Sea is included.

  9. Use Of Vertical Electrical Sounding Survey For Study Groundwater In NISSAH Region, SAUDI ARABIA

    NASA Astrophysics Data System (ADS)

    Alhenaki, Bander; Alsoma, Ali

    2015-04-01

    The aim of this research is to investigate groundwater depth in a desert area with dry environmental conditions. The study site is located in Wadi Nisah, in the eastern part of the Najd province (east-central Saudi Arabia). The site is underlain by Phanerozoic sedimentary rocks of the western edge of the Arabian platform, which rest on Proterozoic basement at depths between 5 and 8 km. Another key objective of this research is to assess the water table and identify the water-bearing layers and structures of the study area using the Vertical Electrical Sounding (VES) 1D imaging technique. We acquired sections of 315 m of vertical electrical soundings using the Schlumberger field arrangement; the dataset was collected along 9 profiles with the Syscal R2 instrument. The resistivity soundings were carried out with half-spacings in the range of 500. The VES survey was intended to cover several locations where information from existing wells could be used for correlation, as well as locations along the valley. The results of this study indicate at least three sedimentary layers down to a depth of 130 m. The first layer, extending from the surface to a depth of about 3 m, is a dry sandy layer characterized by high resistivity values. The second layer underlies the first down to a depth of 70 m and is less resistive than the first layer. The last layer has low resistivity values of 20 ohm·m down to 130 m below the ground surface. We observed a complex pattern of groundwater depth (ranging from 80 m to 120 m), which may reflect the lateral heterogeneity of the study site. The outcomes of this research have been used to locate suitable drilling locations.
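    Schlumberger readings are converted to apparent resistivity through a standard geometric factor. A minimal sketch (the function name and the homogeneous half-space check are illustrative; this is textbook array geometry, not the survey's own processing):

```python
import math

def schlumberger_rho_a(ab_half, mn, delta_v, current):
    """Apparent resistivity (ohm*m) for a Schlumberger array.

    ab_half : half current-electrode spacing AB/2 (m)
    mn      : potential-electrode spacing MN (m)
    delta_v : measured potential difference (V)
    current : injected current (A)
    """
    k = math.pi * (ab_half**2 - (mn / 2.0)**2) / mn  # geometric factor (m)
    return k * delta_v / current

# Forward check: over a homogeneous 50 ohm*m half-space the formula
# recovers the true resistivity exactly.
rho = 50.0
dv = rho * 1.0 * 10.0 / (math.pi * (100.0**2 - 5.0**2))
print(round(schlumberger_rho_a(100.0, 10.0, dv, 1.0), 6))  # 50.0
```

    Inverting a full sounding curve for layer thicknesses and resistivities, as done in the study, requires 1D inversion software on top of this conversion.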

  10. Atmospheric limb sounding with imaging FTS

    NASA Astrophysics Data System (ADS)

    Friedl-Vallon, Felix; Riese, Martin; Preusse, Peter; Oelhaf, Hermann; Fischer, Herbert

    Imaging Fourier transform spectrometers in the thermal infrared are a promising new class of sensors for atmospheric science. The availability of fast and sensitive large focal plane arrays with appropriate spectral coverage in the infrared region allows the conception and construction of innovative sensors for nadir and limb geometry. Instruments in nadir geometry have already reached prototype status (e.g. the Geostationary Imaging Fourier Transform Spectrometer / U. Wisconsin and NASA) or are in Phase A study (infrared sounding mission on Meteosat Third Generation / ESA and EUMETSAT). The first application of the new technical possibilities to atmospheric limb sounding from space, the Imaging Michelson Interferometer for Passive Atmospheric Sounding (IMIPAS), is currently being studied by industry in the context of preparatory work for the next set of ESA Earth Explorers. The scientific focus of the instrument is on the processes controlling the composition of the mid/upper troposphere and lower stratosphere. The instrument concept of IMIPAS was conceived at the research centres Karlsruhe and Jülich. The development of a precursor instrument (GLORIA-AB) at these research institutions started in 2005. The instrument will be able to fly on board various airborne platforms. First scientific missions are planned for the second half of 2009 on board the new German research aircraft HALO. This airborne sensor serves its own scientific purpose, but it also provides a test bed to learn about this new instrument class and its peculiarities and to learn to exploit and interpret the wealth of information provided by a limb-imaging IR Fourier transform spectrometer. The presentation will discuss design considerations and challenges for GLORIA-AB and put them in the context of the planned satellite application.
It will describe the solutions found, present first laboratory figures of merit for the prototype instrument and outline the new scientific possibilities.

  11. Aeroacoustic Improvements to Fluidic Chevron Nozzles

    NASA Technical Reports Server (NTRS)

    Henderson, Brenda; Kinzie, Kevin; Whitmire, Julia; Abeysinghe, Amal

    2006-01-01

    Fluidic chevrons use injected air near the trailing edge of a nozzle to emulate mixing and jet noise reduction characteristics of mechanical chevrons. While previous investigations of "first generation" fluidic chevron nozzles showed only marginal improvements in effective perceived noise levels when compared to nozzles without injection, significant improvements in noise reduction characteristics were achieved through redesigned "second generation" nozzles on a bypass ratio 5 model system. The second-generation core nozzles had improved injection passage contours, external nozzle contour lines, and nozzle trailing edges. The new fluidic chevrons resulted in reduced overall sound pressure levels over that of the baseline nozzle for all observation angles. Injection ports with steep injection angles produced lower overall sound pressure levels than those produced by shallow injection angles. The reductions in overall sound pressure levels were the result of noise reductions at low frequencies. In contrast to the first-generation nozzles, only marginal increases in high frequency noise over that of the baseline nozzle were observed for the second-generation nozzles. The effective perceived noise levels of the new fluidic chevrons are shown to approach those of the core mechanical chevrons.
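    The overall sound pressure levels compared here are energetic sums across frequency bands. A one-line sketch of that arithmetic (generic acoustics, not the NASA analysis code; the function name is illustrative):

```python
import math

def oaspl(band_spl_db):
    """Overall sound pressure level: energetic (power) sum of band levels."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in band_spl_db))

# Two equal 90 dB bands combine to 93 dB (+3 dB), not 180 dB:
print(round(oaspl([90.0, 90.0]), 2))  # 93.01
```

    Because the summation is energetic, the low-frequency reductions reported for the second-generation nozzles dominate the overall level even when high-frequency bands change only marginally.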

  12. Changes in teachers' voice quality during a working day with and without electric sound amplification.

    PubMed

    Jónsdottir, Valdis; Laukkanen, Anne-Maria; Siikki, Ilona

    2003-01-01

    The present study investigated changes in the voice quality of teachers during a working day (a) in ordinary conditions and (b) when using electrical sound amplification while teaching. Classroom speech of 5 teachers was recorded with a portable DAT recorder and a head-mounted microphone during the first and the last lesson of a hard working day, first in ordinary conditions and the following week using amplification. Long-term average spectrum and sound pressure level (SPL) analyses were made. The subjects' comments were gathered by questionnaire. Voice quality was evaluated by 2 speech trainers. With amplification, SPL was lower and the spectrum more tilted. Voice quality was evaluated to be better. The subjects reported less fatigue in the vocal mechanism. Spectral tilt decreased and SPL increased during the day. There was a tendency for perceived asthenia to decrease. No significant changes were observed in ordinary conditions. The acoustic changes seem to reflect a positive adaptation to vocal loading. Their absence may be a sign of vocal fatigue. Copyright 2003 S. Karger AG, Basel

  13. Design and development of second order MEMS sound pressure gradient sensor

    NASA Astrophysics Data System (ADS)

    Albahri, Shehab

    The design and development of a second order MEMS sound pressure gradient sensor is presented in this dissertation. Inspired by the directional hearing ability of the parasitoid fly, Ormia ochracea, a novel first order directional microphone that mimics the mechanical structure of the fly's ears and detects the sound pressure gradient has been developed. While first order directional microphones can be very beneficial in a large number of applications, there is great potential for remarkable improvements in performance through the use of second order systems. The second order directional microphone is able to provide a theoretical improvement in sound-to-noise ratio (SNR) of 9.5 dB, compared to the first-order system, whose maximum SNR improvement is 6 dB. Although the second order microphone is more sensitive to the angle of sound incidence, the nature of the design and fabrication process imposes different factors that could lead to deterioration in its performance. The first Ormia ochracea second order directional microphone was designed in 2004 and fabricated in 2006 at Binghamton University. The results of the tested parts indicate that the Ormia ochracea second order directional microphone performs mostly as an omnidirectional microphone. In this work, the previous design is reexamined and analyzed to explain the unexpected results. A more sophisticated tool implementing the finite element package ANSYS is used to examine the previous design's response. This new tool is used to study different factors that were ignored in the previous design, mainly response mismatch and fabrication uncertainty. A continuous model using Hamilton's principle is introduced to verify the results of the new method. Both models agree well and suggest a new way of optimizing the second order directional microphone through geometric manipulation. In this work we also introduce a new fabrication process flow to increase the fabrication yield. 
The newly suggested method uses the shell layered analysis method in ANSYS. The developed models simulate the fabricated chips at different stages, with the stress at each layer introduced using thermal loading. The results point to a new fabrication process flow that increases the rigidity of the composite layers and counters the deformation caused by the high stress in the thermal oxide layer.
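    The gain from going to second order can be illustrated with a generic finite-difference model of a plane wave sampled by collinear microphones (a toy sketch, not the Ormia-inspired mechanical design; frequency, speed of sound, and spacing below are arbitrary): first differences give a cos θ directivity, second differences cos² θ.

```python
import numpy as np

def directivity(order, theta, freq=1000.0, c=343.0, d=0.01):
    """Normalized magnitude response of a finite-difference pressure-gradient array.

    order 1: two microphones spaced d apart (first difference).
    order 2: three microphones (second difference).
    """
    phase = 2 * np.pi * freq / c * d * np.cos(theta)
    if order == 1:
        r = np.abs(np.exp(1j * phase) - 1.0)
    else:
        r = np.abs(np.exp(2j * phase) - 2.0 * np.exp(1j * phase) + 1.0)
    return r / r.max()

theta = np.linspace(0.0, np.pi, 181)   # 1-degree steps
d1 = directivity(1, theta)
d2 = directivity(2, theta)
# For small spacings the patterns approach cos(theta) and cos^2(theta):
print(round(float(d1[60]), 2), round(float(d2[60]), 2))  # at 60 deg: 0.5 0.25
```

    The sharper cos² θ lobe is what buys the extra SNR against diffuse noise, and it is also why the second-order device is less forgiving of response mismatch between its sensing elements.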

  14. Towards an unsupervised device for the diagnosis of childhood pneumonia in low resource settings: automatic segmentation of respiratory sounds.

    PubMed

    Sola, J; Braun, F; Muntane, E; Verjus, C; Bertschi, M; Hugon, F; Manzano, S; Benissa, M; Gervaix, A

    2016-08-01

    Pneumonia remains the worldwide leading cause of mortality in children under the age of five, with 1.4 million deaths every year. Unfortunately, in low resource settings, very limited diagnostic support aids are provided to point-of-care practitioners. The current UNICEF/WHO case management algorithm relies on the use of a chronometer to manually count breath rates on pediatric patients: there is thus a major need for more sophisticated tools to diagnose pneumonia that increase the sensitivity and specificity of breath-rate-based algorithms. These tools should be low cost, and adapted to practitioners with limited training. In this work, a novel concept of unsupervised tool for the diagnosis of childhood pneumonia is presented. The concept relies on the automated analysis of respiratory sounds as recorded by a point-of-care electronic stethoscope. By identifying the presence of auscultation sounds at different chest locations, this diagnostic tool is intended to estimate a pneumonia likelihood score. After presenting the overall architecture of an algorithm to estimate pneumonia scores, the importance of a robust unsupervised method to identify inspiratory and expiratory phases of a respiratory cycle is highlighted. Based on data from an on-going study involving pediatric pneumonia patients, a first algorithm to segment respiratory sounds is suggested. The unsupervised algorithm relies on a Mel-frequency filter bank, a two-step Gaussian Mixture Model (GMM) description of data, and a final Hidden Markov Model (HMM) interpretation of inspiratory-expiratory sequences. Finally, illustrative results on the first recruited patients are provided. The presented algorithm opens the doors to a new family of unsupervised respiratory sound analyzers that could improve future versions of case management algorithms for the diagnosis of pneumonia in low-resource settings.
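    The GMM stage of such a pipeline can be sketched in miniature: a two-component, one-dimensional EM fit to frame log-energies of a synthetic recording (the actual system uses Mel-frequency filter-bank features with an HMM on top; every name and parameter below is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_log_energy(x, frame=256, hop=128):
    """Per-frame log energy: a crude stand-in for Mel filter-bank features."""
    n = 1 + (len(x) - frame) // hop
    return np.array([np.log(np.mean(x[i*hop:i*hop+frame]**2) + 1e-12)
                     for i in range(n)])

def gmm2_em(f, iters=50):
    """EM fit of a two-component 1-D Gaussian mixture; returns means, variances, labels."""
    mu = np.array([f.min(), f.max()])
    var = np.array([f.var(), f.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each frame
        lik = w * np.exp(-(f[:, None] - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w, mu = nk / len(f), (r * f[:, None]).sum(axis=0) / nk
        var = (r * (f[:, None] - mu)**2).sum(axis=0) / nk + 1e-6
    return mu, var, r.argmax(axis=1)

# Synthetic "breathing": alternating loud and quiet 0.25 s segments at 8 kHz.
x = np.concatenate([rng.normal(0.0, a, 2000) for a in [1.0, 0.1] * 4])
feats = frame_log_energy(x)
mu, _, labels = gmm2_em(feats)
active = labels == np.argmax(mu)   # frames assigned to the louder component
print(round(float(active.mean()), 2))
```

    In the published pipeline the per-frame labels would then feed an HMM whose state sequence enforces plausible inspiratory-expiratory alternation.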

  15. Toxicity of sediment pore water in Puget Sound (Washington, USA): a review of spatial status and temporal trends

    USGS Publications Warehouse

    Long, Edward R.; Carr, R. Scott; Biedenbach, James M.; Weakland, Sandra; Partridge, Valerie; Dutch, Margaret

    2013-01-01

    Data from toxicity tests of the pore water extracted from Puget Sound sediments were compiled from surveys conducted from 1997 to 2009. Tests were performed on 664 samples collected throughout all of the eight monitoring regions in the Sound, an area encompassing 2,294.1 km2. Tests were performed with the gametes of the Pacific purple sea urchin, Strongylocentrotus purpuratus, to measure percent fertilization success as an indicator of relative sediment quality. Data were evaluated to determine the incidence, degree of response, geographic patterns, spatial extent, and temporal changes in toxicity. This is the first survey of this kind and magnitude in Puget Sound. In the initial round of surveys of the eight regions, 40 of 381 samples were toxic for an incidence of 10.5 %. Stations classified as toxic represented an estimated total of 107.1 km2, equivalent to 4.7 % of the total area. Percent sea urchin fertilization ranged from >100 % of the nontoxic, negative controls to 0 %. Toxicity was most prevalent and pervasive in the industrialized harbors and lowest in the deep basins. Conditions were intermediate in deep-water passages, urban bays, and rural bays. A second round of testing in four regions and three selected urban bays was completed 5–10 years following the first round. The incidence and spatial extent of toxicity decreased in two of the regions and two of the bays and increased in the other two regions and the third bay; however, only the latter change was statistically significant. Both the incidence and spatial extent of toxicity were lower in the Sound than in most other US estuaries and marine bays.

  16. First description of underwater acoustic diversity in three temperate ponds.

    PubMed

    Desjonquères, Camille; Rybak, Fanny; Depraetere, Marion; Gasc, Amandine; Le Viol, Isabelle; Pavoine, Sandrine; Sueur, Jérôme

    2015-01-01

    The past decade has produced an increased ecological interest in sonic environments, or soundscapes. However, despite this rise in interest and technological improvements that allow for long-term acoustic surveys in various environments, some habitats' soundscapes remain to be explored. Ponds, and more generally freshwater habitats, are one of these acoustically unexplored environments. Here we undertook the first long-term acoustic monitoring of three temperate ponds in France. By aural and visual inspection of a selection of recordings, we identified 48 different sound types, and according to the rarefaction curves we calculated, more sound types are likely present in one of the three ponds. The richness of sound types varied significantly across ponds. Surprisingly, there was no pond-to-pond daily consistency of sound type richness variation; each pond had its own daily patterns of activity. We also explored the possibility of using six acoustic diversity indices to conduct rapid biodiversity assessments in temperate ponds. We found that all indices were sensitive to the background noise as estimated through correlations with the signal-to-noise ratio (SNR). However, we determined that the AR index could be a good candidate to measure acoustic diversities using partial correlations with the SNR as a control variable. Yet, research is still required to automatically compute the SNR in order to apply this index to a large data set of recordings. The results showed that these three temperate ponds host a high level of acoustic diversity in which the soundscapes were variable not only between but also within the ponds. The sources producing this diversity of sounds and the drivers of difference in daily sound type richness variation both require further investigation. Such research would yield insights into the biodiversity and ecology of temperate ponds.
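    The evaluation of the AR index relies on partial correlation: the correlation between two variables after the variance explained by the SNR control variable is removed. A sketch with synthetic data (variable names are hypothetical):

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y with the control variable z partialled out."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Two hypothetical indices that both track background SNR look correlated,
# but the association vanishes once SNR is held constant.
rng = np.random.default_rng(1)
snr = rng.normal(size=500)
a = snr + 0.5 * rng.normal(size=500)
b = snr + 0.5 * rng.normal(size=500)
print(round(float(np.corrcoef(a, b)[0, 1]), 2))       # strong raw correlation
print(round(abs(float(partial_corr(a, b, snr))), 2))  # near zero after control
```

    This is the logic behind controlling for SNR: an index that still correlates with diversity after the partialling step is measuring something beyond background noise.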

  17. Cognitive flexibility modulates maturation and music-training-related changes in neural sound discrimination.

    PubMed

    Saarikivi, Katri; Putkinen, Vesa; Tervaniemi, Mari; Huotilainen, Minna

    2016-07-01

    Previous research has demonstrated that musicians show superior neural sound discrimination when compared to non-musicians, and that these changes emerge with accumulation of training. Our aim was to investigate whether individual differences in executive functions predict training-related changes in neural sound discrimination. We measured event-related potentials induced by sound changes coupled with tests for executive functions in musically trained and non-trained children aged 9-11 years and 13-15 years. High performance in a set-shifting task, indexing cognitive flexibility, was linked to enhanced maturation of neural sound discrimination in both musically trained and non-trained children. Specifically, well-performing musically trained children already showed large mismatch negativity (MMN) responses at a young age as well as at an older age, indicating accurate sound discrimination. In contrast, the musically trained low-performing children still showed an increase in MMN amplitude with age, suggesting that they were behind their high-performing peers in the development of sound discrimination. In the non-trained group, in turn, only the high-performing children showed evidence of an age-related increase in MMN amplitude, and the low-performing children showed a small MMN with no age-related change. These latter results suggest an advantage in MMN development also for high-performing non-trained individuals. For the P3a amplitude, there was an age-related increase only in the children who performed well in the set-shifting task, irrespective of music training, indicating enhanced attention-related processes in these children. Thus, the current study provides the first evidence that, in children, cognitive flexibility may influence age-related and training-related plasticity of neural sound discrimination. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Perception of Water-Based Masking Sounds—Long-Term Experiment in an Open-Plan Office

    PubMed Central

    Hongisto, Valtteri; Varjo, Johanna; Oliva, David; Haapakangas, Annu; Benway, Evan

    2017-01-01

    A certain level of masking sound is necessary to control the disturbance caused by speech sounds in open-plan offices. The sound is usually provided with evenly distributed loudspeakers. Pseudo-random noise is often used as a source of artificial sound masking (PRMS). A recent laboratory experiment suggested that water-based masking sound (WBMS) could be more favorable than PRMS. The purpose of our study was to determine how the employees perceived different WBMSs compared to PRMS. The experiment was conducted in an open-plan office of 77 employees who had been accustomed to work under PRMS (44 dB LAeq). The experiment consisted of five masking conditions: the original PRMS, four different WBMSs and return to the original PRMS. The exposure time of each condition was 3 weeks. The noise level was nearly equal between the conditions (43–45 dB LAeq) but the spectra and the nature of the sounds were very different. A questionnaire was completed at the end of each condition. Acoustic satisfaction was worse during the WBMSs than during the PRMS. The disturbance caused by three out of four WBMSs was larger than that of PRMS. Several attributes describing the sound quality itself were in favor of PRMS. Colleagues' speech sounds disturbed more during WBMSs. None of the WBMSs produced better subjective ratings than PRMS. Although the first WBMS was equal with the PRMS for several variables, the overall results cannot be seen to support the use of WBMSs in office workplaces. Because the experiment suffered from some methodological weaknesses, conclusions about the adequacy of WBMSs cannot yet be drawn. PMID:28769834
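    The 43-45 dB LAeq figures are equivalent continuous levels: the constant level carrying the same acoustic energy as the time-varying sound. A minimal sketch, assuming calibrated (A-weighted) pressure samples are already in hand:

```python
import numpy as np

P0 = 20e-6  # reference sound pressure, Pa

def laeq(p_samples):
    """Equivalent continuous level (dB) from (A-weighted) pressure samples in Pa."""
    return 10.0 * np.log10(np.mean(p_samples**2) / P0**2)

# A steady 1 kHz tone whose RMS pressure corresponds to 44 dB:
rms = P0 * 10.0 ** (44.0 / 20.0)
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
p = rms * np.sqrt(2.0) * np.sin(2.0 * np.pi * 1000.0 * t)
print(round(float(laeq(p)), 1))  # 44.0
```

    Matching LAeq across conditions, as done in the study, equalizes energy but not spectrum, which is exactly why the water-based and pseudo-random maskers could still be perceived very differently.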

  19. Fish sound production in the presence of harmful algal blooms in the eastern Gulf of Mexico.

    PubMed

    Wall, Carrie C; Lembke, Chad; Hu, Chuanmin; Mann, David A

    2014-01-01

    This paper presents the first known research to examine sound production by fishes during harmful algal blooms (HABs). Most fish sound production is species-specific and repetitive, enabling passive acoustic monitoring to identify the distribution and behavior of soniferous species. Autonomous gliders that collect passive acoustic data and environmental data concurrently can be used to establish the oceanographic conditions surrounding sound-producing organisms. Three passive acoustic glider missions were conducted off west-central Florida in October 2011, and September and October 2012. The deployment period for two missions was dictated by the presence of red tide events with the glider path specifically set to encounter toxic Karenia brevis blooms (a.k.a red tides). Oceanographic conditions measured by the glider were significantly correlated to the variation in sounds from six known or suspected species of fish across the three missions with depth consistently being the most significant factor. At the time and space scales of this study, there was no detectable effect of red tide on sound production. Sounds were still recorded within red tide-affected waters from species with overlapping depth ranges. These results suggest that the fishes studied here did not alter their sound production nor migrate out of red tide-affected areas. Although these results are preliminary because of the limited measurements, the data and methods presented here provide a proof of principle and could serve as protocol for future studies on the effects of algal blooms on the behavior of soniferous fishes. To fully capture the effects of episodic events, we suggest that stationary or vertically profiling acoustic recorders and environmental sampling be used as a complement to glider measurements.

  1. Full Spatial Resolution Infrared Sounding Application in the Preconvection Environment

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, G.; Lin, T.

    2013-12-01

    Advanced infrared (IR) sounders such as the Atmospheric Infrared Sounder (AIRS) and Infrared Atmospheric Sounding Interferometer (IASI) provide atmospheric temperature and moisture profiles with high vertical resolution and high accuracy in preconvection environments. The derived atmospheric stability indices such as convective available potential energy (CAPE) and lifted index (LI) from advanced IR soundings can provide critical information 1-6 h before the development of severe convective storms. Three convective storms are selected for the evaluation of applying AIRS full spatial resolution soundings and the derived products on providing warning information in the preconvection environments. In the first case, the AIRS full spatial resolution soundings revealed local extremely high atmospheric instability 3 h ahead of the convection on the leading edge of a frontal system, while the second case demonstrates that the extremely high atmospheric instability is associated with the local development of a severe thunderstorm in the following hours. The third case is a local severe storm that occurred on 7-8 August 2010 in Zhou Qu, China, which caused more than 1400 deaths and left another 300 or more people missing. The AIRS full spatial resolution LI product shows the atmospheric instability 3.5 h before the storm genesis. The CAPE and LI from AIRS full spatial resolution and operational AIRS/AMSU soundings along with Geostationary Operational Environmental Satellite (GOES) Sounder derived product image (DPI) products were analyzed and compared. Case studies show that full spatial resolution AIRS retrievals provide more useful warning information in the preconvection environments for determining favorable locations for convective initiation (CI) than do the coarser spatial resolution operational soundings and lower spectral resolution GOES Sounder retrievals. 
The retrieved soundings are also tested in a regional WRF 3D-Var data assimilation system to evaluate their potential to assist the NWP model.
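    The lifted index compares the observed 500 hPa temperature with that of a surface parcel lifted adiabatically; negative values flag instability. A deliberately simplified sketch using dry-adiabatic ascent only (a real LI follows the moist adiabat above the LCL, so this version overestimates parcel cooling; the input values are illustrative):

```python
def lifted_index(t_sfc_k, p_sfc_hpa, t_env_500_k):
    """Toy lifted index: surface parcel raised dry-adiabatically to 500 hPa.

    A real LI follows the moist adiabat above the LCL, so this dry-only
    sketch overestimates parcel cooling. Negative LI indicates instability.
    """
    kappa = 287.05 / 1004.0                                # R_d / c_p for dry air
    t_parcel_500 = t_sfc_k * (500.0 / p_sfc_hpa) ** kappa  # Poisson's equation
    return t_env_500_k - t_parcel_500                      # K difference == degC difference

# 30 degC surface parcel at 1000 hPa against a -20 degC 500 hPa environment:
li = lifted_index(303.15, 1000.0, 253.15)
print(round(li, 1))  # 4.5 (moist ascent would lower this toward instability)
```

    Operational LI products such as the AIRS retrievals described above use the full moist-adiabatic parcel path and retrieved moisture profiles rather than this dry shortcut.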

  2. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  3. Sound attenuation of fiberglass lined ventilation ducts

    NASA Astrophysics Data System (ADS)

    Albright, Jacob

    Sound attenuation is a crucial part of designing any HVAC system. Most ventilation systems are designed to be in areas occupied by one or more persons. If these systems do not adequately attenuate the sound of the supply fan, compressor, or any other source of sound, the affected area could be subject to an array of problems ranging from an annoying hum to a deafening howl. The goals of this project are to quantify the sound attenuation properties of fiberglass duct liner and to perform a regression analysis to develop equations to predict insertion loss values for both rectangular and round duct liners. The first goal was accomplished via insertion loss testing. The tests performed conformed to the ASTM E477 standard. Using the insertion loss test data, regression equations were developed to predict insertion loss values for rectangular ducts ranging in size from 12-in x 18-in to 48-in x 48-in and in length from 3 ft to 30 ft. Regression equations were also developed to predict insertion loss values for round ducts ranging in diameter from 12-in to 48-in and in length from 3 ft to 30 ft.
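    The regression step can be sketched with ordinary least squares on synthetic measurements; the model form below (lined length and perimeter-to-area ratio as predictors) is an assumption for illustration, not the equations developed in the project:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measurements": insertion loss grows with lined length and with
# the duct's perimeter-to-area ratio (assumed model form, assumed coefficients).
n = 80
length = rng.uniform(3.0, 30.0, n)        # ft
w = rng.uniform(12.0, 48.0, n)            # duct width, in
h = rng.uniform(12.0, 48.0, n)            # duct height, in
pa = 2.0 * (w + h) / (w * h)              # perimeter/area ratio, 1/in
il = 1.5 + 0.8 * length + 40.0 * pa + rng.normal(0.0, 0.3, n)

# Ordinary least squares: IL = b0 + b1*length + b2*(P/A)
X = np.column_stack([np.ones(n), length, pa])
coef, *_ = np.linalg.lstsq(X, il, rcond=None)
print(np.round(coef, 1))   # lands near the true [1.5, 0.8, 40.0]
```

    In practice such a fit is done per octave band, since liner attenuation is strongly frequency dependent.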

  4. Radio Sounding Science at High Powers

    NASA Technical Reports Server (NTRS)

    Green, J. L.; Reinisch, B. W.; Song, P.; Fung, S. F.; Benson, R. F.; Taylor, W. W. L.; Cooper, J. F.; Garcia, L.; Markus, T.; Gallagher, D. L.

    2004-01-01

    Future space missions like the Jupiter Icy Moons Orbiter (JIMO), planned to orbit Callisto, Ganymede, and Europa, can fully utilize a variable-power radio sounder instrument. Radio sounding at 1 kHz to 10 MHz at medium power levels (10 W to kW) will provide long-range magnetospheric sounding (several Jovian radii) like that first pioneered by the radio plasma imager instrument on IMAGE at low power (less than 10 W) and much shorter distances (less than 5 R(sub E)). A radio sounder orbiting a Jovian icy moon would be able to globally measure time-variable electron densities in the moon's ionosphere and the local magnetospheric environment. Near-spacecraft resonance and guided echoes respectively allow measurements of local field magnitude and local field line geometry, perturbed both by direct magnetospheric interactions and by induced components from subsurface oceans. JIMO would allow radio sounding transmissions at much higher powers (approx. 10 kW), making subsurface sounding of the Jovian icy moons possible at frequencies above the ionospheric peak plasma frequency. Subsurface variations in dielectric properties can be probed for detection of dense and solid-liquid phase boundaries associated with oceans and related structures in overlying ice crusts.

  5. Factors regulating early life history dispersal of Atlantic cod (Gadus morhua) from coastal Newfoundland.

    PubMed

    Stanley, Ryan R E; deYoung, Brad; Snelgrove, Paul V R; Gregory, Robert S

    2013-01-01

    To understand coastal dispersal dynamics of Atlantic cod (Gadus morhua), we examined spatiotemporal egg and larval abundance patterns in coastal Newfoundland. In recent decades, Smith Sound, Trinity Bay has supported the largest known overwintering spawning aggregation of Atlantic cod in the region. We estimated spawning and dispersal characteristics for the Smith Sound-Trinity Bay system by fitting ichthyoplankton abundance data to environmentally driven, simplified box models. Results show protracted spawning, with sharply increased egg production in early July, and limited dispersal from the Sound. The model for the entire spawning season indicates egg export from Smith Sound of 13% per day with a net mortality of 27% per day. Eggs and larvae are consistently found in western Trinity Bay with little advection from the system. These patterns mirror particle tracking models that suggest residence times of 10-20 days, and circulation models indicating local gyres in Trinity Bay that act in concert with upwelling dynamics to retain eggs and larvae. Our results are among the first quantitative dispersal estimates for Smith Sound, linking this spawning stock to the adjacent coastal waters. These results illustrate the biophysical interplay regulating dispersal and connectivity originating from inshore spawning in the coastal northwest Atlantic.

  6. Long-Term Impairment of Sound Processing in the Auditory Midbrain by Daily Short-Term Exposure to Moderate Noise.

    PubMed

    Cheng, Liang; Wang, Shao-Hui; Peng, Kang; Liao, Xiao-Mei

    2017-01-01

    Most people are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range for rate level function. However, these observed changes were greater in neurons with the best frequency within the noise exposure frequency range compared with those outside the frequency range. These sound processing properties also remained abnormal after a 12-week period of recovery in a quiet laboratory environment after completion of noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons.

  7. Baleen whale infrasonic sounds: Natural variability and function

    NASA Astrophysics Data System (ADS)

    Clark, Christopher W.

    2004-05-01

    Blue and fin whales (Balaenoptera musculus and B. physalus) produce very intense, long, patterned sequences of infrasonic sounds. The acoustic characteristics of these sounds suggest strong selection for signals optimized for very long-range propagation in the deep ocean, as first hypothesized by Payne and Webb in 1971. This hypothesis has been partially validated by very long-range detections using hydrophone arrays in deep water. Humpback songs recorded in deep water contain units in the 20-100 Hz range, and these relatively simple song components are detectable out to many hundreds of miles. The mid-winter peak in the occurrence of 20-Hz fin whale sounds led Watkins to hypothesize a reproductive function similar to humpback (Megaptera novaeangliae) song, and by default this function has been extended to blue whale songs. More recent evidence shows that blue and fin whales produce infrasonic calls in high latitudes during the feeding season, and that singing is associated with areas of high productivity where females congregate to feed. Acoustic sampling over broad spatial and temporal scales for baleen species is revealing higher geographic and seasonal variability in the low-frequency vocal behaviors than previously reported, suggesting that present explanations for baleen whale sounds are too simplistic.

  8. A sound quality model for objective synthesis evaluation of vehicle interior noise based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Wang, Y. S.; Shen, G. Q.; Xing, Y. F.

    2014-03-01

    Based on the artificial neural network (ANN) technique, an objective sound quality evaluation (SQE) model for the synthetic annoyance of vehicle interior noises is presented in this paper. Following the GB/T 18697 standard, the interior noises under different working conditions of a sample vehicle are first measured and saved in a noise database. Mathematical models for the loudness, sharpness and roughness of the measured vehicle noises are established and implemented in Matlab. Sound qualities of the vehicle interior noises are also estimated by jury tests following the anchored semantic differential (ASD) procedure. Using the objective and subjective evaluation results, an ANN-based model for synthetical annoyance evaluation of vehicle noises, called ANN-SAE, is then developed. Finally, the ANN-SAE model is validated by verification tests with the leave-one-out algorithm. The results suggest that the proposed ANN-SAE model is accurate and effective and can be directly used to estimate the sound quality of vehicle interior noises, which is very helpful for vehicle acoustic design and improvement. The ANN-SAE approach may be extended to other sound-related fields for product quality evaluation in SQE engineering.
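A minimal sketch of the ANN-SAE idea, under stated assumptions: a one-hidden-layer network trained by gradient descent to map (loudness, sharpness, roughness) triples to an annoyance score. The data and the linear target rule are synthetic; the real model is trained on jury-test (ASD) ratings, and its architecture is not specified in the abstract.

```python
import numpy as np

# One-hidden-layer regression network: psychoacoustic metrics -> annoyance.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (64, 3))            # normalized metric triples (toy)
y = 0.6*X[:, 0] + 0.3*X[:, 1] + 0.1*X[:, 2]   # assumed annoyance rule (toy)

W1 = rng.standard_normal((3, 8)) * 0.5        # input -> hidden weights
b1 = np.zeros(8)
w2 = rng.standard_normal(8) * 0.5             # hidden -> output weights
b2 = 0.0
lr = 0.1

losses = []
for _ in range(500):                          # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    pred = h @ w2 + b2
    err = pred - y
    losses.append(np.mean(err**2))
    # backpropagation of the mean-squared-error gradient
    gw2 = h.T @ err / len(X)
    gb2 = err.mean()
    gh = np.outer(err, w2) * (1 - h**2)       # tanh derivative
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2
```

The leave-one-out validation mentioned above would wrap this training loop, holding out one noise sample per fold and comparing the prediction against its jury rating.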

  9. A theoretical study of passive control of duct noise using panels of varying compliance.

    PubMed

    Huang, L

    2001-06-01

    It is theoretically demonstrated that, in a duct, a substantial amount of sound energy can be transferred to flexural waves on a finite wall panel when the upstream portion of the panel is made to couple strongly with sound. The flexural wave then loses its energy either through radiating reflected sound waves or by internal friction. The effectiveness of the energy transfer and damping is greatly enhanced if the panel has a gradually decreasing in vacuo wave speed, which, in this study, is achieved by using a tapered membrane under tension. A high noise attenuation rate is possible with the usual viscoelastic materials such as rubber. The transmission loss has a broadband spectrum, and it offers an alternative to conventional duct lining where a smooth air passage is desired and nonacoustical considerations, such as chemical contamination or cost of operation and maintenance, are important. Another advantage of the tapered panel is that, at very low frequencies, typically 5% of the first cut-on frequency of the duct, sound reflection occurs over the entire panel length. This supplements the inevitable drop in sound absorption coefficient, and a high transmission loss may still be obtained at very low frequencies.

  10. Universal formula for the holographic speed of sound

    NASA Astrophysics Data System (ADS)

    Anabalón, Andrés; Andrade, Tomás; Astefanesei, Dumitru; Mann, Robert

    2018-06-01

    We consider planar hairy black holes in five dimensions with a real scalar field in the Breitenlohner-Freedman window and derive a universal formula for the holographic speed of sound for any mixed boundary conditions of the scalar field. As an example, we numerically construct the most general class of planar black holes coupled to a single scalar field in the consistent truncation of type IIB supergravity that preserves the SO (3) × SO (3) R-symmetry group of the gauge theory. For this particular family of solutions, we find that the speed of sound exceeds the conformal value. From a phenomenological point of view, the fact that the conformal bound can be violated by choosing the right mixed boundary conditions is relevant for the existence of neutron stars with a certain mass-size relationship for which a large value of the speed of sound codifies a stiff equation of state. Along the way, we also shed light on a puzzle regarding the appearance of the scalar charges in the first law. Finally, we generalize the formula for the speed of sound to arbitrary dimensional scalar-metric theories whose parameters lie within the Breitenlohner-Freedman window.
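For orientation, the quantity under discussion is the standard thermodynamic speed of sound of the dual plasma; in a four-dimensional conformal theory, the equation of state fixes the "conformal value" the abstract refers to:

```latex
c_s^2 = \frac{\partial p}{\partial \epsilon},
\qquad
p_{\mathrm{CFT}} = \frac{\epsilon}{3}
\;\Longrightarrow\;
c_{s,\mathrm{conf}}^2 = \frac{1}{3}.
```

The "conformal bound" is the expectation that $c_s^2 \le 1/3$; the result above shows mixed boundary conditions for the scalar can push $c_s^2$ past this value.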

  11. Analysis of sound propagation in ducts using the wave envelope concept

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1974-01-01

    A finite difference formulation is presented for sound propagation in a rectangular two-dimensional duct without steady flow for plane wave input. Before the difference equations are formulated, the governing Helmholtz equation is first transformed to a form whose solution does not oscillate along the length of the duct. This transformation reduces the required number of grid points by an order of magnitude, and the number of grid points becomes independent of the sound frequency. Physically, the transformed pressure represents the amplitude of the conventional sound wave. Example solutions are presented for sound propagation in a one-dimensional straight hard-wall duct and in a two-dimensional straight soft-wall duct without steady flow. The numerical solutions show evidence of the existence along the duct wall of a developing acoustic pressure diffusion boundary layer which is similar in nature to the conventional viscous flow boundary layer. In order to better illustrate this concept, the wave equation and boundary conditions are written such that the frequency no longer appears explicitly in them. The frequency effects in duct propagation can be visualized solely as an expansion and stretching of the suppressor duct.
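The benefit of the transformation can be sketched numerically: dividing a damped plane wave by its oscillatory carrier leaves a slowly varying envelope, so the per-grid-step phase change, which sets the required grid resolution, collapses. The wavenumber and damping values below are arbitrary illustrative choices, not parameters from the report.

```python
import numpy as np

# Wave-envelope idea: for a damped plane wave
#   p(x) = exp(-alpha*x) * exp(-1j*k*x),
# multiplying by the carrier exp(+1j*k*x) leaves a smooth envelope A(x)
# that needs far fewer grid points to resolve than p itself.
k, alpha = 40.0, 0.5                           # hypothetical duct parameters
x = np.linspace(0.0, 1.0, 2001)
p = np.exp(-alpha * x) * np.exp(-1j * k * x)   # oscillatory acoustic pressure
A = p * np.exp(1j * k * x)                     # transformed (envelope) variable

# Maximum per-step phase change: large for p, negligible for the envelope A.
dphase_p = np.abs(np.diff(np.unwrap(np.angle(p)))).max()
dphase_A = np.abs(np.diff(np.unwrap(np.angle(A)))).max()
```

Because the envelope no longer oscillates at the acoustic wavelength, the grid spacing can be set by the liner geometry rather than the sound frequency, which is the order-of-magnitude reduction in grid points described above.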

  12. Development of the software tool for generation and visualization of the finite element head model with bone conduction sounds

    NASA Astrophysics Data System (ADS)

    Nikolić, Dalibor; Milošević, Žarko; Saveljić, Igor; Filipović, Nenad

    2015-12-01

    Vibration of the skull causes a hearing sensation. We call it Bone Conduction (BC) sound. There have been several investigations of the transmission properties of bone-conducted sound. The aim of this study was to develop a software tool for easy generation of a finite element (FE) model of the human head with different materials, based on human head anatomy, and to calculate sound conduction through the head. The developed software tool generates a model in a few steps. The first step is segmentation of CT medical images (DICOM) to generate surface mesh files (STL). Each STL file represents a different layer of the human head with different material properties (brain, CSF, different layers of the skull bone, skin, etc.). The next steps are to generate a tetrahedral mesh from the obtained STL files, to define the FE model boundary conditions, and to solve the FE equations. This tool uses the PAK solver, open source software implemented in the SIFEM FP7 project, for calculation of the head vibration. The purpose of this tool is to show the impact of bone-conducted sound on the hearing system and to assess how well the obtained results match experimental measurements.

  13. Nonspeech oral motor treatment issues related to children with developmental speech sound disorders.

    PubMed

    Ruscello, Dennis M

    2008-07-01

    This article examines nonspeech oral motor treatments (NSOMTs) in the population of clients with developmental speech sound disorders. NSOMTs are a collection of nonspeech methods and procedures that claim to influence tongue, lip, and jaw resting postures; increase strength; improve muscle tone; facilitate range of motion; and develop muscle control. In the case of developmental speech sound disorders, NSOMTs are employed before or simultaneously with actual speech production treatment. First, NSOMTs are defined for the reader, and there is a discussion of NSOMTs under the categories of active muscle exercise, passive muscle exercise, and sensory stimulation. Second, different theories underlying NSOMTs along with the implications of the theories are discussed. Finally, a review of pertinent investigations is presented. The application of NSOMTs is questionable due to a number of reservations that include (a) the implied cause of developmental speech sound disorders, (b) neurophysiologic differences between the limbs and oral musculature, (c) the development of new theories of movement and movement control, and (d) the paucity of research literature concerning NSOMTs. There is no substantive evidence to support NSOMTs as interventions for children with developmental speech sound disorders.

  14. Long-Term Impairment of Sound Processing in the Auditory Midbrain by Daily Short-Term Exposure to Moderate Noise

    PubMed Central

    Cheng, Liang; Wang, Shao-Hui; Peng, Kang

    2017-01-01

    Most people are exposed daily to environmental noise at moderate levels for short durations. The aim of the present study was to determine the effects of daily short-term exposure to moderate noise on sound level processing in the auditory midbrain. Sound processing properties of auditory midbrain neurons were recorded in anesthetized mice exposed to moderate noise (80 dB SPL, 2 h/d for 6 weeks) and were compared with those from age-matched controls. Neurons in exposed mice had a higher minimum threshold and maximum response intensity, a longer first spike latency, and a higher slope and narrower dynamic range for rate level function. However, these observed changes were greater in neurons with the best frequency within the noise exposure frequency range compared with those outside the frequency range. These sound processing properties also remained abnormal after a 12-week period of recovery in a quiet laboratory environment after completion of noise exposure. In conclusion, even daily short-term exposure to moderate noise can cause long-term impairment of sound level processing in a frequency-specific manner in auditory midbrain neurons. PMID:28589040

  15. Snapshot recordings provide a first description of the acoustic signatures of deeper habitats adjacent to coral reefs of Moorea.

    PubMed

    Bertucci, Frédéric; Parmentier, Eric; Berthe, Cécile; Besson, Marc; Hawkins, Anthony D; Aubin, Thierry; Lecchini, David

    2017-01-01

    Acoustic recording has been recognized as a valuable tool for non-intrusive monitoring of the marine environment, complementing traditional visual surveys. Acoustic surveys conducted on coral ecosystems have so far been restricted to barrier reefs and to shallow depths (10-30 m). Since they may provide refuge for coral reef organisms, monitoring outer reef slopes and describing the soundscapes of deeper environments could provide insights into the characteristics of different biotopes of coral ecosystems. In this study, the acoustic features of four different habitats, with different topographies and substrates, located at depths from 10 to 100 m, were recorded during daytime on the outer reef slope of the north coast of Moorea Island (French Polynesia). Barrier reefs appeared to be the noisiest habitats, whereas the average sound levels at the other habitats decreased with their distance from the reef and with increasing depth. However, sound levels were higher than predicted by propagation models, suggesting that these habitats possess their own sound sources. While reef sounds are known to attract marine larvae, sounds from deeper habitats may then also have a non-negligible attractive potential, coming into play before the reef itself.

  16. LANGUAGE DEVELOPMENT. The developmental dynamics of marmoset monkey vocal production.

    PubMed

    Takahashi, D Y; Fenley, A R; Teramoto, Y; Narayanan, D Z; Borjon, J I; Holmes, P; Ghazanfar, A A

    2015-08-14

    Human vocal development occurs through two parallel interactive processes that transform infant cries into more mature vocalizations, such as cooing sounds and babbling. First, natural categories of sounds change as the vocal apparatus matures. Second, parental vocal feedback sensitizes infants to certain features of those sounds, and the sounds are modified accordingly. Paradoxically, our closest living relatives, nonhuman primates, are thought to undergo few or no production-related acoustic changes during development, and any such changes are thought to be impervious to social feedback. Using early and dense sampling, quantitative tracking of acoustic changes, and biomechanical modeling, we showed that vocalizations in infant marmoset monkeys undergo dramatic changes that cannot be solely attributed to simple consequences of growth. Using parental interaction experiments, we found that contingent parental feedback influences the rate of vocal development. These findings overturn decades-old ideas about primate vocalizations and show that marmoset monkeys are a compelling model system for early vocal development in humans. Copyright © 2015, American Association for the Advancement of Science.

  17. Calcutta metro: is it safe from noise pollution hazards?

    PubMed

    Bhattacharya, S K; Bandyopadhyay, P; Kashyap, S K

    1996-01-01

    A modest assessment of noise was made in the Calcutta Metro, India's first ever underground tube rail system, to examine whether the range of noise levels present could endanger the hearing of Metro workers. A sound level meter, an octave band analyzer, and a sound level calibrator were used to measure sound pressure levels on the platforms of three stations: Esplanade, Kalighat and Tollygunge. The results indicated that the averaged A-weighted SPLs at these stations were in the range of 84-87 dBA. In the coaches of the moving train, Leq values ranged from 92 to 99 dBA and LNP from 105 to 117 dBA, all exceeding the 55 dBA safe limit for daytime noise exposure and the 85 dBA ACGIH occupational limit. The SPLs at 4,000 Hz in the coaches also exceeded the safe exposure limit of 79 dB. The findings thus posed a potential threat to the workers.

  18. Spellbinding and crooning: sound amplification, radio, and political rhetoric in international comparative perspective, 1900-1945.

    PubMed

    Wijfjes, Huub

    2014-01-01

    This article examines, in an interdisciplinary way, the relationship between sound technology and political culture at the beginning of the twentieth century. It sketches the different strategies that politicians--Franklin D. Roosevelt, Adolf Hitler, Winston Churchill, and Dutch prime minister Hendrikus Colijn--found for the challenges that sound amplification and radio created for their rhetoric and presentation. Taking their different political styles into account, the article demonstrates that the interconnected technologies of sound amplification and radio forced a transition from a spellbinding style based on atmosphere and pathos in a virtual environment to "political crooning" that created artificial intimacy in despatialized simultaneity. Roosevelt and Colijn provided the best examples of this political crooning, while Churchill and Hitler encountered problems in this respect. Churchill's radio successes profited from the special circumstances during the first period of World War II. Hitler's speeches were integrated into a radio regime trying to shape, with dictatorial powers, a National Socialist community of listeners.

  19. Description and Flight Performance Results of the WASP Sounding Rocket

    NASA Technical Reports Server (NTRS)

    De Pauw, J. F.; Steffens, L. E.; Yuska, J. A.

    1968-01-01

    A general description of the design and construction of the WASP sounding rocket and of the performance of its first flight are presented. The purpose of the flight test was to place the 862-pound (391-kg) spacecraft above 250 000 feet (76.25 km) on a free-fall trajectory for at least 6 minutes in order to study the effect of "weightlessness" on a slosh dynamics experiment. The WASP sounding rocket fulfilled its intended mission requirements. The sounding rocket approximately followed a nominal trajectory. The payload was in free fall above 250 000 feet (76.25 km) for 6.5 minutes and reached an apogee altitude of 134 nautical miles (248 km). Flight data including velocity, altitude, acceleration, roll rate, and angle of attack are discussed and compared to nominal performance calculations. The effect of residual burning of the second stage motor is analyzed. The flight vibration environment is presented and analyzed, including root mean square (RMS) and power spectral density analysis.

  20. Radiation mechanism for the aerodynamic sound of gears - An explanation for the radiation process by air flow observation

    NASA Astrophysics Data System (ADS)

    Houjoh, Haruo

    1992-12-01

    One specific feature of the aerodynamic sound produced at the face end region is that the radiation is weakened as much by filling the root spaces as by shortening the center distance. However, one would expect such actions to make the air flow faster and consequently the sound louder. This paper attempts to explain this feature. First, the air flow induced by the pumping action of the gear pair was analyzed, regarding the series of root spaces as volume-varying cavities with channels to adjacent cavities as well as exits/inlets at the face ends. The numerical analysis was verified by hot-wire anemometer measurements. Next, from the obtained flow response, the sound source was estimated to be a combination of symmetrically distributed simple sources. Taking the effect of either the center distance or root filling into consideration, it is shown that the simplified model explains this feature rationally.

  1. Quantifying the influence of flow asymmetries on glottal sound sources in speech

    NASA Astrophysics Data System (ADS)

    Erath, Byron; Plesniak, Michael

    2008-11-01

    Human speech is made possible by the air flow interaction with the vocal folds. During phonation, asymmetries in the glottal flow field may arise from flow phenomena (e.g. the Coanda effect) as well as from pathological vocal fold motion (e.g. unilateral paralysis). In this study, the effects of flow asymmetries on glottal sound sources were investigated. Dynamically-programmable 7.5 times life-size vocal fold models with 2 degrees-of-freedom (linear and rotational) were constructed to provide a first-order approximation of vocal fold motion. Important parameters (Reynolds, Strouhal, and Euler numbers) were scaled to physiological values. Normal and abnormal vocal fold motions were synthesized, and the velocity field and instantaneous transglottal pressure drop were measured. Variability in the glottal jet trajectory necessitated sorting of the data according to the resulting flow configuration. The dipole sound source is related to the transglottal pressure drop via acoustic analogies. Variations in the transglottal pressure drop (and subsequently the dipole sound source) arising from flow asymmetries are discussed.

  2. Measurement of heart sounds with EMFi transducer.

    PubMed

    Kärki, Satu; Kääriäinen, Minna; Lekkala, Jukka

    2007-01-01

    A measurement system for heart sounds was implemented by using ElectroMechanical Film (EMFi). Heart sounds are produced by the vibrations of the cardiac structure. An EMFi transducer attached to the skin of the chest wall converts these mechanical vibrations into an electrical signal. The signal is then amplified and transmitted to a computer. The data is analyzed with Matlab software. The low-frequency components of the measured signal (respiration and pulsation of the heart) are filtered out, as is the 50 Hz noise. The power spectral density (PSD) is also computed. In test measurements, the signal was measured both during respiration and while holding the breath. From the filtered signal, the first (S1) and the second (S2) heart sound can be clearly seen in both cases. In addition, from the raw data signals the respiration frequency and the heart rate can be determined. In future applications, the EMFi material makes it possible to implement a plaster-like transducer for measuring vital signs.
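The described filtering can be sketched with a simple FFT-masking approach (an assumption, since the abstract does not specify the filter design), using a synthetic signal in place of EMFi data.

```python
import numpy as np

# Remove low-frequency respiration/motion components (< 20 Hz) and 50 Hz
# mains interference from a heart-sound-like signal via FFT masking.
fs = 1000.0                                   # sample rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
heart = np.sin(2*np.pi*60*t) * (np.sin(2*np.pi*1.2*t) > 0.95)  # toy S1/S2 bursts
resp = 0.8 * np.sin(2*np.pi*0.3*t)            # respiration drift
mains = 0.5 * np.sin(2*np.pi*50*t)            # 50 Hz interference
sig = heart + resp + mains

F = np.fft.rfft(sig)
f = np.fft.rfftfreq(len(sig), 1.0 / fs)
mask = (f >= 20.0) & ~((f > 48.0) & (f < 52.0))  # high-pass + 50 Hz notch
clean = np.fft.irfft(F * mask, n=len(sig))
```

The burst envelope surviving in `clean` corresponds to the S1/S2 sounds; the PSD mentioned above would be estimated from the same spectrum (e.g. by Welch averaging).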

  3. Soundscape manipulation enhances larval recruitment of a reef-building mollusk

    PubMed Central

    Bohnenstiehl, DelWayne R.; Eggleston, David B.

    2015-01-01

    Marine seafloor ecosystems, and efforts to restore them, depend critically on the influx and settlement of larvae following their pelagic dispersal period. Larval dispersal and settlement patterns are driven by a combination of physical oceanography and behavioral responses of larvae to a suite of sensory cues both in the water column and at settlement sites. There is growing evidence that the biological and physical sounds associated with adult habitats (i.e., the “soundscape”) influence larval settlement and habitat selection; however, the significance of acoustic cues is rarely tested. Here we show in a field experiment that the free-swimming larvae of an estuarine invertebrate, the eastern oyster, respond to the addition of replayed habitat-related sounds. Oyster larval recruitment was significantly higher on larval collectors exposed to oyster reef sounds compared to no-sound controls. These results provide the first field evidence that soundscape cues may attract the larval settlers of a reef-building estuarine invertebrate. PMID:26056624

  4. The social vocalization repertoire of east Australian migrating humpback whales (Megaptera novaeangliae).

    PubMed

    Dunlop, Rebecca A; Noad, Michael J; Cato, Douglas H; Stokes, Dale

    2007-11-01

    Although the songs of humpback whales have been extensively studied, other vocalizations and percussive sounds, referred to as "social sounds," have received little attention. This study presents the social vocalization repertoire of migrating east Australian humpback whales from a sample of 660 sounds recorded from 61 groups of varying composition, over three years. The social vocalization repertoire of humpback whales was much larger than previously described with a total of 34 separate call types classified aurally and by spectrographic analysis as well as statistically. Of these, 21 call types were the same as units of the song current at the time of recording but used individually instead of as part of the song sequence, while the other 13 calls were stable over the three years of the study and were not part of the song. This study provides a catalog of sounds that can be used as a basis for future studies. It is an essential first step in determining the function, contextual use and cultural transmission of humpback social vocalizations.

  5. Transport processes and sound velocity in vibrationally non-equilibrium gas of anharmonic oscillators

    NASA Astrophysics Data System (ADS)

    Rydalevskaya, Maria A.; Voroshilova, Yulia N.

    2018-05-01

    Vibrationally non-equilibrium flows of chemically homogeneous diatomic gases are considered under the conditions that the distribution of the molecules over vibrational levels differs significantly from the Boltzmann distribution. In such flows, molecular collisions can be divided into two groups: the first group corresponds to "rapid" microscopic processes, whereas the second corresponds to "slow" microscopic processes (their characteristic times are comparable to or larger than those of the variation of the gasdynamic parameters). The collisions of the first group form quasi-stationary vibrationally non-equilibrium distribution functions. The model kinetic equations are used to study the transport processes under these conditions. In these equations, the BGK-type approximation is used to model only the collision operators of the first group. This allows us to simplify the derivation of the transport fluxes and the calculation of the kinetic coefficients. Special attention is given to the connection between the formulae for the bulk viscosity coefficient and for the square of the sound velocity.

  6. Development of the Astrobee F sounding rocket system.

    NASA Technical Reports Server (NTRS)

    Jenkins, R. B.; Taylor, J. P.; Honecker, H. J., Jr.

    1973-01-01

    The development of the Astrobee F sounding rocket vehicle through the first flight test at NASA-Wallops Station is described. Design and development of a 15 in. diameter, dual thrust, solid propellant motor demonstrating several new technology features provided the basis for the flight vehicle. The 'F' motor test program described demonstrated the following advanced propulsion technology: tandem dual grain configuration, low burning rate HTPB case-bonded propellant, and molded plastic nozzle. The resultant motor, integrated into a flight vehicle, was successfully flown with extensive diagnostic instrumentation.

  7. Hearing on the Reauthorization of the Higher Education Act of 1965; Sallie Mae--Safety and Soundness. Hearing before the Subcommittee on Postsecondary Education of the Committee on Education and Labor. House of Representatives, One Hundred Second Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Subcommittee on Postsecondary Education.

    As part of a series of hearings on the reauthorization of the Higher Education Act of 1965, testimony was heard on the safety and soundness of the Student Loan Marketing Association (Sallie Mae). Witnesses discussed many issues surrounding financial oversight of federal agencies and financial risk to the taxpayer through the potential failure of…

  8. Noise-induced hearing impairment and handicap

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A permanent, noise-induced hearing loss has a doubly harmful effect on speech communications. First, the elevation in the threshold of hearing means that many speech sounds are too weak to be heard, and second, very intense speech sounds may appear to be distorted. The whole question of the impact of noise-induced hearing loss upon the impairments and handicaps experienced by people with such hearing losses was somewhat controversial, partly because of the economic aspects of related practical noise control and workmen's compensation.

  9. Ultrasonic sensing for noninvasive characterization of oil-water-gas flow in a pipe

    NASA Astrophysics Data System (ADS)

    Chillara, Vamshi Krishna; Sturtevant, Blake T.; Pantea, Cristian; Sinha, Dipen N.

    2017-02-01

    A technique for noninvasive ultrasonic characterization of multiphase crude oil-water-gas flow is discussed. The proposed method relies on determining the sound speed in the mixture. First, important issues associated with making real-time noninvasive measurements are discussed. Then, the signal processing approach adopted to determine the sound speed in the multiphase mixture is presented. Finally, results from controlled experiments on crude oil-water mixtures, in both the presence and absence of gas, are presented.
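
    The quantity the technique inverts for is the mixture sound speed, which is extremely sensitive to gas content. As a hedged illustration only, the sketch below evaluates Wood's equation, a standard homogeneous-mixture model; the paper's actual signal-processing inversion is not described here, and all phase properties are assumed handbook values, not data from the study.

```python
import math

def wood_sound_speed(phases):
    """Sound speed in a homogeneous multiphase mixture via Wood's equation.

    `phases` is a list of (volume_fraction, density_kg_m3, speed_m_s)
    tuples; the volume fractions should sum to 1.
    """
    # Mixture density is the volume-weighted sum of phase densities.
    rho_mix = sum(phi * rho for phi, rho, _ in phases)
    # Effective compressibility is the volume-weighted sum of the
    # phase compressibilities 1/(rho_i * c_i^2).
    kappa_mix = sum(phi / (rho * c * c) for phi, rho, c in phases)
    return 1.0 / math.sqrt(rho_mix * kappa_mix)

# Assumed handbook values: 70% water / 30% crude oil, then the same
# liquid mix with 1% gas by volume.
c_liquid = wood_sound_speed([(0.70, 998.0, 1482.0), (0.30, 870.0, 1300.0)])
c_gassy = wood_sound_speed([(0.69, 998.0, 1482.0), (0.30, 870.0, 1300.0),
                            (0.01, 1.2, 343.0)])
```

    Under this model even 1% gas by volume collapses the mixture sound speed by roughly an order of magnitude, which is consistent with the abstract treating the presence and absence of gas as distinct experimental regimes.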

  10. Cosmic X-ray physics

    NASA Technical Reports Server (NTRS)

    Mccammon, D.; Cox, D. P.; Kraushaar, W. L.; Sanders, W. T.

    1987-01-01

    The soft X-ray sky survey data are combined with the results from the UXT sounding rocket payload. Very strong constraints can then be placed on models of the origin of the soft diffuse background. Additional observational constraints force more complicated and realistic models. Significant progress was made in the extraction of more detailed spectral information from the UXT data set. Work was begun on a second generation proportional counter response model. The first flight of the sounding rocket will have a collimator to study the diffuse background.

  11. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory.

    PubMed

    Zhang, Bin; Song, Wen-Ai; Wei, Yue-Juan; Zhang, Dong-Song; Liu, Wen-Yi

    2017-06-15

    By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape of ultrasound beams and their propagation characteristics in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory. Based on this model, and exploiting the different ultrasonic impedances of gas and liquid media, a method for detecting the liquid level from outside a sealed container is proposed. The proposed method is then evaluated through two groups of experiments. In the first group, three liquid media with different ultrasonic impedances are used as detected objects; the echo sound pressure is calculated with the proposed model for four different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the liquids' different ultrasonic impedances on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are used to measure the liquid level for the same four wall thicknesses. Combining these results with the sound field characteristics, the influence of transducer size on the pressure calculation and the detection resolution is discussed. Finally, the experimental results indicate that the measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements.
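
    The detection principle rests on the impedance contrast between gas and liquid behind the container wall. A minimal sketch of that contrast, using the standard plane-wave pressure reflection coefficient at normal incidence and assumed handbook impedance values (the paper's full Kirchhoff-based echo-pressure model is far more involved):

```python
def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient for a plane wave in medium 1
    hitting an interface with medium 2 at normal incidence."""
    return (z2 - z1) / (z2 + z1)

# Assumed handbook characteristic impedances (Pa*s/m):
Z_STEEL = 45.0e6
Z_WATER = 1.48e6
Z_AIR = 415.0

# Above the liquid level the wall is backed by gas; below it, by liquid.
r_gas = reflection_coefficient(Z_STEEL, Z_AIR)
r_liquid = reflection_coefficient(Z_STEEL, Z_WATER)
```

    The per-bounce magnitude difference (|r| near 1.000 for a gas-backed wall versus roughly 0.94 for a water-backed one) compounds over the multiple reverberations inside the wall, producing the measurable echo-pressure change the method exploits.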

  12. A Study on the Model of Detecting the Liquid Level of Sealed Containers Based on Kirchhoff Approximation Theory

    PubMed Central

    Zhang, Bin; Song, Wen-Ai; Wei, Yue-Juan; Zhang, Dong-Song; Liu, Wen-Yi

    2017-01-01

    By simulating the sound field of a round piston transducer with the Kirchhoff integral theorem and analyzing the shape of ultrasound beams and their propagation characteristics in a metal container wall, this study presents a model for calculating the echo sound pressure using the Kirchhoff paraxial approximation theory. Based on this model, and exploiting the different ultrasonic impedances of gas and liquid media, a method for detecting the liquid level from outside a sealed container is proposed. The proposed method is then evaluated through two groups of experiments. In the first group, three liquid media with different ultrasonic impedances are used as detected objects; the echo sound pressure is calculated with the proposed model for four different wall thicknesses. The changing characteristics of the echo sound pressure over the entire detection process are analyzed, and the effects of the liquids' different ultrasonic impedances on the echo sound pressure are compared. In the second group, taking water as an example, two transducers with different radii are used to measure the liquid level for the same four wall thicknesses. Combining these results with the sound field characteristics, the influence of transducer size on the pressure calculation and the detection resolution is discussed. Finally, the experimental results indicate that the measurement uncertainty is better than ±5 mm, which meets industrial inspection requirements. PMID:28617326

  13. The vibroacoustic response and sound absorption performance of multilayer, microperforated rib-stiffened plates

    NASA Astrophysics Data System (ADS)

    Zhou, Haian; Wang, Xiaoming; Wu, Huayong; Meng, Jianbing

    2017-10-01

    The vibroacoustic response and sound absorption performance of a structure composed of multilayer plates and one rigid back wall are theoretically analyzed. In this structure, all plates are two-dimensional, microperforated, and periodically rib-stiffened. To investigate such a structural system, semianalytical models of one-layer and multilayer plate structures considering the vibration effects are first developed. Then approaches of the space harmonic method and Fourier transforms are applied to a one-layer plate, and finally the cascade connection method is utilized for a multilayer plate structure. Based on fundamental acoustic formulas, the vibroacoustic responses of microperforated stiffened plates are expressed as functions of a series of harmonic amplitudes of plate displacement, which are then solved by employing the numerical truncation method. Applying the inverse Fourier transform, wave propagation, and linear addition properties, the equations of the sound pressures and absorption coefficients for the one-layer and multilayer stiffened plates in physical space are finally derived. Using numerical examples, the effects of the most important physical parameters—for example, the perforation ratio of the plate, sound incident angles, and periodical rib spacing—on sound absorption performance are examined. Numerical results indicate that the sound absorption performance of the studied structure is effectively enhanced by the flexural vibration of the plate in water. Finally, the proposed approaches are validated by comparing the results of stiffened plates of the present work with solutions from previous studies.

  14. Experimental Simulation of Active Control With On-line System Identification on Sound Transmission Through an Elastic Plate

    NASA Technical Reports Server (NTRS)

    1998-01-01

    An adaptive control algorithm with on-line system identification capability has been developed. One of the great advantages of this scheme is that no additional system identification mechanism, such as an uncorrelated random signal generator serving as an identification source, is required. A time-varying plate-cavity system is used to demonstrate the control performance of this algorithm. The time-varying system consists of a stainless-steel plate bolted down over a rigid cavity opening, where the cavity depth is changed with respect to time. For a given externally located harmonic sound excitation, system identification and control are executed simultaneously to minimize the transmitted sound in the cavity. The control performance of the algorithm is examined for two cases. In the first, with all the water drained, the external disturbance frequency is swept at 1 Hz/s. The result shows excellent frequency-tracking capability, with cavity internal sound suppression of 40 dB. In the second case, the water level starts at empty and is raised to 3/20 full in 60 seconds while the external sound excitation is held at a fixed frequency. Hence, the cavity resonant frequency decreases and passes through the external excitation frequency. The algorithm shows 40 dB of transmitted noise suppression without compromising its system identification tracking capability.

  15. Seismic and Biological Sources of Ambient Ocean Sound

    NASA Astrophysics Data System (ADS)

    Freeman, Simon Eric

    Sound is the most efficient form of radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying that more seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take a census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas. 
Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.

  16. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds.

    PubMed

    De Angelis, Vittoria; De Martino, Federico; Moerel, Michelle; Santoro, Roberta; Hausfeld, Lars; Formisano, Elia

    2017-11-13

    Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. 
In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept. Copyright © 2017 Elsevier Inc. All rights reserved.
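
    The study estimates pitch height from the fundamental frequency using the pitch-extraction algorithm of de Cheveigné and Kawahara (2002), i.e. YIN. As a toy stand-in only, the sketch below estimates the f0 of a synthetic harmonic complex by autocorrelation peak picking; YIN itself uses a cumulative-mean-normalized difference function plus several refinements not shown here.

```python
import math

def estimate_f0(signal, fs, fmin=80.0, fmax=500.0):
    """Toy fundamental-frequency estimate: pick the autocorrelation
    peak over lags corresponding to the [fmin, fmax] search range.
    A crude stand-in for the far more robust YIN algorithm."""
    lag_lo, lag_hi = int(fs / fmax), int(fs / fmin)
    n = len(signal)
    best_lag, best_r = lag_lo, float("-inf")
    for lag in range(lag_lo, lag_hi + 1):
        # Unnormalized autocorrelation at this lag.
        r = sum(signal[i] * signal[i - lag] for i in range(lag, n))
        if r > best_r:
            best_r, best_lag = r, lag
    return fs / best_lag

# Synthetic harmonic complex with f0 = 200 Hz, sampled at 8 kHz.
fs = 8000
tone = [math.sin(2 * math.pi * 200 * t / fs)
        + 0.5 * math.sin(2 * math.pi * 400 * t / fs)
        for t in range(2048)]
f0 = estimate_f0(tone, fs)
```

    For clean periodic input the autocorrelation peaks at the fundamental period (here 40 samples), recovering f0; real-life sounds, as the abstract notes, demand the more robust machinery of the full algorithm.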

  17. Reprint of: Initial uncertainty impacts statistical learning in sound sequence processing.

    PubMed

    Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel

    2018-05-18

    This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models are inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and a longer-term pattern. The local pattern was defined by a regularly repeating pure tone occasionally interrupted by a rare deviating tone (p=0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focussing attention on a movie with subtitles. Auditory-evoked potentials revealed long-lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. 
The studies contribute to a body of work that reveals that probabilistic information is not faithfully represented in these early evoked potentials and instead exposes that predictability (or conversely uncertainty) may trigger value-based learning modulations even in task-irrelevant incidental learning. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. Experimental validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, T. W.; Wu, X. F.

    1994-01-01

    This research report is presented in three parts. In the first part, acoustical analyses were performed on modes of vibration of the housing of a transmission of a gear test rig developed by NASA. The modes of vibration of the transmission housing were measured using experimental modal analysis. The boundary element method (BEM) was used to calculate the sound pressure and sound intensity on the surface of the housing and the radiation efficiency of each mode. The radiation efficiency of each of the transmission housing modes was then compared to theoretical results for a finite baffled plate. In the second part, analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise level radiated from the box. The FEM was used to predict the vibration, while the BEM was used to predict the sound intensity and total radiated sound power using surface vibration as the input data. Vibration predicted by the FEM model was validated by experimental modal analysis; noise predicted by the BEM was validated by measurements of sound intensity. Three types of results are presented for the total radiated sound power: sound power predicted by the BEM model using vibration data measured on the surface of the box; sound power predicted by the FEM/BEM model; and sound power measured by an acoustic intensity scan. In the third part, the structure used in part two was modified. A rib was attached to the top plate of the structure. The FEM and BEM were then used to predict structural vibration and radiated noise respectively. The predicted vibration and radiated noise were then validated through experimentation.

  19. Metagenomic Profiling of Microbial Composition and Antibiotic Resistance Determinants in Puget Sound

    PubMed Central

    Port, Jesse A.; Wallace, James C.; Griffith, William C.; Faustman, Elaine M.

    2012-01-01

    Human-health relevant impacts on marine ecosystems are increasing on both spatial and temporal scales. Traditional indicators for environmental health monitoring and microbial risk assessment have relied primarily on single-species analyses and have provided only limited spatial and temporal information. More high-throughput, broad-scale approaches to evaluate these impacts are therefore needed to provide a platform for informing public health. This study uses shotgun metagenomics to survey the taxonomic composition and antibiotic resistance determinant content of surface water bacterial communities in the Puget Sound estuary. Metagenomic DNA was collected and pyrosequenced from six sites in Puget Sound and from one wastewater treatment plant (WWTP) that discharges into the Sound. A total of ∼550 Mbp (1.4 million reads) were obtained, 22 Mbp of which could be assembled into contigs. While the taxonomic and resistance determinant profiles across the open Sound samples were similar, unique signatures were identified when comparing these profiles across the open Sound, a nearshore marina and WWTP effluent. The open Sound was dominated by α-Proteobacteria (in particular Rhodobacterales sp.), γ-Proteobacteria and Bacteroidetes while the marina and effluent had increased abundances of Actinobacteria, β-Proteobacteria and Firmicutes. There was a significant increase in the antibiotic resistance gene signal from the open Sound to marina to WWTP effluent, suggestive of a potential link to human impacts. Mobile genetic elements associated with environmental and pathogenic bacteria were also differentially abundant across the samples. This study is the first comparative metagenomic survey of Puget Sound and provides baseline data for further assessments of community composition and antibiotic resistance determinants in the environment using next generation sequencing technologies. 
In addition, these genomic signals of potential human impact can be used to guide initial public health monitoring as well as more targeted and functionally-based investigations. PMID:23144718

  20. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    PubMed

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the [Formula: see text] measure were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. Average [Formula: see text] scores of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The [Formula: see text] score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. 
The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
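
    The tolerance-window evaluation described above can be sketched as follows. This is an illustrative scorer with made-up onset times, not the Challenge's exact evaluation code: detections are matched one-to-one to reference onsets within a ± tolerance, and sensitivity, positive predictivity and their harmonic mean (an F1-style score) are reported.

```python
def score_segmentation(reference, detected, tol=0.1):
    """Greedily match detected onsets (seconds) to reference onsets
    within +/- tol; return (sensitivity, positive predictivity, F1)."""
    matched = set()
    tp = 0
    for d in detected:
        for j, r in enumerate(reference):
            if j not in matched and abs(d - r) <= tol:
                matched.add(j)  # each reference onset matches at most once
                tp += 1
                break
    sens = tp / len(reference) if reference else 0.0
    ppv = tp / len(detected) if detected else 0.0
    f1 = 2 * sens * ppv / (sens + ppv) if sens + ppv else 0.0
    return sens, ppv, f1

# Hypothetical S1 onset times (s): the last detection falls outside
# the 100 ms tolerance window and counts as a false positive.
ref = [0.10, 0.90, 1.70, 2.50]
det = [0.12, 0.95, 1.72, 2.95]
sens, ppv, f1 = score_segmentation(ref, det)
```

    Widening `tol` can only convert misses into matches, which is why the reported score increases with the tolerance window size.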

  1. Diversity of fish sound types in the Pearl River Estuary, China

    PubMed Central

    Wang, Zhi-Tao; Nowacek, Douglas P.; Akamatsu, Tomonari; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang

    2017-01-01

    Background Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types of 1 + 1 and 1 + N10 might belong to big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Discussion Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. 
Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed. PMID:29085746
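
    The median inter-pulse peak interval (IPPI) statistic used to separate call types is straightforward to compute from pulse-peak times. A minimal sketch with hypothetical timings, not data from the study:

```python
def median_ippi_ms(pulse_times_ms):
    """Median inter-pulse peak interval (IPPI) of one call, in ms.

    `pulse_times_ms` holds the pulse-peak times in ascending order;
    intervals are the successive differences.
    """
    intervals = sorted(b - a for a, b in zip(pulse_times_ms, pulse_times_ms[1:]))
    n = len(intervals)
    mid = n // 2
    if n % 2:
        return intervals[mid]
    return (intervals[mid - 1] + intervals[mid]) / 2

# Hypothetical pulse-peak times (ms) for a short pulse train whose
# median IPPI is 9 ms, like the most common call types reported.
ippi = median_ippi_ms([0.0, 9.0, 18.5, 27.5, 36.5])
```

    Binning calls by this statistic (here, 9 ms vs. 10 ms medians) is what allowed the study to argue that temporally exclusive call types likely come from different species.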

  2. Diversity of fish sound types in the Pearl River Estuary, China.

    PubMed

    Wang, Zhi-Tao; Nowacek, Douglas P; Akamatsu, Tomonari; Wang, Ke-Xiong; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang; Wang, Ding

    2017-01-01

    Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximate 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types of 1 + 1 and 1 + N10 might belong to big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. 
Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed.

  3. 11. Interior view of first floor of 1922 north section, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Interior view of first floor of 1922 north section, showing east wall and windows at far north end of building. Camera pointed E. Rear of building is partially visible on far left. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA

  4. Pushing the Envelope

    NASA Image and Video Library

    2017-10-12

    The first generation X-1 aircraft changed aviation history in numerous ways, and not simply because they were the first aircraft to fly faster than the speed of sound. Rather, they established the concept of the research aircraft, built solely for experimental purposes. NASA continues this legacy of experimental aircraft today.

  5. Musical Sound Quality in Cochlear Implant Users: A Comparison in Bass Frequency Perception Between Fine Structure Processing and High-Definition Continuous Interleaved Sampling Strategies.

    PubMed

    Roy, Alexis T; Carver, Courtney; Jiradejvong, Patpong; Limb, Charles J

    2015-01-01

    Med-El cochlear implant (CI) patients are typically programmed with either the fine structure processing (FSP) or high-definition continuous interleaved sampling (HDCIS) strategy. FSP is the newer-generation strategy and aims to provide more direct encoding of fine structure information compared with HDCIS. Since fine structure information is extremely important in music listening, FSP may offer improvements in musical sound quality for CI users. Despite widespread clinical use of both strategies, few studies have assessed the possible benefits in music perception for the FSP strategy. The objective of this study is to measure the differences in musical sound quality discrimination between the FSP and HDCIS strategies. Musical sound quality discrimination was measured using a previously designed evaluation, called Cochlear Implant-MUltiple Stimulus with Hidden Reference and Anchor (CI-MUSHRA). In this evaluation, participants were required to detect sound quality differences between an unaltered real-world musical stimulus and versions of the stimulus in which varying amounts of bass (low-frequency) information were removed via a high-pass filter. Eight CI users, currently using the FSP strategy, were enrolled in this study. In the first session, participants completed the CI-MUSHRA evaluation with their FSP strategy. Patients were then programmed with the clinical-default HDCIS strategy, which they used for 2 months to allow for acclimatization. After acclimatization, each participant returned for the second session, during which they were retested with HDCIS, and then switched back to their original FSP strategy and tested acutely. Sixteen normal-hearing (NH) controls completed a CI-MUSHRA evaluation for comparison, in which NH controls listened to music samples under normal acoustic conditions, without CI stimulation. 
Sensitivity to high-pass filtering more closely resembled that of NH controls when CI users were programmed with the clinical-default FSP strategy compared with performance when programmed with HDCIS (mixed-design analysis of variance, p < 0.05). The clinical-default FSP strategy offers improvements in musical sound quality discrimination for CI users with respect to bass frequency perception. This improved bass frequency discrimination may in turn support enhanced musical sound quality. This is the first study that has demonstrated objective improvements in musical sound quality discrimination with the newer-generation FSP strategy. These positive results may help guide the selection of processing strategies for Med-El CI patients. In addition, CI-MUSHRA may also provide a novel method for assessing the benefits of newer processing strategies in the future.

  6. Vocalisation sound pattern identification in young broiler chickens.

    PubMed

    Fontana, I; Tullo, E; Scrase, A; Butterworth, A

    2016-09-01

    In this study, we describe the monitoring of young broiler chicken vocalisation, with sound recorded and assessed at regular intervals throughout the life of the birds from day 1 to day 38, with a focus on the first week of life. We assess whether there are recognisable, and even predictable, vocalisation patterns based on frequency and sound spectrum analysis, which can be observed in birds at different ages and stages of growth within the relatively short life of the birds in commercial broiler production cycles. The experimental trials were carried out on a farm where the broilers were reared indoors, with audio recording procedures carried out over 38 days. The recordings were made using two microphones connected to a digital recorder, and the sonic data were collected without disturbance of the animals beyond that created by the routine activities of the farmer. Digital files of 1 h duration were cut into short files of 10 min duration, and these sound recordings were analysed and labelled using audio analysis software. Analysis of these short sound files showed that the key vocalisation frequencies and patterns changed in relation to the increasing age and weight of the broilers. Statistical analysis showed a significant correlation (P<0.001) between the frequency of vocalisation and the age of the birds. Based on the identification of specific frequencies of the sounds emitted, in relation to age and weight, it is proposed that there is potential for audio monitoring and comparison with 'anticipated' sound patterns to be used to evaluate the status of farmed broiler chickens.
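
    The reported frequency-age relationship is a standard correlation analysis. A minimal sketch of the Pearson coefficient with entirely hypothetical ages and peak frequencies (the abstract reports the significance level, not the underlying values; vocalisation frequency is assumed here to fall as the birds grow):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ages (days) vs. peak vocalisation frequency (Hz):
# a monotone decrease yields a strong negative correlation.
ages = [1, 7, 14, 21, 28, 38]
peak_hz = [4100, 3800, 3300, 2900, 2500, 2200]
r = pearson_r(ages, peak_hz)
```

    With a coefficient this strong, a recording whose frequency profile departs from the 'anticipated' pattern for the birds' age could be flagged for welfare follow-up, which is the monitoring application the study proposes.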

  7. How do auditory cortex neurons represent communication sounds?

    PubMed

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalization envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristics of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Measurement of the speed of sound by observation of the Mach cones in a complex plasma under microgravity conditions

    NASA Astrophysics Data System (ADS)

    Zhukhovitskii, D. I.; Fortov, V. E.; Molotkov, V. I.; Lipaev, A. M.; Naumkin, V. N.; Thomas, H. M.; Ivlev, A. V.; Schwabe, M.; Morfill, G. E.

    2015-02-01

    We report the first observation of the Mach cones excited by a larger microparticle (projectile) moving through a cloud of smaller microparticles (dust) in a complex plasma with neon as a buffer gas under microgravity conditions. A collective motion of the dust particles occurs as propagation of the contact discontinuity. The corresponding speed of sound was measured by a special method of the Mach cone visualization. The measurement results are incompatible with the theory of ion acoustic waves. The estimate for the pressure in a strongly coupled Coulomb system and a scaling law for the complex plasma make it possible to derive an estimate of the speed of sound that is in reasonable agreement with the experiments in complex plasmas.
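Extracting a sound speed from a visualized Mach cone reduces to the Mach relation sin(θ) = c/v between the cone half-angle θ and the projectile speed v. A minimal sketch, with numeric values that are purely hypothetical rather than taken from the experiment:

```python
import math

def speed_of_sound_from_mach_cone(projectile_speed, cone_half_angle_deg):
    """Recover the medium's sound speed c from the Mach relation
    sin(theta) = c / v, given the projectile speed v (any unit)
    and the measured cone half-angle theta in degrees."""
    return projectile_speed * math.sin(math.radians(cone_half_angle_deg))

# Hypothetical example: projectile at 40 mm/s, cone half-angle 30 degrees
c = speed_of_sound_from_mach_cone(40.0, 30.0)
print(round(c, 1))  # 20.0 (same unit as the projectile speed)
```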

  9. An application of boundary element method calculations to hearing aid systems: The influence of the human head

    NASA Astrophysics Data System (ADS)

    Rasmussen, Karsten B.; Juhl, Peter

    2004-05-01

    Boundary element method (BEM) calculations are used for the purpose of predicting the acoustic influence of the human head in two cases. In the first case the sound source is the mouth and in the second case the sound is plane waves arriving from different directions in the horizontal plane. In both cases the sound field is studied in relation to two positions above the right ear, representative of hearing aid microphone positions. Both cases are relevant for hearing aid development. The calculations are based upon a direct BEM implementation in Matlab. The meshing is based on the original geometrical data files describing the B&K Head and Torso Simulator 4128 combined with a 3D scan of the pinna.

  10. Speed of sound and photoacoustic imaging with an optical camera based ultrasound detection system

    NASA Astrophysics Data System (ADS)

    Nuster, Robert; Paltauf, Guenther

    2017-07-01

    CCD camera based optical ultrasound detection is a promising alternative approach for high resolution 3D photoacoustic imaging (PAI). To fully exploit its potential and to achieve an image resolution <50 μm, it is necessary to incorporate variations of the speed of sound (SOS) in the image reconstruction algorithm. Hence, this work presents the idea and a first implementation showing how speed of sound imaging can be added to a previously developed camera based PAI setup. The current setup provides SOS-maps with a spatial resolution of 2 mm and an accuracy of the obtained absolute SOS values of about 1%. The proposed dual-modality setup has the potential to provide highly resolved and perfectly co-registered 3D photoacoustic and SOS images.

  11. Learning words and learning sounds: Advances in language development.

    PubMed

    Vihman, Marilyn M

    2017-02-01

    Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating 'perceptual narrowing' in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches. © 2016 The British Psychological Society.

  12. Satellite observed thermodynamics during FGGE

    NASA Technical Reports Server (NTRS)

    Smith, W. L.

    1985-01-01

    During the First Global Atmospheric Research Program (GARP) Global Experiment (FGGE), determinations of temperature and moisture were made from TIROS-N and NOAA-6 satellite infrared and microwave sounding radiance measurements. The data were processed by two methods differing principally in their horizontal resolution. At the National Earth Satellite Service (NESS) in Washington, D.C., the data were produced operationally with a horizontal resolution of 250 km for inclusion in the FGGE Level IIb data sets for application to large-scale numerical analysis and prediction models. High horizontal resolution (75 km) sounding data sets were produced using man-machine interactive methods for the special observing periods of FGGE at the NASA/Goddard Space Flight Center and archived as supplementary Level IIb. The procedures used for sounding retrieval and the characteristics and quality of these thermodynamic observations are given.

  13. Music Perception with Cochlear Implants: A Review

    PubMed Central

    McDermott, Hugh J.

    2004-01-01

    The acceptance of cochlear implantation as an effective and safe treatment for deafness has increased steadily over the past quarter century. The earliest devices were the first implanted prostheses found to be successful in compensating partially for lost sensory function by direct electrical stimulation of nerves. Initially, the main intention was to provide limited auditory sensations to people with profound or total sensorineural hearing impairment in both ears. Although the first cochlear implants aimed to provide patients with little more than awareness of environmental sounds and some cues to assist visual speech-reading, the technology has advanced rapidly. Currently, most people with modern cochlear implant systems can understand speech using the device alone, at least in favorable listening conditions. In recent years, an increasing research effort has been directed towards implant users’ perception of nonspeech sounds, especially music. This paper reviews that research, discusses the published experimental results in terms of both psychophysical observations and device function, and concludes with some practical suggestions about how perception of music might be enhanced for implant recipients in the future. 
The most significant findings of past research are: (1) On average, implant users perceive rhythm about as well as listeners with normal hearing; (2) Even with technically sophisticated multiple-channel sound processors, recognition of melodies, especially without rhythmic or verbal cues, is poor, with performance at little better than chance levels for many implant users; (3) Perception of timbre, which is usually evaluated by experimental procedures that require subjects to identify musical instrument sounds, is generally unsatisfactory; (4) Implant users tend to rate the quality of musical sounds as less pleasant than listeners with normal hearing; (5) Auditory training programs that have been devised specifically to provide implant users with structured musical listening experience may improve the subjective acceptability of music that is heard through a prosthesis; (6) Pitch perception might be improved by designing innovative sound processors that use both temporal and spatial patterns of electric stimulation more effectively and precisely to overcome the inherent limitations of signal coding in existing implant systems; (7) For the growing population of implant recipients who have usable acoustic hearing, at least for low-frequency sounds, perception of music is likely to be much better with combined acoustic and electric stimulation than is typical for deaf people who rely solely on the hearing provided by their prostheses. PMID:15497033

  15. Topography of sound level representation in the FM sweep selective region of the pallid bat auditory cortex.

    PubMed

    Measor, Kevin; Yarrow, Stuart; Razak, Khaleel A

    2018-05-26

    Sound level processing is a fundamental function of the auditory system. To determine how the cortex represents sound level, it is important to quantify how changes in level alter the spatiotemporal structure of cortical ensemble activity. This is particularly true for echolocating bats that have control over, and often rapidly adjust, call level to actively change echo level. To understand how cortical activity may change with sound level, here we mapped response rate and latency changes with sound level in the auditory cortex of the pallid bat. The pallid bat uses a 60-30 kHz downward frequency modulated (FM) sweep for echolocation. Neurons tuned to frequencies between 30 and 70 kHz in the auditory cortex are selective for the properties of FM sweeps used in echolocation, forming the FM sweep selective region (FMSR). The FMSR is strongly selective for sound level between 30 and 50 dB SPL. Here we mapped the topography of level selectivity in the FMSR using downward FM sweeps and show that neurons with more monotonic rate level functions are located in caudomedial regions of the FMSR overlapping with high frequency (50-60 kHz) neurons. Non-monotonic neurons dominate the FMSR, and are distributed across the entire region, but there is no evidence for amplitopy. We also examined how the first spike latency of FMSR neurons changes with sound level. The majority of FMSR neurons exhibit paradoxical latency shift wherein the latency increases with sound level. Moreover, neurons with paradoxical latency shifts are more strongly level selective and are tuned to lower sound levels than neurons in which latencies decrease with level. These data indicate a clustered arrangement of neurons according to monotonicity, with no strong evidence for finer scale topography, in the FMSR. The latency analysis suggests mechanisms for strong level selectivity that is based on relative timing of excitatory and inhibitory inputs. 
Taken together, these data suggest how the spatiotemporal spread of cortical activity may represent sound level. Copyright © 2018. Published by Elsevier B.V.
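A common way to quantify how monotonic a rate-level function is takes the ratio of the response at the highest tested level to the peak response across levels. The sketch below uses that generic index with invented rate-level data; it is not necessarily the authors' exact metric:

```python
def monotonicity_index(rates):
    """Ratio of the firing rate at the highest tested sound level to the
    peak rate across all levels: 1.0 for a monotonic rate-level function,
    values near 0 for strongly non-monotonic (level-selective) neurons."""
    return rates[-1] / max(rates)

# Hypothetical rate-level functions (spikes/s at 10 dB steps, 20-70 dB SPL)
monotonic = [2, 5, 12, 20, 26, 30]
nonmonotonic = [2, 10, 25, 18, 8, 3]
print(monotonicity_index(monotonic))               # 1.0
print(round(monotonicity_index(nonmonotonic), 2))  # 0.12
```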

  16. First Nations Development Institute Biennial Report, 1994/95.

    ERIC Educational Resources Information Center

    First Nations Development Inst., Fredericksburg, VA.

    This report describes economic development projects that were funded during 1994-95 by the First Nations Development Institute. The Institute was established in 1980 to help tribes build sound, sustainable reservation economies. Through the Eagle Staff Fund, the Institute regrants funds for culturally viable economic development projects from a…

  17. 12. Interior view of first floor aisle in 1922 north ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Interior view of first floor aisle in 1922 north section. Camera is pointed south from rear of building. Post at far left is also seen on right in photo WA-116-A-11. - Puget Sound Naval Shipyard, Pattern Shop, Farragut Avenue, Bremerton, Kitsap County, WA

  18. Exploring the Science of Sound

    ERIC Educational Resources Information Center

    Manser, Michael James; Kilgo, John Wesley

    2015-01-01

    This investigation-based lesson is geared toward third through fifth grade students. The lesson presented in this article was the first of three lessons designed and taught by us for our first preservice teaching course. UTeach is a teacher preparation program for undergraduate STEM majors, which originated at the University of Texas at Austin. The…

  19. Condition of stream ecosystem in the US: An overview of the first national assessment

    EPA Science Inventory

    The Wadeable Streams Assessment (WSA) provided the first statistically sound summary of the ecological condition of streams and small rivers in the US. Information provided in the assessment filled an important gap in meeting the requirements of the US Clean Water Act. The purpos...

  20. Bird sound spectrogram decomposition through Non-Negative Matrix Factorization for the acoustic classification of bird species.

    PubMed

    Ludeña-Choez, Jimmy; Quispe-Soncco, Raisa; Gallardo-Antolín, Ascensión

    2017-01-01

    Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC.
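The core of the NMF_CC/H_CC front-end is the non-negative factorization V ≈ WH of a magnitude spectrogram, where the columns of W act as a learned filter bank and the rows of H as activation coefficients. Below is a minimal sketch using Lee-Seung multiplicative updates on a toy "spectrogram"; the data and parameter choices are illustrative, not the paper's configuration:

```python
import numpy as np

def nmf(V, k, n_iter=200, eps=1e-9, seed=0):
    """Factor a non-negative matrix V (freq x time) as V ~ W @ H using
    Lee-Seung multiplicative updates for the Euclidean loss."""
    rng = np.random.default_rng(seed)
    n_freq, n_time = V.shape
    W = rng.random((n_freq, k)) + eps
    H = rng.random((k, n_time)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update spectral bases
    return W, H

# Toy spectrogram: two spectral shapes mixed with random activations
rng = np.random.default_rng(1)
shapes = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
activations = rng.random((2, 50))
V = shapes @ activations
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # relative reconstruction error, small for this easy case
```

In an H_CC-style pipeline, the columns of H (one activation vector per frame) would then be the short-time features passed on to temporal integration.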

  1. Neural Correlates of Central Inhibition during Physical Fatigue

    PubMed Central

    Tanaka, Masaaki; Ishii, Akira; Watanabe, Yasuyoshi

    2013-01-01

    Central inhibition plays a pivotal role in determining physical performance during physical fatigue. Classical conditioning of central inhibition is believed to be associated with the pathophysiology of chronic fatigue. We tried to determine whether classical conditioning of central inhibition can really occur and to clarify the neural mechanisms of central inhibition related to classical conditioning during physical fatigue using magnetoencephalography (MEG). Eight right-handed volunteers participated in this study. We used metronome sounds as conditioned stimuli and maximum handgrip trials as unconditioned stimuli to cause central inhibition. Participants underwent MEG recording during imagery of maximum grips of the right hand guided by metronome sounds for 10 min. Thereafter, fatigue-inducing maximum handgrip trials were performed for 10 min; the metronome sounds were started 5 min after the beginning of the handgrip trials. The next day, neural activities during imagery of maximum grips of the right hand guided by metronome sounds were measured for 10 min. Levels of fatigue sensation and sympathetic nerve activity on the second day were significantly higher relative to those of the first day. Equivalent current dipoles (ECDs) in the posterior cingulate cortex (PCC), with latencies of approximately 460 ms, were observed in all the participants on the second day, although ECDs were not identified in any of the participants on the first day. We demonstrated that classical conditioning of central inhibition can occur and that the PCC is involved in the neural substrates of central inhibition related to classical conditioning during physical fatigue. PMID:23923034

  3. The effect of maternal presence on premature infant response to recorded music.

    PubMed

    Dearn, Trish; Shoemark, Helen

    2014-01-01

    To determine the effect of maternal presence on the physiological and behavioral status of the preterm infant when exposed to recorded music versus ambient sound. Repeated-measures randomized controlled trial. Special care nursery (SCN) in a tertiary perinatal center. Clinically stable preterm infants (n = 22) born at > 28 weeks gestation and enrolled at > 32 weeks gestation, and their mothers. Infants were exposed to lullaby music (6 minutes of ambient sound alternating with two 6-minute periods of recorded lullaby music) at a volume within the recommended sound level for the SCN. The mothers in the experimental group were present for the first 12 minutes (baseline and first music period), whereas the mothers in the control group were absent overall. There was no discernible infant response to music and therefore no significant impact of maternal presence on the infants' response to music over time. However, during the mothers' presence (first 12 minutes), the infants exhibited significantly higher oxygen saturation than during their absence (p = .024), and less time spent in quiet sleep after their departure, though this was not significant. Infants may have been unable to detect the music against the ambient soundscape. Regardless of exposure to music, the infants' physiological and behavioral regulation were affected by the presence and departure of the mothers. © 2014 AWHONN, the Association of Women's Health, Obstetric and Neonatal Nurses.

  4. The inferior colliculus encodes the Franssen auditory spatial illusion

    PubMed Central

    Rajala, Abigail Z.; Yan, Yonghe; Dent, Micheal L.; Populin, Luis C.

    2014-01-01

    Illusions are effective tools for the study of the neural mechanisms underlying perception because neural responses can be correlated to the physical properties of stimuli and the subject’s perceptions. The Franssen illusion (FI) is an auditory spatial illusion evoked by presenting a transient, abrupt tone and a slowly rising, sustained tone of the same frequency simultaneously on opposite sides of the subject. Perception of the FI consists of hearing a single sound, the sustained tone, on the side that the transient was presented. Both subcortical and cortical mechanisms for the FI have been proposed, but, to date, there is no direct evidence for either. The data show that humans and rhesus monkeys perceive the FI similarly. Recordings were taken from single units of the inferior colliculus in monkeys while they indicated the perceived location of sound sources with their gaze. The results show that the transient component of the Franssen stimulus, with a shorter first spike latency and higher discharge rate than the sustained tone, encodes the perception of sound location. Furthermore, the persistent erroneous perception of the sustained stimulus location is due to continued excitation of the same neurons, first activated by the transient, by the sustained stimulus without location information. These results demonstrate for the first time, on a trial-by-trial basis, a correlation between perception of an auditory spatial illusion and a subcortical physiological substrate. PMID:23899307

  5. Sound suppression mixer

    NASA Technical Reports Server (NTRS)

    Brown, William H. (Inventor)

    1994-01-01

    A gas turbine engine flow mixer includes at least one chute having first and second spaced apart sidewalls joined together at a leading edge, with the sidewalls having first and second trailing edges defining therebetween a chute outlet. The first trailing edge is spaced longitudinally downstream from the second trailing edge for defining a septum in the first sidewall extending downstream from the second trailing edge. The septum includes a plurality of noise attenuating apertures.

  6. Spacecraft Internal Acoustic Environment Modeling

    NASA Technical Reports Server (NTRS)

    Chu, Shao-sheng R.; Allen, Christopher S.

    2009-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with interior geometric shape similar to Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating measured sound intensities over the measured surfaces renders the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match reverberation levels of the ISS US Lab interior for speech frequency bands, i.e., 0.5k, 1k, 2k, 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of Thinsulate was modeled by three methods: Sabine equation with measured mockup interior reverberation time T60, layup model based on past impedance tube testing, and layup model plus air absorption correction. 
The evaluation/validation was carried out by acquiring octave band microphone data simultaneously at ten fixed locations throughout the mockup. SPLs (Sound Pressure Levels) predicted by the SEA model match measurements well for the CM mockup, despite its more complicated shape. Additionally in FY09, background NC (Noise Criterion) noise simulation and MRT (Modified Rhyme Test) were developed and performed in the mockup to determine the maximum noise level in the CM habitable volume for fair crew voice communications. Numerous demonstrations of the simulated noise environment in the mockup and associated SIL (Speech Interference Level) via MRT were performed for various communities, including members from NASA and Orion prime-/sub-contractors. Also, a new HSIR (Human-Systems Integration Requirement) for limiting pre- and post-landing SIL was proposed.
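The intensity-to-power step described above amounts to integrating the measured normal sound intensity over the enclosing grid surfaces and converting the result to a level. A minimal sketch with hypothetical intensities and areas, not the mockup's actual values:

```python
import math

REF_POWER = 1e-12  # W, reference power for sound power level (dB re 1 pW)

def sound_power_level(surfaces):
    """Estimate source sound power by summing (normal intensity in W/m^2)
    x (surface area in m^2) over the enclosing measurement surfaces,
    then convert to a sound power level in dB re 1 pW."""
    power = sum(intensity * area for intensity, area in surfaces)
    return 10.0 * math.log10(power / REF_POWER)

# Hypothetical grid box around the fan: (mean intensity, area) pairs for
# the top, front, back, left and right faces; the bottom face is blocked.
surfaces = [(2e-6, 0.25), (1e-6, 0.20), (1e-6, 0.20),
            (0.5e-6, 0.20), (0.5e-6, 0.20)]
print(round(sound_power_level(surfaces), 1))  # 60.4 dB re 1 pW
```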

  7. Earth Observing System (EOS)/Advanced Microwave Sounding Unit-A (AMSU-A)

    NASA Technical Reports Server (NTRS)

    Mullooly, William

    1995-01-01

    This is the thirty-first monthly report for the Earth Observing System (EOS)/Advanced Microwave Sounding Unit- A (AMSU-A), Contract NAS5-32314, and covers the period from 1 July 1995 through 31 July 1995. This period is the nineteenth month of the Implementation Phase which provides for the design, fabrication, assembly, and test of the first EOS/AMSU-A, the Protoflight Model. Included in this report is the Master Program Schedule (Section 2), a report from the Product Team Leaders on the status of all major program elements (Section 3), Drawing status (Section 4), Weight and Power Budgets (CDRL) 503 (Section 5), Performance Assurance (CDRL 204) (Section 6), Configuration Management Status Report (CDRL 203) (Section 7), Documentation/Data Management Status Report (Section 8), and Contract Status (Section 9).

  8. Two-dimensional model of vocal fold vibration for sound synthesis of voice and soprano singing

    NASA Astrophysics Data System (ADS)

    Adachi, Seiji; Yu, Jason

    2005-05-01

    Voiced sounds were simulated with a computer model of the vocal fold composed of a single mass vibrating both parallel and perpendicular to the airflow. Similarities with the two-mass model are found in the amplitudes of the glottal area and the glottal volume flow velocity, the variation in the volume flow waveform with the vocal tract shape, and the dependence of the oscillation amplitude upon the average opening area of the glottis, among other similar features. A few dissimilarities are also found in the more symmetric glottal and volume flow waveforms in the rising and falling phases. The major improvement of the present model over the two-mass model is that it yields a smooth transition between oscillations with an inductive load and a capacitive load of the vocal tract with no sudden jumps in the vibration frequency. Self-excitation is possible both below and above the first formant frequency of the vocal tract. By taking advantage of the wider continuous frequency range, the two-dimensional model can successfully be applied to the sound synthesis of high-pitched soprano singing, where the fundamental frequency sometimes exceeds the first formant frequency.

  9. Classification of communication signals of the little brown bat

    NASA Astrophysics Data System (ADS)

    Melendez, Karla V.; Jones, Douglas L.; Feng, Albert S.

    2005-09-01

    Little brown bats, Myotis lucifugus, are known for their ability to echolocate and utilize their echolocation system to navigate, locate, and identify prey. Their echolocation signals have been characterized in detail, but their communication signals are poorly understood despite their widespread use during social interactions. The goal of this study was to characterize the communication signals of little brown bats. Sound recordings were made overnight on five individual bats (housed separately from a large group of captive bats) for 7 nights, using a Pettersson D240x ultrasound bat detector and a Nagra ARES-BB digital recorder. The spectral and temporal characteristics of recorded sounds were first analyzed using BATSOUND software from Pettersson. Sounds were first classified by visual observation of calls' temporal pattern and spectral composition, and later using an automatic classification scheme based on multivariate statistical parameters in MATLAB. Human- and machine-based analysis revealed five discrete classes of bat communication signals: downward frequency-modulated calls, constant frequency calls, broadband noise bursts, broadband chirps, and broadband click trains. Future studies will focus on analysis of calls' spectrotemporal modulations to discriminate any subclasses that may exist. [Research supported by Grant R01-DC-04998 from the National Institute for Deafness and Communication Disorders.]

  10. Aircraft panel with sensorless active sound power reduction capabilities through virtual mechanical impedances

    NASA Astrophysics Data System (ADS)

    Boulandet, R.; Michau, M.; Micheau, P.; Berry, A.

    2016-01-01

    This paper deals with an active structural acoustic control approach to reduce the transmission of tonal noise in aircraft cabins. The focus is on the practical implementation of the virtual mechanical impedances method by using sensoriactuators instead of conventional control units composed of separate sensors and actuators. The experimental setup includes two sensoriactuators developed from the electrodynamic inertial exciter and distributed over an aircraft trim panel which is subject to a time-harmonic diffuse sound field. The target mechanical impedances are first defined by solving a linear optimization problem from sound power measurements before being applied to the test panel using a complex envelope controller. Measured data are compared to results obtained with sensor-actuator pairs consisting of an accelerometer and an inertial exciter, particularly as regards sound power reduction. It is shown that the two types of control unit provide similar performance, and that here virtual impedance control stands apart from conventional active damping. In particular, it is clear from this study that extra vibrational energy must be provided by the actuators for optimal sound power reduction, mainly due to the high structural damping in the aircraft trim panel. Concluding remarks on the benefits of using these electrodynamic sensoriactuators to control tonal disturbances are also provided.

  11. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, ``virtual reality'' approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
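One of the direction cues the tutorial covers can be made concrete: Woodworth's classic spherical-head approximation for the interaural time difference (ITD) as a function of source azimuth. The head radius and sound speed below are typical textbook values, not figures from the tutorial:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air near 20 °C
HEAD_RADIUS = 0.0875    # m, a commonly assumed adult head radius

def itd_woodworth(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural
    time difference: ITD = (r / c) * (theta + sin(theta)),
    for azimuth theta in [0, 90] degrees off the median plane."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# a source straight ahead gives 0 s; a source at 90 degrees gives
# roughly 0.65 ms, on the order of the largest ITDs humans experience
```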

  12. Factors Regulating Early Life History Dispersal of Atlantic Cod (Gadus morhua) from Coastal Newfoundland

    PubMed Central

    Stanley, Ryan R. E.; deYoung, Brad; Snelgrove, Paul V. R.; Gregory, Robert S.

    2013-01-01

    To understand coastal dispersal dynamics of Atlantic cod (Gadus morhua), we examined spatiotemporal egg and larval abundance patterns in coastal Newfoundland. In recent decades, Smith Sound, Trinity Bay has supported the largest known overwintering spawning aggregation of Atlantic cod in the region. We estimated spawning and dispersal characteristics for the Smith Sound-Trinity Bay system by fitting ichthyoplankton abundance data to environmentally-driven, simplified box models. Results show protracted spawning, with sharply increased egg production in early July, and limited dispersal from the Sound. The model for the entire spawning season indicates egg export from Smith Sound is 13%·day⁻¹ with a net mortality of 27%·day⁻¹. Eggs and larvae are consistently found in western Trinity Bay with little advection from the system. These patterns mirror particle tracking models that suggest residence times of 10–20 days, and circulation models indicating local gyres in Trinity Bay that act in concert with upwelling dynamics to retain eggs and larvae. Our results are among the first quantitative dispersal estimates from Smith Sound, linking this spawning stock to the adjacent coastal waters. These results illustrate the biophysical interplay regulating dispersal and connectivity originating from inshore spawning in the coastal northwest Atlantic. PMID:24058707
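The quoted export and mortality rates imply a simple daily bookkeeping for the eggs remaining in the Sound. The discrete-time form below is an illustrative simplification of the environmentally-driven box models the authors actually fit:

```python
EXPORT = 0.13     # fraction of eggs advected out of Smith Sound per day
MORTALITY = 0.27  # net mortality per day

def eggs_remaining(n0, days, export=EXPORT, mortality=MORTALITY):
    """Daily-step box model: each day a fixed fraction of the eggs in
    the Sound is exported and another fixed fraction dies."""
    n = n0
    for _ in range(int(days)):
        n *= 1.0 - export - mortality
    return n

# with a combined daily loss of 40%, only about 8% of a cohort
# remains in the Sound after 5 days (0.6**5 ~ 0.078)
```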

  13. A model for the perception of environmental sound based on notice-events.

    PubMed

    De Coensel, Bert; Botteldooren, Dick; De Muer, Tom; Berglund, Birgitta; Nilsson, Mats E; Lercher, Peter

    2009-08-01

    An approach is proposed to shed light on the mechanisms underlying human perception of environmental sound that intrudes in everyday living. Most research on exposure-effect relationships aims at relating overall effects to overall exposure indicators in an epidemiological fashion, without including available knowledge on the possible underlying mechanisms. Here, it is proposed to start from available knowledge on audition and perception to construct a computational framework for the effect of environmental sound on individuals. Obviously, at the individual level additional mechanisms (inter-sensory, attentional, cognitive, emotional) play a role in the perception of environmental sound. As a first step, current knowledge is made explicit by building a model mimicking some aspects of human auditory perception. This model is grounded in the hypothesis that long-term perception of environmental sound is determined primarily by short notice-events. The applicability of the notice-event model is illustrated by simulating a synthetic population exposed to typical Flemish environmental noise. From these simulation results, it is demonstrated that the notice-event model is able to mimic the differences between the annoyance caused by road traffic noise exposure and railway traffic noise exposure that are also observed empirically in other studies and thus could provide an explanation for these differences.
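A deliberately minimal caricature of the notice-event hypothesis is that a listener registers an event whenever a sound emerges sufficiently from the background. The level-difference criterion and threshold below are illustrative assumptions, not the paper's auditory model:

```python
def notice_events(levels_db, background_db, threshold_db=10.0):
    """Toy notice-event detector: a time step counts as a notice-event
    when the instantaneous level emerges from the background by more
    than threshold_db. Returns the indices of noticed time steps."""
    return [i for i, level in enumerate(levels_db)
            if level - background_db > threshold_db]

# two loud pass-by events against a 50 dB background are noticed;
# small fluctuations are not
events = notice_events([50, 65, 52, 70], background_db=50)
```

In the actual model, long-term annoyance would then be accumulated from the noticed events rather than from the raw exposure time series.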

  14. Electroacoustic control of Rijke tube instability

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Huang, Lixi

    2017-11-01

    Unsteady heat release coupled with pressure fluctuation triggers the thermoacoustic instability, which may severely damage a combustion chamber. This study demonstrates an electroacoustic approach to suppressing the thermoacoustic instability in a Rijke tube by altering the wall boundary condition. An electrically shunted loudspeaker driver is connected as a side-branch to the main tube via a small aperture. Tests in an impedance tube show that this device has a sound absorption coefficient of up to 40% under normal incidence from 100 Hz to 400 Hz, i.e., two octaves. Experimental results demonstrate that such broadband acoustic performance can effectively eliminate the Rijke-tube instability from 94 Hz to 378 Hz (when the tube length varies from 1.8 m to 0.9 m, the first mode frequency for the former is 94 Hz and the second mode frequency for the latter is 378 Hz). Theoretical investigation reveals that the device acts as a damper, draining sound energy out through a tiny hole to eliminate the instability. Finally, it is also estimated from the experimental data that a small amount of sound energy is actually absorbed while the system undergoes the transition from the unstable to the stable state once the control is activated. When the system is actually stabilized, no sound is radiated, so no sound energy needs to be absorbed by the control device.
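The 40% absorption figure comes from a normal-incidence impedance-tube measurement. The standard relation between a measured surface impedance and the absorption coefficient (textbook acoustics, not the paper's shunted-loudspeaker design) can be sketched as:

```python
RHO_C = 415.0  # characteristic impedance of air, Pa*s/m, near 20 °C

def absorption_coefficient(z):
    """Normal-incidence absorption for a (possibly complex) surface
    impedance z: alpha = 1 - |R|**2, with the pressure reflection
    coefficient R = (z - rho*c) / (z + rho*c)."""
    r = (z - RHO_C) / (z + RHO_C)
    return 1.0 - abs(r) ** 2

# a purely resistive surface matched to rho*c absorbs completely,
# while a very stiff (near-rigid) surface absorbs almost nothing
```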

  15. Measurements and time-domain simulations of multiphonics in the trombone.

    PubMed

    Velut, Lionel; Vergez, Christophe; Gilbert, Joël

    2016-10-01

    Multiphonic sounds of brass instruments are studied in this article. They are produced by playing a note on a brass instrument while simultaneously singing another note into the mouthpiece. This results in a peculiar sound, heard as a chord or a cluster of more than two notes in most cases. This effect is used in different artistic contexts. Measurements of the mouth pressure, the pressure inside the mouthpiece, and the radiated sound are recorded while a trombone player performs a multiphonic, first by playing an F3 and singing a C4, then playing an F3 and singing a note with a decreasing pitch. Results highlight the quasi-periodic nature of the multiphonic sound and the appearance of combination tones due to intermodulation between the played and the sung sounds. To assess the ability of a given brass instrument physical model to reproduce the measured phenomenon, time-domain simulations of multiphonics are carried out. A trombone model consisting of an exciter and a resonator, nonlinearly coupled, is forced while self-oscillating to reproduce simultaneous singing and playing. Comparison between simulated and measured signals is discussed. The spectral content of the simulated pressure matches the measured one very well, at the cost of a high forcing pressure.
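The combination tones described arise at sums and differences of (multiples of) the played and sung frequencies. A small sketch enumerating intermodulation products up to a given order, with the two notes of the experiment mapped to approximate equal-tempered frequencies (~175 Hz for F3, ~262 Hz for C4):

```python
def combination_tones(f_played, f_sung, order=2):
    """Enumerate intermodulation products |m*f1 +/- n*f2| with
    m + n <= order: the frequencies expected when two tones drive
    a nonlinearity together."""
    tones = set()
    for m in range(order + 1):
        for n in range(order + 1 - m):
            for f in (m * f_played + n * f_sung,
                      abs(m * f_played - n * f_sung)):
                if f > 0:
                    tones.add(f)
    return sorted(tones)

# playing ~F3 while singing ~C4 yields, to second order, the played
# and sung tones plus their difference, sum, and octaves
products = combination_tones(175, 262, order=2)
```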

  16. Low-frequency sound speed and attenuation in sandy seabottom from long-range broadband acoustic measurements.

    PubMed

    Wan, Lin; Zhou, Ji-Xun; Rogers, Peter H

    2010-08-01

    A joint China-U.S. underwater acoustics experiment was conducted in the Yellow Sea with a very flat bottom and a strong and sharp thermocline. Broadband explosive sources were deployed both above and below the thermocline along two radial lines up to 57.2 km and a quarter circle with a radius of 34 km. Two inversion schemes are used to obtain the seabottom sound speed. One is based on extracting normal mode depth functions from the cross-spectral density matrix. The other is based on the best match between the calculated and measured modal arrival times for different frequencies. The inverted seabottom sound speed is used as a constraint condition to extract the seabottom sound attenuation by three methods. The first method involves measuring the attenuation coefficients of normal modes. In the second method, the seabottom sound attenuation is estimated by minimizing the difference between the theoretical and measured modal amplitude ratios. The third method is based on finding the best match between the measured and modeled transmission losses (TLs). The resultant seabottom attenuation, averaged over three independent methods, can be expressed as α = (0.33 ± 0.02)f^(1.86 ± 0.04) dB/m (f in kHz) over a frequency range of 80–1000 Hz.
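A small helper makes the frequency dependence of the reported attenuation law explicit; note the frequency must be expressed in kHz before applying the exponent (central values only, uncertainties dropped):

```python
def bottom_attenuation_db_per_m(f_hz):
    """Seabottom attenuation from the reported power law
    alpha = 0.33 * f**1.86 dB/m, with f in kHz."""
    return 0.33 * (f_hz / 1000.0) ** 1.86

# by construction the law gives 0.33 dB/m at 1 kHz, and attenuation
# falls steeply toward the low end of the 80-1000 Hz band studied
```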

  17. Active control of sound radiation from a vibrating rectangular panel by sound sources and vibration inputs - An experimental comparison

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.; Hansen, C. H.; Snyder, S. D.

    1991-01-01

    Active control of sound radiation from a rectangular panel by two different methods has been experimentally studied and compared. In the first method a single control force applied directly to the structure is used with a single error microphone located in the radiated acoustic field. Global attenuation of radiated sound was observed to occur by two main mechanisms. For 'on-resonance' excitation, the control force had the effect of increasing the total panel input impedance presented to the noise source, thus reducing all radiated sound. For 'off-resonance' excitation, the control force does not significantly modify the panel's total response amplitude but rather restructures the relative phases of the modes, leading to a more complex vibration pattern and a decrease in radiation efficiency. For acoustic control, the second method, the number of acoustic sources required for global reduction was seen to increase with panel modal order. The mechanism in this case was that the acoustic sources tended to create an inverse pressure distribution at the panel surface and thus 'unload' the panel by reducing the panel radiation impedance. In general, control by structural inputs appears more effective than control by acoustic sources for structurally radiated noise.

  18. The Auditory Anatomy of the Minke Whale (Balaenoptera acutorostrata): A Potential Fatty Sound Reception Pathway in a Baleen Whale

    PubMed Central

    Yamato, Maya; Ketten, Darlene R; Arruda, Julie; Cramer, Scott; Moore, Kathleen

    2012-01-01

    Cetaceans possess highly derived auditory systems adapted for underwater hearing. Odontoceti (toothed whales) are thought to receive sound through specialized fat bodies that contact the tympanoperiotic complex, the bones housing the middle and inner ears. However, sound reception pathways remain unknown in Mysticeti (baleen whales), which have very different cranial anatomies compared to odontocetes. Here, we report a potential fatty sound reception pathway in the minke whale (Balaenoptera acutorostrata), a mysticete of the balaenopterid family. The cephalic anatomy of seven minke whales was investigated using computerized tomography and magnetic resonance imaging, verified through dissections. Findings include a large, well-formed fat body lateral, dorsal, and posterior to the mandibular ramus and lateral to the tympanoperiotic complex. This fat body inserts into the tympanoperiotic complex at the lateral aperture between the tympanic and periotic bones and is in contact with the ossicles. There is also a second, smaller body of fat found within the tympanic bone, which contacts the ossicles as well. This is the first analysis of these fatty tissues' association with the auditory structures in a mysticete, providing anatomical evidence that fatty sound reception pathways may not be a unique feature of odontocete cetaceans. Anat Rec, 2012. © 2012 Wiley Periodicals, Inc. PMID:22488847

  19. Neurons in the inferior colliculus of the rat show stimulus-specific adaptation for frequency, but not for intensity

    PubMed Central

    Duque, Daniel; Wang, Xin; Nieto-Diego, Javier; Krumbholz, Katrin; Malmierca, Manuel S.

    2016-01-01

    Electrophysiological and psychophysical responses to a low-intensity probe sound tend to be suppressed by a preceding high-intensity adaptor sound. Nevertheless, rare low-intensity deviant sounds presented among frequent high-intensity standard sounds in an intensity oddball paradigm can elicit an electroencephalographic mismatch negativity (MMN) response. This has been taken to suggest that the MMN is a correlate of true change or “deviance” detection. A key question is where in the ascending auditory pathway true deviance sensitivity first emerges. Here, we addressed this question by measuring low-intensity deviant responses from single units in the inferior colliculus (IC) of anesthetized rats. If the IC exhibits true deviance sensitivity to intensity, IC neurons should show enhanced responses to low-intensity deviant sounds presented among high-intensity standards. Contrary to this prediction, deviant responses were only enhanced when the standards and deviants differed in frequency. The results could be explained with a model assuming that IC neurons integrate over multiple frequency-tuned channels and that adaptation occurs within each channel independently. We used an adaptation paradigm with multiple repeated adaptors to measure the tuning widths of these adaptation channels in relation to the neurons’ overall tuning widths. PMID:27066835
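The proposed explanation can be caricatured in a few lines: each frequency channel adapts independently, so a frequency deviant lands in a fresh channel while a low-intensity deviant in the standard's channel inherits that channel's adaptation. The gains and decay factor are arbitrary illustrative values, not fitted parameters:

```python
class ChannelAdaptationNeuron:
    """Toy neuron summing frequency-tuned channels, with per-channel
    multiplicative adaptation that ignores stimulus intensity."""

    def __init__(self, n_channels=5, decay=0.5):
        self.gain = [1.0] * n_channels
        self.decay = decay

    def respond(self, channel, intensity):
        response = self.gain[channel] * intensity
        self.gain[channel] *= self.decay  # adaptation is per channel
        return response

neuron = ChannelAdaptationNeuron()
for _ in range(5):                      # high-intensity standards in channel 2
    neuron.respond(2, intensity=1.0)
weak_same_channel = neuron.respond(2, intensity=0.3)  # intensity deviant
weak_new_channel = neuron.respond(4, intensity=0.3)   # frequency deviant
# only the frequency deviant escapes adaptation
```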

  20. Synthesis of walking sounds for alleviating gait disturbances in Parkinson's disease.

    PubMed

    Rodger, Matthew W M; Young, William R; Craig, Cathy M

    2014-05-01

    Managing gait disturbances in people with Parkinson's disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain nonpharmacological methods, such as cueing, can induce transient improvements in gait. The approach adopted here is to use computationally generated sounds to help guide and improve walking actions. The first method described uses recordings of force data taken from the steps of a healthy adult, which in turn were used to synthesize realistic gravel-footstep sounds representing different spatio-temporal parameters of gait, such as step duration and step length. The second method involves sonifying, in real time, the swing phase of gait, using real-time motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action-based events can be used by people with Parkinson's to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson's disease patients are reported which show positive results for both techniques in reducing step length variability. Potential future directions for how these sound approaches can be used to manage gait disturbances in Parkinson's are also discussed.
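The real-time sonification maps a motion-capture gait parameter onto a synthesis control. A hypothetical linear mapping from foot swing speed to oscillator pitch illustrates the idea; the function name, frequency band, and speed range are assumptions, not the authors' engine:

```python
def swing_to_pitch(swing_speed, f_min=220.0, f_max=880.0, v_max=5.0):
    """Map instantaneous foot swing speed (m/s) linearly onto an
    oscillator frequency band (Hz), clamping out-of-range input."""
    s = max(0.0, min(swing_speed, v_max)) / v_max
    return f_min + s * (f_max - f_min)

# a stationary foot maps to the bottom of the band and a full-speed
# swing to the top; in a real system this value would drive a synth
```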
