Science.gov

Sample records for ear-canal sound pressure

  1. Sound pressure distribution and power flow within the gerbil ear canal from 100 Hz to 80 kHz

    PubMed Central

    Ravicz, Michael E.; Olson, Elizabeth S.; Rosowski, John J.

    2008-01-01

    Sound pressure was mapped in the bony ear canal of gerbils during closed-field sound stimulation at frequencies from 0.1 to 80 kHz. A 1.27-mm-diam probe-tube microphone or a 0.17-mm-diam fiber-optic miniature microphone was positioned along approximately longitudinal trajectories within the 2.3-mm-diam ear canal. Substantial spatial variations in sound pressure, sharp minima in magnitude, and half-cycle phase changes occurred at frequencies >30 kHz. The sound frequencies of these transitions increased with decreasing distance from the tympanic membrane (TM). Sound pressure measured orthogonally across the surface of the TM showed only small variations at frequencies below 60 kHz. Hence, the ear canal sound field can be described fairly well as a one-dimensional standing wave pattern. Ear-canal power reflectance estimated from longitudinal spatial variations was roughly constant at 0.2–0.5 at frequencies between 30 and 45 kHz. In contrast, reflectance increased at higher frequencies to at least 0.8 above 60 kHz. Sound pressure was also mapped in a microphone-terminated uniform tube—an “artificial ear.” Comparison with ear canal sound fields suggests that an artificial ear or “artificial cavity calibration” technique may underestimate the in situ sound pressure by 5–15 dB between 40 and 60 kHz. PMID:17902852
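
    For a one-dimensional standing-wave field like the one described above, the power reflectance can be read off from the depth of the pressure minima. The relation below is the textbook link between the standing-wave ratio and the reflection coefficient (standard plane-wave theory, not a formula quoted from the paper):

      \[
      \mathrm{SWR}=\frac{|p|_{\max}}{|p|_{\min}},\qquad
      |R|=\frac{\mathrm{SWR}-1}{\mathrm{SWR}+1},\qquad
      \mathcal{R}_{\mathrm{power}}=|R|^{2}.
      \]

    For example, a 10 dB spread between adjacent pressure maxima and minima gives SWR ≈ 3.2, |R| ≈ 0.52, and a power reflectance of about 0.27, consistent with the 0.2–0.5 range reported between 30 and 45 kHz.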

  2. Sound pressure distribution within natural and artificial human ear canals: Forward stimulation

    PubMed Central

    Ravicz, Michael E.; Tao Cheng, Jeffrey; Rosowski, John J.

    2014-01-01

    This work is part of a study of the interaction of sound pressure in the ear canal (EC) with tympanic membrane (TM) surface displacement. Sound pressures were measured with 0.5–2 mm spacing at three locations within the shortened natural EC or an artificial EC in human temporal bones: near the TM surface, within the tympanic ring plane, and in a plane transverse to the long axis of the EC. Sound pressure was also measured at 2-mm intervals along the long EC axis. The sound field is described well by the size and direction of planar sound pressure gradients, the location and orientation of standing-wave nodal lines, and the location of longitudinal standing waves along the EC axis. Standing-wave nodal lines perpendicular to the long EC axis are present on the TM surface >11–16 kHz in the natural or artificial EC. The range of sound pressures was larger in the tympanic ring plane than at the TM surface or in the transverse EC plane. Longitudinal standing-wave patterns were stretched. The tympanic-ring sound field is a useful approximation of the TM sound field, and the artificial EC approximates the natural EC. PMID:25480061

  3. Sound pressure distribution within natural and artificial human ear canals: forward stimulation.

    PubMed

    Ravicz, Michael E; Tao Cheng, Jeffrey; Rosowski, John J

    2014-12-01

    This work is part of a study of the interaction of sound pressure in the ear canal (EC) with tympanic membrane (TM) surface displacement. Sound pressures were measured with 0.5-2 mm spacing at three locations within the shortened natural EC or an artificial EC in human temporal bones: near the TM surface, within the tympanic ring plane, and in a plane transverse to the long axis of the EC. Sound pressure was also measured at 2-mm intervals along the long EC axis. The sound field is described well by the size and direction of planar sound pressure gradients, the location and orientation of standing-wave nodal lines, and the location of longitudinal standing waves along the EC axis. Standing-wave nodal lines perpendicular to the long EC axis are present on the TM surface >11-16 kHz in the natural or artificial EC. The range of sound pressures was larger in the tympanic ring plane than at the TM surface or in the transverse EC plane. Longitudinal standing-wave patterns were stretched. The tympanic-ring sound field is a useful approximation of the TM sound field, and the artificial EC approximates the natural EC.

  4. Investigation of the Sound Pressure Level (SPL) of earphones during music listening with the use of physical ear canal models

    NASA Astrophysics Data System (ADS)

    Aying, K. P.; Otadoy, R. E.; Violanda, R.

    2015-06-01

    This study investigates the sound pressure level (SPL) of insert-type earphones commonly used for music listening by the general populace. The SPL produced by each respondent's earphones was measured by plugging the earphone into a physical ear canal model. The duration of earphone use for music listening was also gathered through short interviews. Results show that 21% of the respondents exceeded the loudness/duration limits recommended by the World Health Organization (WHO).
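
    The abstract does not give the exact WHO loudness/duration limits that were applied; a minimal sketch, assuming the commonly cited equal-energy rule (85 dBA reference for 8 hours with a 3 dB exchange rate, the basis of WHO/NIOSH-style guidance), is shown below. The function name and default values are illustrative assumptions, not taken from the study.

      def allowed_hours(spl_dba, ref_level=85.0, ref_hours=8.0, exchange_db=3.0):
          """Permissible daily listening time under an equal-energy rule.

          Assumes an 85 dBA / 8 h reference with a 3 dB exchange rate (each
          +3 dB halves the allowed time); the study's exact criterion may differ.
          """
          return ref_hours / 2.0 ** ((spl_dba - ref_level) / exchange_db)

      # Example: an earphone level of 94 dBA at the eardrum -> about 1 h per day.
      print(round(allowed_hours(94.0), 2))

    Under these assumptions, a listener at 94 dBA stays within the limit for roughly one hour per day, and every additional 3 dB halves that time.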

  5. Distortion product otoacoustic emissions upon ear canal pressurization.

    PubMed

    Zebian, Makram; Schirkonyer, Volker; Hensel, Johannes; Vollbort, Sven; Fedtke, Thomas; Janssen, Thomas

    2013-04-01

    The purpose of this study was to quantify the change in distortion product otoacoustic emission (DPOAE) level upon ear canal pressurization. DPOAEs were measured on 12 normal-hearing human subjects for ear canal static pressures between -200 and +200 daPa in (50 ± 5) daPa steps. A clear dependence of DPOAE levels on the pressure was observed, with levels being highest at the maximum compliance of the middle ear, and decreasing on average by 2.3 dB per 50 daPa for lower and higher pressures. Ear canal pressurization can serve as a tool for improving the detectability of DPOAEs in the case of middle-ear dysfunction.

  6. Comparison of forward (ear-canal) and reverse (round-window) sound stimulation of the cochlea.

    PubMed

    Stieger, Christof; Rosowski, John J; Nakajima, Hideko Heidi

    2013-07-01

    The cochlea is normally driven with "forward" stimulation, in which sound is introduced to the ear canal. Alternatively, the cochlea can be stimulated at the round window (RW) using an actuator. During RW "reverse" stimulation, the acoustic flow starting at the RW does not necessarily take the same path as during forward stimulation. To understand the differences between forward and reverse stimulation, we measured ear-canal pressure, stapes velocity, RW velocity, and intracochlear pressures in scala vestibuli (SV) and scala tympani (ST) of fresh human temporal bones. During forward stimulation, the cochlear drive (differential pressure across the partition) results from the large difference in magnitude between the pressures of SV and ST, which occurs due to the high compliance of the RW. During reverse stimulation, the relatively high impedance of the middle ear causes the pressures of SV and ST to have similar magnitudes, and the differential pressure results primarily from the difference in phase of the pressures. Furthermore, the sound path differs between forward and reverse stimulation, such that motion through a third window is more significant during reverse stimulation. Additionally, we determined that although stapes velocity is a good estimate of cochlear drive during forward stimulation, it is not a good measure during reverse stimulation. This article is part of a special issue entitled "MEMRO 2012". Copyright © 2012 Elsevier B.V. All rights reserved.
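
    The magnitude-dominated versus phase-dominated cochlear drive described above can be written out explicitly; this is simply an algebraic restatement of the abstract, with Δφ the phase difference between the two scala pressures:

      \[
      \Delta P = P_{SV}-P_{ST},\qquad
      |\Delta P|\approx
      \begin{cases}
      |P_{SV}|, & |P_{SV}|\gg|P_{ST}| \text{ (forward stimulation)},\\
      2\,|P_{SV}|\,\left|\sin(\Delta\varphi/2)\right|, & |P_{SV}|\approx|P_{ST}| \text{ (reverse stimulation)}.
      \end{cases}
      \]

    In the reverse case the differential pressure, and hence the cochlear drive, comes almost entirely from the phase difference between the scala vestibuli and scala tympani pressures.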

  7. Specification of absorbed-sound power in the ear canal: Application to suppression of stimulus frequency otoacoustic emissions

    PubMed Central

    Keefe, Douglas H.; Schairer, Kim S.

    2011-01-01

    An insert ear-canal probe including sound source and microphone can deliver a calibrated sound power level to the ear. The aural power absorbed is proportional to the product of mean-squared forward pressure, ear-canal area, and absorbance, in which the sound field is represented using forward (reverse) waves traveling toward (away from) the eardrum. Forward pressure is composed of incident pressure and its multiple internal reflections between eardrum and probe. Based on a database of measurements in normal-hearing adults from 0.22 to 8 kHz, the transfer-function level of forward relative to incident pressure is boosted below 0.7 kHz and within 4 dB above. The level of forward relative to total pressure is maximal close to 4 kHz with wide variability across ears. A spectrally flat incident-pressure level across frequency produces a nearly flat absorbed power level, in contrast to 19 dB changes in pressure level. Calibrating an ear-canal sound source based on absorbed power may be useful in audiological and research applications. Specifying the tip-to-tail level difference of the suppression tuning curve of stimulus frequency otoacoustic emissions in terms of absorbed power reveals increased cochlear gain at 8 kHz relative to the level difference measured using total pressure. PMID:21361437
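
    The proportionality stated above corresponds to the plane-wave expression for the power carried by the forward-traveling wave, scaled by the absorbance. The ρc factor below is the standard characteristic impedance of air and is an inference from plane-wave acoustics rather than a quotation from the paper:

      \[
      W_{\mathrm{abs}} \;=\; \alpha\,\frac{\langle p_{\mathrm{fwd}}^{2}\rangle\,A}{\rho c},
      \]

    where ⟨p_fwd²⟩ is the mean-squared forward pressure, A the ear-canal cross-sectional area, α the absorbance (1 − |R|²), and ρc ≈ 413 Pa·s/m the characteristic impedance of air.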

  8. Pressure transfer function and absorption cross section from the diffuse field to the human infant ear canal.

    PubMed

    Keefe, D H; Bulen, J C; Campbell, S L; Burns, E M

    1994-01-01

    The diffuse-field pressure transfer function from a reverberant field to the ear canal of human infants, ages 1, 3, 6, 12, and 24 months, has been measured from 125-10700 Hz. The source was a loudspeaker using pink noise, and the diffuse-field pressure and the ear-canal pressure were simultaneously measured using a spatial averaging technique in a reverberant room. The results in most subjects show a two-peak structure in the 2-6-kHz range, corresponding to the ear-canal and concha resonances. The ear-canal resonance frequency decreases from 4.4 kHz at age 1 month to 2.9 kHz at age 24 months. The concha resonance frequency decreases from 5.5 kHz at age 1 month to 4.5 kHz at age 24 months. Below 2 kHz, the diffuse-field transfer function shows effects due to the torsos of the infant and parent, and varies with how the infant is held. Comparisons are reported of the diffuse-field absorption cross section for infants relative to adults. This quantity is a measure of power absorbed by the middle ear from a diffuse sound field, and large differences are observed in infants relative to adults. The radiation efficiencies of the infant and the adult ear are small at low frequencies, near unity at midfrequencies, and decrease at higher frequencies. The process of ear-canal development is not yet complete at age 24 months. The results have implications for experiments on hearing in infants.

  9. Hands-free device control using sound picked up in the ear canal

    NASA Astrophysics Data System (ADS)

    Chhatpar, Siddharth R.; Ngia, Lester; Vlach, Chris; Lin, Dong; Birkhimer, Craig; Juneja, Amit; Pruthi, Tarun; Hoffman, Orin; Lewis, Tristan

    2008-04-01

    Hands-free control of unmanned ground vehicles is essential for soldiers, bomb disposal squads, and first responders. Having their hands free for other equipment and tasks allows them to be safer and more mobile. Currently, the most successful hands-free control devices are speech-command based. However, these devices use external microphones, and in field environments, e.g., war zones and fire sites, their performance suffers because of loud ambient noise: typically above 90dBA. This paper describes the development of technology using the ear as an output source that can provide excellent command recognition accuracy even in noisy environments. Instead of picking up speech radiating from the mouth, this technology detects speech transmitted internally through the ear canal. Discreet tongue movements also create air pressure changes within the ear canal, and can be used for stealth control. A patented earpiece was developed with a microphone pointed into the ear canal that captures these signals generated by tongue movements and speech. The signals are transmitted from the earpiece to an Ultra-Mobile Personal Computer (UMPC) through a wired connection. The UMPC processes the signals and utilizes them for device control. The processing can include command recognition, ambient noise cancellation, acoustic echo cancellation, and speech equalization. Successful control of an iRobot PackBot has been demonstrated with both speech (13 discrete commands) and tongue (5 discrete commands) signals. In preliminary tests, command recognition accuracy was 95% with speech control and 85% with tongue control.

  10. Comparison of nine methods to estimate ear-canal stimulus levels

    PubMed Central

    Souza, Natalie N.; Dhar, Sumitrajit; Neely, Stephen T.; Siegel, Jonathan H.

    2014-01-01

    The reliability of nine measures of the stimulus level in the human ear canal was compared by measuring the sensitivity of behavioral hearing thresholds to changes in the depth of insertion of an otoacoustic emission probe. Four measures were the ear-canal pressure, the eardrum pressure estimated from it and the pressure measured in an ear simulator with and without compensation for insertion depth. The remaining five quantities were derived from the ear-canal pressure and the Thévenin-equivalent source characteristics of the probe: Forward pressure, initial forward pressure, the pressure transmitted into the middle ear, eardrum sound pressure estimated by summing the magnitudes of the forward and reverse pressure (integrated pressure) and absorbed power. Two sets of behavioral thresholds were measured in 26 subjects from 0.125 to 20 kHz, with the probe inserted at relatively deep and shallow positions in the ear canal. The greatest dependence on insertion depth was for transmitted pressure and absorbed power. The measures with the least dependence on insertion depth throughout the frequency range (best performance) included the depth-compensated simulator, eardrum, forward, and integrated pressures. Among these, forward pressure is advantageous because it quantifies stimulus phase. PMID:25324079
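
    Several of the derived measures named above (forward pressure, integrated pressure, absorbed power) follow from the standard plane-wave decomposition of the measured ear-canal pressure once the probe's Thévenin-equivalent source is known. The sketch below shows those textbook relations; the inputs (P_ec, Ps, Zs, area_m2) are placeholders and the processing is illustrative, not the authors' exact pipeline.

      import numpy as np

      RHO_C = 413.0  # characteristic impedance of air, Pa*s/m (room-temperature value)

      def canal_wave_quantities(P_ec, Ps, Zs, area_m2):
          """Illustrative plane-wave decomposition of a measured ear-canal pressure.

          P_ec    : complex ear-canal pressure at the probe microphone (rms)
          Ps, Zs  : Thevenin-equivalent source pressure and impedance of the probe
          area_m2 : assumed ear-canal cross-sectional area at the probe tip
          """
          Z0 = RHO_C / area_m2                    # characteristic impedance of the canal
          U = (Ps - P_ec) / Zs                    # volume velocity delivered by the probe
          Z_load = P_ec / U                       # ear-canal input impedance
          R = (Z_load - Z0) / (Z_load + Z0)       # pressure reflectance
          P_fwd = P_ec / (1.0 + R)                # forward pressure (incident + re-reflections)
          P_rev = P_ec - P_fwd                    # reverse (reflected) pressure
          P_int = np.abs(P_fwd) + np.abs(P_rev)   # "integrated" pressure magnitude
          W_abs = np.abs(P_fwd) ** 2 * (1.0 - np.abs(R) ** 2) / Z0  # absorbed power (rms pressures)
          return {"reflectance": R, "forward": P_fwd, "reverse": P_rev,
                  "integrated": P_int, "absorbed_power": W_abs}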

  11. Inverse solution of ear-canal area function from reflectance

    PubMed Central

    Rasetshwane, Daniel M.; Neely, Stephen T.

    2011-01-01

    A number of acoustical applications require the transformation of acoustical quantities, such as impedance and pressure that are measured at the entrance of the ear canal, to quantities at the eardrum. This transformation often requires knowledge of the shape of the ear canal. Previous attempts to measure ear-canal area functions were either invasive, non-reproducible, or could only measure the area function up to a point mid-way along the canal. A method to determine the area function of the ear canal from measurements of acoustic impedance at the entrance of the ear canal is described. The method is based on a solution to the inverse problem in which measurements of impedance are used to calculate reflectance, which is then used to determine the area function of the canal. The mean ear-canal area function determined using this method is similar to mean ear-canal area functions measured by other researchers using different techniques. The advantage of the proposed method over previous methods is that it is non-invasive, fast, and reproducible. PMID:22225043
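
    A heavily simplified way to see the reflectance-to-area step is the first-order layer-peeling idea sketched below: each sample of the time-domain reflectance is treated as the local reflection coefficient between adjacent canal segments, which fixes the ratio of successive areas. The paper solves the full inverse problem (accounting for multiple internal reflections); this sketch, including its function name and sampling assumptions, is only illustrative.

      import numpy as np

      def area_profile_first_order(r_imp, area0, c=343.0, fs=48000.0):
          """First-order area-function estimate from a time-domain reflectance.

          r_imp : samples of the reflectance impulse response (one per time step)
          area0 : assumed cross-sectional area at the measurement point (m^2)
          Ignores multiple internal reflections, so it is a conceptual sketch of
          the idea only, not the full inverse solution described in the paper.
          """
          dx = c / (2.0 * fs)                 # depth step per sample (round-trip travel)
          areas = [area0]
          for r in np.clip(np.asarray(r_imp, dtype=float), -0.999, 0.999):
              # local reflection coefficient r = (A_k - A_{k+1}) / (A_k + A_{k+1})
              areas.append(areas[-1] * (1.0 - r) / (1.0 + r))
          x = np.arange(len(areas)) * dx
          return x, np.array(areas)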

  12. Isolating the auditory system from acoustic noise during functional magnetic resonance imaging: Examination of noise conduction through the ear canal, head, and body

    PubMed Central

    Ravicz, Michael E.; Melcher, Jennifer R.

    2007-01-01

    Approaches were examined for reducing acoustic noise levels heard by subjects during functional magnetic resonance imaging (fMRI), a technique for localizing brain activation in humans. Specifically, it was examined whether a device for isolating the head and ear canal from sound (a “helmet”) could add to the isolation provided by conventional hearing protection devices (i.e., earmuffs and earplugs). Both subjective attenuation (the difference in hearing threshold with versus without isolation devices in place) and objective attenuation (difference in ear-canal sound pressure) were measured. In the frequency range of the most intense fMRI noise (1–1.4 kHz), a helmet, earmuffs, and earplugs used together attenuated perceived sound by 55–63 dB, whereas the attenuation provided by the conventional devices alone was substantially less: 30–37 dB for earmuffs, 25–28 dB for earplugs, and 39–41 dB for earmuffs and earplugs used together. The data enabled the clarification of the relative importance of ear canal, head, and body conduction routes to the cochlea under different conditions: At low frequencies (≤500 Hz), the ear canal was the dominant route of sound conduction to the cochlea for all of the device combinations considered. At higher frequencies (>500 Hz), the ear canal was the dominant route when either earmuffs or earplugs were worn. However, the dominant route of sound conduction was through the head when both earmuffs and earplugs were worn, through both ear canal and body when a helmet and earmuffs were worn, and through the body when a helmet, earmuffs, and earplugs were worn. It is estimated that a helmet, earmuffs, and earplugs together will reduce the most intense fMRI noise levels experienced by a subject to 60–65 dB SPL. Even greater reductions in noise should be achievable by isolating the body from the surrounding noise field. PMID:11206150

  13. [Effect size on resonance of the outer ear canal by simulation of middle ear lesions using a temporal bone preparation].

    PubMed

    Scheinpflug, L; Vorwerk, U; Begall, K

    1995-01-01

    Using a model of the external and middle ear, it is possible to simulate various, exactly defined pathological conditions of the middle ear and to describe their influence on ear canal resonance. The starting point of the investigations was fresh postmortem preparations of 8 human temporal bones with an intact eardrum and retained ear canal skin. The compliance of the middle ear did not differ significantly from clinical data for subjects with healthy ears. After antrotomy, it is possible to simulate pathological conditions of the middle ear one after another in the same temporal bone. The influence of the altered middle ear conditions on eardrum compliance, ear canal volume, and the resonance curve of the external ear canal was investigated. For example, the middle ear was filled with water to approximate the conditions of acute serous otitis media. In this middle ear condition, a significant increase in sound pressure amplification was found, averaging 4 decibels compared with the unchanged temporal bone model. A small increase in resonance frequency was also measured. The advantages of this model are its approximately physiological conditions and the constant dimensions of the external and middle ear.

  14. Comparing otoacoustic emissions evoked by chirp transients with constant absorbed sound power and constant incident pressure magnitude.

    PubMed

    Keefe, Douglas H; Feeney, M Patrick; Hunter, Lisa L; Fitzpatrick, Denis F

    2017-01-01

    Human ear-canal properties of transient acoustic stimuli are contrasted that utilize measured ear-canal pressures in conjunction with measured acoustic pressure reflectance and admittance. These data are referenced to the tip of a probe snugly inserted into the ear canal. Promising procedures to calibrate across frequency include stimuli with controlled levels of incident pressure magnitude, absorbed sound power, and forward pressure magnitude. An equivalent pressure at the eardrum is calculated from these measured data using a transmission-line model of ear-canal acoustics parameterized by acoustically estimated ear-canal area at the probe tip and length between the probe tip and eardrum. Chirp stimuli with constant incident pressure magnitude and constant absorbed sound power across frequency were generated to elicit transient-evoked otoacoustic emissions (TEOAEs), which were measured in normal-hearing adult ears from 0.7 to 8 kHz. TEOAE stimuli had similar peak-to-peak equivalent sound pressure levels across calibration conditions. Frequency-domain TEOAEs were compared using signal level, signal-to-noise ratio (SNR), coherence synchrony modulus (CSM), group delay, and group spread. Time-domain TEOAEs were compared using SNR, CSM, instantaneous frequency and instantaneous bandwidth. Stimuli with constant incident pressure magnitude or constant absorbed sound power across frequency produce generally similar TEOAEs up to 8 kHz.

  15. Comparing otoacoustic emissions evoked by chirp transients with constant absorbed sound power and constant incident pressure magnitude

    PubMed Central

    Keefe, Douglas H.; Feeney, M. Patrick; Hunter, Lisa L.; Fitzpatrick, Denis F.

    2017-01-01

    Human ear-canal properties of transient acoustic stimuli are contrasted that utilize measured ear-canal pressures in conjunction with measured acoustic pressure reflectance and admittance. These data are referenced to the tip of a probe snugly inserted into the ear canal. Promising procedures to calibrate across frequency include stimuli with controlled levels of incident pressure magnitude, absorbed sound power, and forward pressure magnitude. An equivalent pressure at the eardrum is calculated from these measured data using a transmission-line model of ear-canal acoustics parameterized by acoustically estimated ear-canal area at the probe tip and length between the probe tip and eardrum. Chirp stimuli with constant incident pressure magnitude and constant absorbed sound power across frequency were generated to elicit transient-evoked otoacoustic emissions (TEOAEs), which were measured in normal-hearing adult ears from 0.7 to 8 kHz. TEOAE stimuli had similar peak-to-peak equivalent sound pressure levels across calibration conditions. Frequency-domain TEOAEs were compared using signal level, signal-to-noise ratio (SNR), coherence synchrony modulus (CSM), group delay, and group spread. Time-domain TEOAEs were compared using SNR, CSM, instantaneous frequency and instantaneous bandwidth. Stimuli with constant incident pressure magnitude or constant absorbed sound power across frequency produce generally similar TEOAEs up to 8 kHz. PMID:28147608

  16. Challenges in fitting a hearing aid to a severely collapsed ear canal and mixed hearing loss.

    PubMed

    Oeding, Kristi; Valente, Michael; Chole, Richard

    2012-04-01

    Collapsed ear canals typically occur when an outside force, such as a headset for audiometric testing, is present. However, when a collapsed ear canal occurs without external pressure, this creates a challenge not only for performing audiometric testing but also for coupling a hearing aid to the ear canal. This case report highlights the challenges associated with fitting a hearing aid on a patient with a severe anterior-posterior collapsed ear canal with a mixed hearing loss. A 67-yr-old female originally presented to Washington University in St. Louis School of Medicine in 1996 with a long-standing history of bilateral otosclerosis. She had chronic ear infections in the right ear and a severely collapsed ear canal in the left ear and was fit with a bone anchored hearing aid (BAHA®) on the right side in 2003. However, benefit from the BAHA started to decrease due to changes in hearing, and a different hearing solution was needed. It was proposed that a hearing aid be fit to her collapsed left ear canal; however, trying to couple a hearing aid to the collapsed ear canal required unique noncustom earmold solutions. This case study highlights some of the obstacles and potential solutions for coupling a hearing aid to a severely collapsed ear canal. American Academy of Audiology.

  17. Acceleration induced water removal from ear canals.

    NASA Astrophysics Data System (ADS)

    Kang, Hosung; Averett, Katelee; Jung, Sunghwan

    2017-11-01

    Children and adults commonly experience having water trapped in the ear canals after swimming. To remove the water, individuals will shake their head sideways. Since a child's ear canal has a smaller diameter, it requires more acceleration of the head to remove the trapped water. In this study, we theoretically and experimentally investigated the acceleration required to break the surface meniscus of the water in artificial ear canals and hydrophobic-coated glass tubes. In experiments, ear canal models were 3D-printed from a CT-scanned human head. Also, glass tubes were coated with silane to match the hydrophobicity of ear canals. Then, using a linear stage, we measured the acceleration values required to forcefully eject the water from the artificial ear canals and glass tubes. A theoretical model was developed to predict the critical acceleration at a given tube diameter and water volume by using a modified Rayleigh-Taylor instability. Furthermore, this research can shed light on the potential for long-term brain injury and damage caused by shaking the head to push the water out of the ear canal. This research was supported by National Science Foundation Grant CBET-1604424.
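
    Dimensional analysis of the modified Rayleigh-Taylor picture invoked above suggests how the critical acceleration scales; the order-one constant C (and any dependence on the trapped-water volume) comes from the authors' model and is not given in the abstract:

      \[
      a_{\mathrm{crit}} \sim C\,\frac{\sigma}{\rho R^{2}},
      \]

    where σ is the surface tension of water, ρ its density, and R the canal radius. With σ ≈ 0.07 N/m, ρ ≈ 1000 kg/m³, and R ≈ 2 mm, the scale σ/(ρR²) is roughly 20 m/s², i.e., accelerations of several g once C is included, and the 1/R² dependence is consistent with a child's narrower canal requiring a larger head acceleration.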

  18. A miniaturized laser-Doppler-system in the ear canal

    NASA Astrophysics Data System (ADS)

    Schmidt, T.; Gerhardt, U.; Kupper, C.; Manske, E.; Witte, H.

    2013-03-01

    Gathering vibrational data from the human middle ear is quite difficult. To date, the well-known acoustic probe is used to estimate audiometric parameters, e.g., otoacoustic emissions, wideband reflectance, and the stapedius reflex. An acoustic probe contains at least one microphone and one loudspeaker. Determining the acoustic parameters of the ear canal is essential for the comparability of test-retest measurement situations. Compared to acoustic tubes, the ear canal wall cannot be described as a sound-hard boundary: sound energy is partly absorbed by the ear canal wall. In addition, the ear canal has a complex geometric shape (Stinson and Lawton). These conditions are one reason for the interindividual variability in input impedance measurements of the tympanic membrane. The method of laser Doppler vibrometry is well described in the literature. Using this method, the surface velocity of vibrating bodies can be determined contact-free. Conventional laser Doppler systems (LDS) for auditory research are mounted on a surgical microscope, and since a free line of sight to the eardrum is required, handling of those systems is complicated. We introduce the concept of a miniaturized vibrometer intended to be applied directly in the ear canal for contact-free measurement of tympanic membrane surface vibration. The proposed interferometer is based on a Fabry-Perot etalon with a DFB laser diode as the light source. The fiber-based Fabry-Perot interferometer is characterized by a reduced size compared with, e.g., Michelson or Mach-Zehnder systems. A phase-generated carrier was used to determine the phase difference in the interferometer. To fit the sensor head in the ear canal, the required shape of the probe was generated from the geometrical data of 70 ear molds. The suggested prototype is built from a single-mode optical fiber with a GRIN lens acting as a fiber collimator. The probe has a diameter of 1.8 mm and a

  19. The measurement of Eustachian tube function in a hyperbaric chamber using an ear canal microphone.

    PubMed

    Fischer, Hans-Georg; Koch, Andreas; Kähler, Wataru; Pohl, Michael; Pau, Hans-Wilhelm; Zehlicke, Thorsten

    2016-03-01

    The purpose of this study was to further the understanding of the opening of the Eustachian tube in relation to changes in barometric pressure. An ear canal microphone was used to measure the specific sounds related to tube opening and possible eardrum movements. Five subjects with normal tube function were examined in a hyperbaric chamber (up to 304 kPa). All active and passive equalization events were recorded and correlated with the subjectively perceived pressure regulation in the measured ear. The signals recorded were clear and reproducible. The acoustic analysis distinguished between the different kinds of equalization. Subjective impressions were confirmed by the recorded frequency of acoustic phenomena (clicks). During compression, the sequence of active equalization manoeuvres was in a more regular and steady pattern than during decompression, when the click sounds varied. The study established a simple technical method for analyzing the function of the Eustachian tube and provided new information about barometric pressure regulation of the middle ear.

  20. Compensating for ear-canal acoustics when measuring otoacoustic emissions

    PubMed Central

    Charaziak, Karolina K.; Shera, Christopher A.

    2017-01-01

    Otoacoustic emissions (OAEs) provide an acoustic fingerprint of the inner ear, and changes in this fingerprint may indicate changes in cochlear function arising from efferent modulation, aging, noise trauma, and/or exposure to harmful agents. However, the reproducibility and diagnostic power of OAE measurements is compromised by the variable acoustics of the ear canal, in particular, by multiple reflections and the emergence of standing waves at relevant frequencies. Even when stimulus levels are controlled using methods that circumvent standing-wave problems (e.g., forward-pressure-level calibration), distortion-product otoacoustic emission (DPOAE) levels vary with probe location by 10–15 dB near half-wave resonant frequencies. The method presented here estimates the initial outgoing OAE pressure wave at the eardrum from measurements of the conventional OAE, allowing one to separate the emitted OAE from the many reflections trapped in the ear canal. The emitted pressure level (EPL) represents the OAE level that would be recorded were the ear canal replaced by an infinite tube with no reflections. When DPOAEs are expressed using EPL, their variation with probe location decreases to the test–retest repeatability of measurements obtained at similar probe positions. EPL provides a powerful way to reduce the variability of OAE measurements and improve their ability to detect cochlear changes. PMID:28147590
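
    One schematic way to see the EPL idea (ignoring the probe-to-eardrum propagation phase that the full method carries along) is that the trapped reflections form a geometric series: the initial emitted wave reflects repeatedly between the probe (reflectance R_src, looking back into the probe) and the eardrum (reflectance R_TM). This is a sketch of the reasoning, not the paper's exact expression:

      \[
      P_{\mathrm{meas}} \approx P_{\mathrm{emit}}\,\frac{1+R_{\mathrm{src}}}{1-R_{\mathrm{src}}R_{\mathrm{TM}}}
      \quad\Longrightarrow\quad
      P_{\mathrm{EPL}} \approx P_{\mathrm{meas}}\,\frac{1-R_{\mathrm{src}}R_{\mathrm{TM}}}{1+R_{\mathrm{src}}}.
      \]

    Dividing out the round-trip factor removes the standing-wave (half-wave resonance) sensitivity that makes conventional OAE levels depend on probe position.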

  1. Do high sound pressure levels of crowing in roosters necessitate passive mechanisms for protection against self-vocalization?

    PubMed

    Claes, Raf; Muyshondt, Pieter G G; Dirckx, Joris J J; Aerts, Peter

    2018-02-01

    High sound pressure levels (>120dB) cause damage or death of the hair cells of the inner ear, hence causing hearing loss. Vocalization differences are present between hens and roosters. Crowing in roosters is reported to produce sound pressure levels of 100dB measured at a distance of 1m. In this study we measured the sound pressure levels that exist at the entrance of the outer ear canal. We hypothesize that roosters may benefit from a passive protective mechanism while hens do not require such a mechanism. Audio recordings at the level of the entrance of the outer ear canal of crowing roosters, made in this study, indeed show that a protective mechanism is needed as sound pressure levels can reach amplitudes of 142.3dB. Audio recordings made at varying distances from the crowing rooster show that at a distance of 0.5m sound pressure levels already drop to 102dB. Micro-CT scans of a rooster and chicken head show that in roosters the auditory canal closes when the beak is opened. In hens the diameter of the auditory canal only narrows but does not close completely. A morphological difference between the sexes in shape of a bursa-like slit which occurs in the outer ear canal causes the outer ear canal to close in roosters but not in hens. Copyright © 2017 Elsevier GmbH. All rights reserved.

  2. Ewing Sarcoma of the External Ear Canal

    PubMed Central

    Kecelioglu Binnetoglu, Kiymet; Gerin, Fatma; Sari, Murat

    2016-01-01

    Background. Ewing sarcoma (ES) is a high-grade malignant tumor that has skeletal and extraskeletal forms and consists of small round cells. In the head and neck region, reported localization of extraskeletal ES includes the larynx, thyroid gland, submandibular gland, nasal fossa, pharynx, skin, and parotid gland, but not the external ear canal. Methods. We present the unique case of a 2-year-old boy with extraskeletal ES arising from the external ear canal, mimicking auricular hematoma. Results. Surgery was performed and a VAC/IE (vincristine, adriamycin, cyclophosphamide alternating with ifosfamide, and etoposide) regimen was used for adjuvant chemotherapy for 12 months. Conclusion. The clinician should consider extraskeletal ES when diagnosing tumors localized in the head and neck region because it may be manifested by a nonspecific clinical picture mimicking common otorhinolaryngologic disorders. PMID:27313930

  3. The path of a click stimulus from ear canal to umbo.

    PubMed

    Milazzo, Mario; Fallah, Elika; Carapezza, Michael; Kumar, Nina S; Lei, Jason H; Olson, Elizabeth S

    2017-03-01

    The tympanic membrane (TM) has a key role in transmitting sounds to the inner ear, but a concise description of how the TM performs this function remains elusive. This paper probes TM operation by applying a free field click stimulus to the gerbil ear and exploring the consequent motions of the TM and umbo. Motions of the TM were measured both on radial tracks starting close to the umbo and on a grid distal and adjacent to the umbo. The experimental results confirmed the high fidelity of sound transmission from the ear canal to the umbo. A delay of 5-15 μs was seen in the onset of TM motion between points just adjacent to the umbo and mid-radial points. The TM responded with a ringing motion, with different locations possessing different primary ringing frequencies. A simple analytic model from the literature, treating the TM as a string, was used to explore the experimental results. The click-based experiments and analysis led to the following description of TM operation: A transient sound pressure on the TM causes a transient initial TM motion that is maximal approximately at the TM's radial midpoints. Mechanical forces generated by this initial prominent TM distortion then pull the umbo inward, leading to a delayed umbo response. The initial TM deformation also gives rise to prolonged mechanical ringing on the TM that does not result in significant umbo motion, likely due to destructive interference from the range of ringing frequencies. Thus, the umbo's response is a high-fidelity representation of the transient stimulus. Because any sound can be considered as a consecutive series of clicks, this description is applicable to any sound stimulus. Copyright © 2017 Elsevier B.V. All rights reserved.
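
    The "TM as a string" model mentioned above has simple modal ringing frequencies; the relation below is just the ideal fixed-fixed string result, included to make the model concrete (the parameter values used in the paper are not given in the abstract):

      \[
      f_{n} = \frac{n}{2L}\sqrt{\frac{T}{\mu}}, \qquad n = 1, 2, 3, \dots
      \]

    where L is the effective string length, T its tension, and μ its mass per unit length. Different radial locations ringing at different primary frequencies is consistent with locally varying effective L, T, or μ across the membrane.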

  4. Factors that introduce intrasubject variability into ear-canal absorbance measurements.

    PubMed

    Voss, Susan E; Stenfelt, Stefan; Neely, Stephen T; Rosowski, John J

    2013-07-01

    Wideband immittance measures can be useful in analyzing acoustic sound flow through the ear and also have diagnostic potential for the identification of conductive hearing loss as well as causes of conductive hearing loss. To interpret individual measurements, the variability in test–retest data must be described and quantified. Contributors to variability in ear-canal absorbance–based measurements are described in this article. These include assumptions related to methodologies and issues related to the probe fit within the ear and potential acoustic leaks. Evidence suggests that variations in ear-canal cross-sectional area or measurement location are small relative to variability within a population. Data are shown to suggest that the determination of the Thévenin equivalent of the ER-10C probe introduces minimal variability and is independent of the foam ear tip itself. It is suggested that acoustic leaks in the coupling of the ear tip to the ear canal lead to substantial variations and that this issue needs further work in terms of potential criteria to identify an acoustic leak. In addition, test–retest data from the literature are reviewed.

  5. Posture systematically alters ear-canal reflectance and DPOAE properties

    PubMed Central

    Voss, Susan E.; Adegoke, Modupe F.; Horton, Nicholas J.; Sheth, Kevin N.; Rosand, Jonathan; Shera, Christopher A.

    2010-01-01

    Several studies have demonstrated that the auditory system is sensitive to changes in posture, presumably through changes in intracranial pressure (ICP) that in turn alter the intracochlear pressure, which affects the stiffness of the middle-ear system. This observation has led to efforts to develop an ear-canal-based noninvasive diagnostic measure for monitoring ICP, which is currently monitored invasively via access through the skull or spine. Here, we demonstrate the effects of postural changes, and presumably ICP changes, on distortion product otoacoustic emissions (DPOAE) magnitude, DPOAE angle, and power reflectance. Measurements were made on 12 normal-hearing subjects in two postural positions: upright at 90 degrees and tilted at −45 degrees to the horizontal. Measurements on each subject were repeated five times across five separate measurement sessions. All three measures showed significant changes (p < 0.001) between upright and tilted for frequencies between 500 and 2000 Hz, and DPOAE angle changes were significant at all measured frequencies (500–4000 Hz). Intrasubject variability, assessed via standard deviations for each subject’s multiple measurements, was generally smaller in the upright position relative to the tilted position. PMID:20227475

  6. Bilateral external ear canal osteomas - discussion on a clinical case.

    PubMed

    Gheorghe, D C; Stanciu, A E; Ulici, A; Zamfir-Chiru-Anton, A

    2016-01-01

    Osteomas of the external ear are uncommon benign tumors that need to be differentiated from external ear canal exostoses, bony proliferations linked mainly to cold-water exposure. Clinical manifestations vary from no symptoms to recurrent local infections and external ear cholesteatoma. Objective: to present a rare case that we did not find described in the published literature. A patient with multiple long-term asymptomatic osteomas of both external ear canals presented to our department. Material: Data recorded in the patient's medical record were reviewed and analyzed. Surgery was performed and histology confirmed the presumptive diagnosis. Results: There was a discrepancy between the local severity of the disease, with complete obstruction of his ear canals, and the long-term disease-free status of the patient. Conclusion: We discuss possible etiologies of these multiple bilateral osteomas of the EAC in light of the clinical and surgical findings.

  7. Gain affected by the interior shape of the ear canal.

    PubMed

    Yu, Jen-Fang; Chen, Yen-Sheng; Cheng, Wei-De

    2011-06-01

    This study investigated the correlation between gain distribution and the interior shape of the human external ear canal. It was a cross-sectional study of gain measurement at the first and second bends, conducted at Chang Gung Memorial Hospital and Chang Gung University. Fifteen ears from patients aged between 20 and 30 years (8 men/7 women) with normal hearing and middle ears were included. Stimulus frequencies of 500, 1000, 2000, 3000, and 4000 Hz were based on the standard clinical hearing test. Measurement positions close to the tympanic membrane and at the first and second bends were confirmed using an otoscope. Real-ear measurement was adopted to analyze canal resonance in the human external ear. The study found that gain at a stimulus frequency of 4000 Hz was affected by the interior shape of the ear canal (P < .005), particularly at the first and second bends, whereas gain at 2000 Hz was affected only by the length of the ear canal (P < .005). Thus, gain was significantly affected not only by the length of the external auditory canal (EAC) but also by the interior shape of the EAC. The findings may have potential clinical applications in canalplasty and congenital aural atresia surgery and may be used to guide surgeries that attempt to reshape the ear canal to achieve more desirable hearing outcomes.

  8. Ear canal dynamic motion as a source of power for in-ear devices

    NASA Astrophysics Data System (ADS)

    Delnavaz, Aidin; Voix, Jérémie

    2013-02-01

    Ear canal deformation caused by temporomandibular joint (jaw joint) activity, also known as "ear canal dynamic motion," is introduced in this paper as a candidate source of power to possibly recharge hearing aid batteries. The geometrical deformation of the ear canal is quantified in 3D by laser scanning of different custom ear moulds. An experimental setup is proposed to measure the amount of power potentially available from this source. The results show that 9 mW of power is available from a 15 mm³ dynamic change in the ear canal volume. Finally, the dynamic motion and power capability of the ear canal are investigated in a group of 12 subjects.

  9. Air-Leak Effects on Ear-Canal Acoustic Absorbance

    PubMed Central

    Rasetshwane, Daniel M.; Kopun, Judy G.; Gorga, Michael P.; Neely, Stephen T.

    2015-01-01

    Objective: Accurate ear-canal acoustic measurements, such as wideband acoustic admittance, absorbance, and otoacoustic emissions, require that the measurement probe be tightly sealed in the ear canal. Air leaks can compromise the validity of the measurements, interfere with calibrations, and increase variability. There are no established procedures for determining the presence of air leaks or criteria for what size leak would affect the accuracy of ear-canal acoustic measurements. The purpose of this study was to determine ways to quantify the effects of air leaks and to develop objective criteria to detect their presence. Design: Air leaks were simulated by modifying the foam tips that are used with the measurement probe through insertion of thin plastic tubing. To analyze the effect of air leaks, acoustic measurements were taken with both modified and unmodified foam tips in brass-tube cavities and human ear canals. Measurements were initially made in cavities to determine the range of critical leaks. Subsequently, data were collected in ears of 21 adults with normal hearing and normal middle-ear function. Four acoustic metrics were used for predicting the presence of air leaks and for quantifying these leaks: (1) low-frequency admittance phase (averaged over 0.1–0.2 kHz), (2) low-frequency absorbance, (3) the ratio of compliance volume to physical volume (CV/PV), and (4) the air-leak resonance frequency. The outcome variable in this analysis was the absorbance change (Δabsorbance), which was calculated in eight frequency bands. Results: The trends were similar for both the brass cavities and the ear canals. ΔAbsorbance generally increased with air-leak size and was largest for the lower frequency bands (0.1–0.2 and 0.2–0.5 kHz). Air-leak effects were observed in frequencies up to 10 kHz, but their effects above 1 kHz were unpredictable. These high-frequency air leaks were larger in brass cavities than in ear canals. Each of the four predictor variables

  10. Air-leak effects on ear-canal acoustic absorbance.

    PubMed

    Groon, Katherine A; Rasetshwane, Daniel M; Kopun, Judy G; Gorga, Michael P; Neely, Stephen T

    2015-01-01

    Accurate ear-canal acoustic measurements, such as wideband acoustic admittance, absorbance, and otoacoustic emissions, require that the measurement probe be tightly sealed in the ear canal. Air leaks can compromise the validity of the measurements, interfere with calibrations, and increase variability. There are no established procedures for determining the presence of air leaks or criteria for what size leak would affect the accuracy of ear-canal acoustic measurements. The purpose of this study was to determine ways to quantify the effects of air leaks and to develop objective criteria to detect their presence. Air leaks were simulated by modifying the foam tips that are used with the measurement probe through insertion of thin plastic tubing. To analyze the effect of air leaks, acoustic measurements were taken with both modified and unmodified foam tips in brass-tube cavities and human ear canals. Measurements were initially made in cavities to determine the range of critical leaks. Subsequently, data were collected in ears of 21 adults with normal hearing and normal middle-ear function. Four acoustic metrics were used for predicting the presence of air leaks and for quantifying these leaks: (1) low-frequency admittance phase (averaged over 0.1-0.2 kHz), (2) low-frequency absorbance, (3) the ratio of compliance volume to physical volume (CV/PV), and (4) the air-leak resonance frequency. The outcome variable in this analysis was the absorbance change (Δabsorbance), which was calculated in eight frequency bands. The trends were similar for both the brass cavities and the ear canals. ΔAbsorbance generally increased with air-leak size and was largest for the lower frequency bands (0.1-0.2 and 0.2-0.5 kHz). Air-leak effects were observed in frequencies up to 10 kHz, but their effects above 1 kHz were unpredictable. These high-frequency air leaks were larger in brass cavities than in ear canals. Each of the four predictor variables exhibited consistent dependence on
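
    Two of the four predictor metrics listed above can be written down from standard low-frequency acoustics. The sketch below shows plausible definitions of the band-averaged admittance phase and the compliance-to-physical-volume ratio, assuming the rigid-cavity relation Y = jωV/(ρc²); the paper's exact computations (and its resonance-frequency metric) may differ, and the function and variable names are illustrative.

      import numpy as np

      RHO = 1.20   # air density, kg/m^3 (room-temperature assumption)
      C = 343.0    # speed of sound, m/s

      def air_leak_metrics(freqs_hz, Y, physical_volume_m3):
          """Band-averaged admittance phase and CV/PV ratio over 0.1-0.2 kHz.

          freqs_hz : frequency vector (Hz); Y : complex acoustic admittance
          physical_volume_m3 : acoustically estimated canal volume
          """
          lo = (freqs_hz >= 100.0) & (freqs_hz <= 200.0)
          phase_deg = np.degrees(np.angle(Y[lo])).mean()                # metric (1)
          w = 2.0 * np.pi * freqs_hz[lo]
          compliance_volume = np.mean(np.imag(Y[lo]) * RHO * C**2 / w)  # equivalent volume
          cv_pv = compliance_volume / physical_volume_m3                # metric (3)
          return phase_deg, cv_pv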

  11. Acoustic Immittance, Absorbance, and Reflectance in the Human Ear Canal

    PubMed Central

    Rosowski, John J.; Wilber, Laura Ann

    2015-01-01

    Ear canal measurements of acoustic immittance (a term that groups impedance and its inverse, admittance) and the related quantities of acoustic reflectance and power absorbance have been used to assess auditory function and aid in the differential diagnosis of conductive hearing loss for over 50 years. The change in such quantities after stimulation of the acoustic reflex also has been used in diagnosis. In this article, we define these quantities, describe how they are commonly measured, and discuss appropriate calibration procedures and standards necessary for accurate immittance/reflectance measurements. PMID:27516708
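
    For reference, these quantities are built on the standard plane-wave definitions (the article itself gives the formal definitions and calibration requirements):

      \[
      R(f)=\frac{Z_{\mathrm{in}}(f)-Z_{0}}{Z_{\mathrm{in}}(f)+Z_{0}},\qquad
      \mathcal{A}(f)=1-|R(f)|^{2},\qquad
      Z_{0}=\frac{\rho c}{S},
      \]

    where Z_in is the acoustic input impedance measured in the canal, Z_0 the characteristic impedance of a canal of cross-sectional area S, and R the pressure reflectance; the absorbance is the fraction of incident power not reflected.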

  12. Systems and methods for biometric identification using the acoustic properties of the ear canal

    DOEpatents

    Bouchard, Ann Marie; Osbourn, Gordon Cecil

    1998-01-01

    The present invention teaches systems and methods for verifying or recognizing a person's identity based on measurements of the acoustic response of the individual's ear canal. The system comprises an acoustic emission device, which emits an acoustic source signal s(t), designated by a computer, into the ear canal of an individual, and an acoustic response detection device, which detects the acoustic response signal f(t). A computer digitizes the response (detected) signal f(t) and stores the data. Computer-implemented algorithms analyze the response signal f(t) to produce ear-canal feature data. The ear-canal feature data obtained during enrollment is stored on the computer, or some other recording medium, to compare the enrollment data with ear-canal feature data produced in a subsequent access attempt, to determine if the individual has previously been enrolled. The system can also be adapted for remote access applications.
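
    The enrollment/verification flow described above can be illustrated with a generic transfer-function feature pipeline. This is only a sketch of the general idea (estimate the canal's response to the emitted signal and summarize it in frequency bands); it does not reproduce the patent's actual algorithms, and all names here are illustrative.

      import numpy as np

      def ear_canal_features(s, f, n_bands=32):
          """Summarize the ear-canal acoustic response as a feature vector.

          s : emitted source signal s(t); f : recorded response f(t).
          Assumes s and f are time-aligned and of equal length.
          """
          H = np.fft.rfft(f) / (np.fft.rfft(s) + 1e-12)   # rough transfer-function estimate
          bands = np.array_split(np.abs(H), n_bands)      # coarse spectral summary
          feat = np.array([b.mean() for b in bands])
          return feat / (np.linalg.norm(feat) + 1e-12)

      def match_score(feat_enrolled, feat_probe):
          """Cosine similarity between enrollment and access-attempt features."""
          return float(np.dot(feat_enrolled, feat_probe))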

  13. Systems and methods for biometric identification using the acoustic properties of the ear canal

    DOEpatents

    Bouchard, A.M.; Osbourn, G.C.

    1998-07-28

    The present invention teaches systems and methods for verifying or recognizing a person's identity based on measurements of the acoustic response of the individual's ear canal. The system comprises an acoustic emission device, which emits an acoustic source signal s(t), designated by a computer, into the ear canal of an individual, and an acoustic response detection device, which detects the acoustic response signal f(t). A computer digitizes the response (detected) signal f(t) and stores the data. Computer-implemented algorithms analyze the response signal f(t) to produce ear-canal feature data. The ear-canal feature data obtained during enrollment is stored on the computer, or some other recording medium, to compare the enrollment data with ear-canal feature data produced in a subsequent access attempt, to determine if the individual has previously been enrolled. The system can also be adapted for remote access applications. 5 figs.

  14. Contralateral Occlusion Test: The effect of external ear canal occlusion on hearing thresholds.

    PubMed

    Reis, Luis Roque; Fernandes, Paulo; Escada, Pedro

    Bedside testing with tuning forks may decrease turnaround time and improve decision making for a quick qualitative assessment of hearing loss. The purpose of this study was to quantify the effects of ear canal occlusion on hearing, in order to decide which tuning fork frequency is more appropriate for quantifying hearing loss with the Contralateral Occlusion Test. Twenty normal-hearing adults (forty ears) underwent sound field pure tone audiometry with and without ear canal occlusion. Each ear was tested at the standard frequencies. The contralateral ear was suppressed by masking. Ear occlusion was performed by two examiners. Participants aged between 21 and 30 years (25.6 ± 3.03 years) showed an increase in hearing thresholds with increasing frequency, from 19.94 dB (250 Hz) to 39.25 dB (2000 Hz). The threshold difference between occluded and unoccluded conditions was statistically significant and increased from 10.69 dB (250 Hz) to 32.12 dB (2000 Hz). There were no statistically significant differences according to gender or between the examiners. The occlusion effect increased the hearing thresholds and became more evident at higher frequencies. The occlusion method as performed demonstrated reproducibility. In the Contralateral Occlusion Test, 256 Hz or 512 Hz tuning forks should be used for diagnosis of mild hearing loss, and a 2048 Hz tuning fork should be used for moderate hearing loss. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.

  15. Inner-ear sound pressures near the base of the cochlea in chinchilla: Further investigation

    PubMed Central

    Ravicz, Michael E.; Rosowski, John J.

    2013-01-01

    The middle-ear pressure gain GMEP, the ratio of sound pressure in the cochlear vestibule PV to sound pressure at the tympanic membrane PTM, is a descriptor of middle-ear sound transfer and the cochlear input for a given stimulus in the ear canal. GMEP and the cochlear partition differential pressure near the cochlear base ΔPCP, which determines the stimulus for cochlear partition motion and has been linked to hearing ability, were computed from simultaneous measurements of PV, PTM, and the sound pressure in scala tympani near the round window PST in chinchilla. GMEP magnitude was approximately 30 dB between 0.1 and 10 kHz and decreased sharply above 20 kHz, which is not consistent with an ideal transformer or a lossless transmission line. The GMEP phase was consistent with a roughly 50-μs delay between PV and PTM. GMEP was little affected by the inner-ear modifications necessary to measure PST. GMEP is a good predictor of ΔPCP at low and moderate frequencies where PV ⪢ PST but overestimates ΔPCP above a few kilohertz where PV ≈ PST. The ratio of PST to PV provides insight into the distribution of sound pressure within the cochlear scalae. PMID:23556590
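
    In the notation of the abstract (written here with explicit subscripts), the two derived quantities are

      \[
      G_{\mathrm{MEP}}(f)=\frac{P_{V}(f)}{P_{TM}(f)},\qquad
      \Delta P_{CP}(f)=P_{V}(f)-P_{ST}(f),
      \]

    and a pure delay of τ ≈ 50 μs corresponds to a phase that falls linearly with frequency by f·τ ≈ 0.05 cycles per kilohertz, which is the sense in which the GMEP phase is "consistent with a roughly 50-μs delay."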

  16. Cartilage conduction is characterized by vibrations of the cartilaginous portion of the ear canal.

    PubMed

    Nishimura, Tadashi; Hosoi, Hiroshi; Saito, Osamu; Miyamae, Ryosuke; Shimokura, Ryota; Yamanaka, Toshiaki; Kitahara, Tadashi; Levitt, Harry

    2015-01-01

    Cartilage conduction (CC) is a new form of sound transmission which is induced by a transducer being placed on the aural cartilage. Although the conventional forms of sound transmission to the cochlea are classified into air or bone conduction (AC or BC), previous study demonstrates that CC is not classified into AC or BC (Laryngoscope 124: 1214-1219). Next interesting issue is whether CC is a hybrid of AC and BC. Seven volunteers with normal hearing participated in this experiment. The threshold-shifts by water injection in the ear canal were measured. AC, BC, and CC thresholds at 0.5-4 kHz were measured in the 0%-, 40%-, and 80%-water injection conditions. In addition, CC thresholds were also measured for the 20%-, 60%-, 100%-, and overflowing-water injection conditions. The contributions of the vibrations of the cartilaginous portion were evaluated by the threshold-shifts. For AC and BC, the threshold-shifts by the water injection were 22.6-53.3 dB and within 14.9 dB at the frequency of 0.5-4 kHz, respectively. For CC, when the water was filled within the bony portion, the thresholds were elevated to the same degree as AC. When the water was additionally injected to reach the cartilaginous portion, the thresholds at 0.5 and 1 kHz dramatically decreased by 27.4 and 27.5 dB, respectively. In addition, despite blocking AC by the injected water, the CC thresholds in force level were remarkably lower than those for BC. The vibration of the cartilaginous portion contributes to the sound transmission, particularly in the low frequency range. Although the airborne sound is radiated into the ear canal in both BC and CC, the mechanism underlying its generation is different between them. CC generates airborne sound in the canal more efficiently than BC. The current findings suggest that CC is not a hybrid of AC and BC.

  17. Cartilage Conduction Is Characterized by Vibrations of the Cartilaginous Portion of the Ear Canal

    PubMed Central

    Nishimura, Tadashi; Hosoi, Hiroshi; Saito, Osamu; Miyamae, Ryosuke; Shimokura, Ryota; Yamanaka, Toshiaki; Kitahara, Tadashi; Levitt, Harry

    2015-01-01

    Cartilage conduction (CC) is a new form of sound transmission which is induced by a transducer being placed on the aural cartilage. Although the conventional forms of sound transmission to the cochlea are classified into air or bone conduction (AC or BC), previous study demonstrates that CC is not classified into AC or BC (Laryngoscope 124: 1214–1219). Next interesting issue is whether CC is a hybrid of AC and BC. Seven volunteers with normal hearing participated in this experiment. The threshold-shifts by water injection in the ear canal were measured. AC, BC, and CC thresholds at 0.5–4 kHz were measured in the 0%-, 40%-, and 80%-water injection conditions. In addition, CC thresholds were also measured for the 20%-, 60%-, 100%-, and overflowing-water injection conditions. The contributions of the vibrations of the cartilaginous portion were evaluated by the threshold-shifts. For AC and BC, the threshold-shifts by the water injection were 22.6–53.3 dB and within 14.9 dB at the frequency of 0.5–4 kHz, respectively. For CC, when the water was filled within the bony portion, the thresholds were elevated to the same degree as AC. When the water was additionally injected to reach the cartilaginous portion, the thresholds at 0.5 and 1 kHz dramatically decreased by 27.4 and 27.5 dB, respectively. In addition, despite blocking AC by the injected water, the CC thresholds in force level were remarkably lower than those for BC. The vibration of the cartilaginous portion contributes to the sound transmission, particularly in the low frequency range. Although the airborne sound is radiated into the ear canal in both BC and CC, the mechanism underlying its generation is different between them. CC generates airborne sound in the canal more efficiently than BC. The current findings suggest that CC is not a hybrid of AC and BC. PMID:25768088

  18. The Effect of Superior Semicircular Canal Dehiscence on Intracochlear Sound Pressures

    NASA Astrophysics Data System (ADS)

    Nakajima, Hideko Heidi; Pisano, Dominic V.; Merchant, Saumil N.; Rosowski, John J.

    2011-11-01

    Semicircular canal dehiscence (SCD) is a pathological opening in the bony wall of the inner ear that can result in conductive hearing loss. The hearing loss is variable across patients, and the precise mechanism and source of variability is not fully understood. We use intracochlear sound pressure measurements in cadaveric preparations to study the effects of SCD size. Simultaneous measurement of basal intracochlear sound pressures in scala vestibuli (SV) and scala tympani (ST) quantifies the complex differential pressure across the cochlear partition, the stimulus that excites the partition. Sound-induced pressures in SV and ST, as well as stapes velocity and ear-canal pressure are measured simultaneously for various sizes of SCD followed by SCD patching. At low frequencies (<600 Hz) our results show that SCD decreases the pressure in both SV and ST, as well as differential pressure, and these effects become more pronounced as dehiscence size is increased. For frequencies above 1 kHz, the smallest pinpoint dehiscence can have the larger effect on the differential pressure in some ears. These effects due to SCD are reversible by patching the dehiscence.

  19. External and middle ear sound pressure distribution and acoustic coupling to the tympanic membrane

    PubMed Central

    Bergevin, Christopher; Olson, Elizabeth S.

    2014-01-01

    Sound energy is conveyed to the inner ear by the diaphanous, cone-shaped tympanic membrane (TM). The TM moves in a complex manner and transmits sound signals to the inner ear with high fidelity, pressure gain, and a short delay. Miniaturized sensors allowing high spatial resolution in small spaces and sensitivity to high frequencies were used to explore how pressure drives the TM. Salient findings are: (1) A substantial pressure drop exists across the TM; it varies with frequency from ∼10 to 30 dB. It thus appears reasonable to approximate the drive to the TM as being defined solely by the pressure in the ear canal (EC) close to the TM. (2) Within the middle-ear cavity (MEC), sound pressure could vary spatially by more than 20 dB, and the MEC pressure at certain locations/frequencies was as large as in the EC. (3) Spatial variations in pressure along the TM surface on the EC side were typically less than 5 dB up to 50 kHz. Larger surface variations were observed on the MEC side. PMID:24606269

  20. Surgical management of 2 different presentations of ear canal atresia in dogs

    PubMed Central

    Béraud, Romain

    2012-01-01

    A 6-year-old French spaniel and a 14-month-old German shepherd dog were diagnosed with ear canal atresia. Based on presentation, computed tomography, and auditory function evaluation, the first dog underwent excision of the horizontal ear canal and bulla curettage, and the second underwent re-anastomosis of the vertical canal to the external meatus. Both dogs had successful outcomes. PMID:23024390

  1. Ear-Canal Reflectance, Umbo Velocity and Tympanometry in Normal Hearing Adults

    PubMed Central

    Rosowski, John J; Nakajima, Hideko H.; Hamade, Mohamad A.; Mafoud, Lorice; Merchant, Gabrielle R.; Halpin, Christopher F.; Merchant, Saumil N.

    2011-01-01

    Objective: This study compares measurements of ear-canal reflectance (ECR) to other objective measurements of middle-ear function, including audiometry, umbo velocity (VU), and tympanometry, in a population of strictly defined normal-hearing ears. Design: Data were prospectively gathered from 58 ears of 29 normal-hearing subjects, 16 female and 13 male, aged 22–64 years. Subjects met all of the following criteria to be considered as having normal hearing: (1) no history of significant middle-ear disease; (2) no history of otologic surgery; (3) normal tympanic membrane (TM) on otoscopy; (4) pure-tone audiometric thresholds of 20 dB HL or better for 0.25–8 kHz; (5) air-bone gaps no greater than 15 dB at 0.25 kHz and 10 dB for 0.5–4 kHz; (6) normal, type-A peaked tympanograms; (7) two "normal" ears (as defined by these criteria). Measurements included pure-tone audiometry for 0.25–8 kHz, standard 226 Hz tympanometry, ear-canal reflectance (ECR) for 0.2–6 kHz at 60 dB SPL using the Mimosa Acoustics HearID system, and umbo velocity (VU) for 0.3–6 kHz at 70–90 dB SPL using the HLV-1000 laser Doppler vibrometer (Polytec Inc.). Results: Mean power reflectance (|ECR|^2) was near 1.0 at 0.2–0.3 kHz, decreased to a broad minimum of 0.3 to 0.4 between 1 and 4 kHz, and then sharply increased to almost 0.8 by 6 kHz. The mean pressure-reflectance phase angle (∠ECR) plotted on a linear frequency scale showed a group delay of approximately 0.1 ms for 0.2–6 kHz. Small but significant differences were observed in |ECR|^2 at the lowest frequencies between right and left ears, and between males and females at 4 kHz. |ECR|^2 decreased with age, but the decrease reached significance only at 1 kHz. Our ECR measurements were generally similar to previously published reports. Highly significant negative correlations were found between |ECR|^2 and VU for frequencies below 1 kHz. Significant correlations were also found between the tympanometrically determined peak
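
    Two of the derived quantities reported here, the power reflectance |ECR|^2 and the group delay of the reflectance phase, follow directly from the complex pressure reflectance. A minimal Python sketch of those two computations (not the HearID system's own processing):

        import numpy as np

        def power_reflectance(R):
            """|ECR|^2 from the complex pressure reflectance."""
            return np.abs(R) ** 2

        def group_delay(R, f_hz):
            """Group delay (s) from the unwrapped reflectance phase vs. frequency."""
            phase = np.unwrap(np.angle(R))
            return -np.gradient(phase, f_hz) / (2 * np.pi)

        # Toy check: a rigid termination 25 mm away gives |R|^2 ~ 1 and a ~0.146 ms round-trip delay.
        f = np.linspace(200, 6000, 200)            # Hz
        c, L = 343.0, 0.025                        # speed of sound (m/s), distance (m)
        R = np.exp(-2j * np.pi * f * 2 * L / c)    # pure propagation delay, magnitude 1
        print(power_reflectance(R)[:3], group_delay(R, f)[0])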

  2. Equivalent Ear Canal Volumes in Children Pre- and Post-Tympanostomy Tube Insertion.

    ERIC Educational Resources Information Center

    Shanks, Janet E.; And Others

    1992-01-01

    Evaluation of preoperative and postoperative equivalent ear canal volume measures on 334 children (ages 6 weeks to 6.7 years) with chronic otitis media with effusion found that the determination could be made very accurately for children 4 years and older. Criterion values for tympanic membrane perforation and preoperative and postoperative…

  3. Water used to visualize and remove hidden foreign bodies from the external ear canal.

    PubMed

    Peltola, T J; Saarento, R

    1992-02-01

    Small foreign bodies lodged anteriorly in the tympanic sulcus are usually not visible, due to the curve of the external ear canal. Such objects can be seen with the aid of an otomicroscope and micromirror or with an endoscope, and removed by irrigation. If irrigation fails, epithelial migration on the tympanic membrane may remove lodged foreign bodies, although this may take months. Our new method, which uses water to locate small objects lodged in the tympanic sulcus, includes irrigation of the ear, adjustment of the water level to the middle curve of the external ear canal, and use of the water surface as a concave lens, making the tympanic sulcus visible. With otomicroscopy a curved ear probe can then be used to remove lodged foreign bodies from behind the curve.

  4. Chinchilla middle-ear admittance and sound power: High-frequency estimates and effects of inner-ear modifications

    PubMed Central

    Ravicz, Michael E.; Rosowski, John J.

    2012-01-01

    The middle-ear input admittance relates sound power into the middle ear (ME) and sound pressure at the tympanic membrane (TM). ME input admittance was measured in the chinchilla ear canal as part of a larger study of sound power transmission through the ME into the inner ear. The middle ear was open, and the inner ear was intact or modified with small sensors inserted into the vestibule near the cochlear base. A simple model of the chinchilla ear canal, based on ear canal sound pressure measurements at two points along the canal and an assumption of plane-wave propagation, enables reliable estimates of YTM, the ME input admittance at the TM, from the admittance measured relatively far from the TM. YTM appears valid at frequencies as high as 17 kHz, a much higher frequency than previously reported. The real part of YTM decreases with frequency above 2 kHz. Effects of the inner-ear sensors (necessary for inner ear power computation) were small and generally limited to frequencies below 3 kHz. Computed power reflectance was ∼0.1 below 3.5 kHz, lower than with an intact ME below 2.5 kHz, and nearly 1 above 16 kHz. PMID:23039439
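
    Under the plane-wave assumption used here, power reflectance can be computed from the middle-ear input admittance and the characteristic admittance of the canal via the standard relation R = (Y0 - YTM)/(Y0 + YTM). A hedged Python sketch; the canal area and admittance values below are hypothetical, not the chinchilla data:

        import numpy as np

        def power_reflectance_from_admittance(Y_tm, area_m2, rho=1.21, c=343.0):
            """Plane-wave power reflectance from the middle-ear input admittance at the TM.

            Y_tm    : complex acoustic admittance (m^3/s/Pa)
            area_m2 : ear-canal cross-sectional area at the measurement plane (m^2)
            """
            Y0 = area_m2 / (rho * c)              # characteristic admittance of the canal
            R = (Y0 - Y_tm) / (Y0 + Y_tm)         # pressure reflectance
            return np.abs(R) ** 2

        # Illustrative values only (hypothetical ~3 mm diameter canal, YTM = 0.5 * Y0):
        area = np.pi * (1.5e-3) ** 2
        print(power_reflectance_from_admittance(0.5 * area / (1.21 * 343.0), area))  # ~0.11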

  5. Is Malassezia nana the main species in horses' ear canal microbiome?

    PubMed

    Aldrovandi, Ana Lúcia; Osugui, Lika; Acqua Coutinho, Selene Dall'

    2016-01-01

    The objective of this study was to genotypically characterize Malassezia spp. isolated from the external ear canal of healthy horses. Fifty-five adult horses of different breeds, 39 (70.9%) males and 16 (29.1%) females, were studied. External ear canals were cleaned and a sterile cotton swab was introduced to collect cerumen. A total of 110 samples were cultured on Dixon medium and incubated at 32°C for up to 15 days. Macro- and micromorphology and phenotypic identification were performed. DNA was extracted, strains were subjected to the polymerase chain reaction (PCR), and the products obtained were analyzed by restriction fragment length polymorphism (RFLP) using the restriction enzymes BstCI and HhaI. Strains were also sequenced in the 26S rDNA D1/D2 and ITS1-5.8S-ITS2 rDNA regions. Malassezia spp. were isolated from 33/55 (60%) animals and 52/110 (47%) ear canals. No growth on Sabouraud dextrose agar was observed, confirming the lipid dependence of all strains. PCR-RFLP permitted the molecular identification of Malassezia nana (42/52, 81%) and Malassezia slooffiae (10/52, 19%), and sequencing confirmed the RFLP identification. It was surprising that M. nana represented over 80% of the strains and that no Malassezia equina was isolated in this study, contrary to expectation. Copyright © 2016 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  6. Occurrence and distribution of Malassezia species on skin and external ear canal of horses.

    PubMed

    Shokri, Hojjatollah

    2016-01-01

    The aim of this study was to investigate the prevalence of Malassezia species on the body skin and in the external ear canal of healthy horses. Samples were obtained by scraping the skin surface of the nose, groin and dorsum and by swabbing the external ear canal of 163 animals, and were then incubated on Sabouraud dextrose agar and modified Dixon agar. Malassezia species were isolated from 34.9% of horses. The percentages of Malassezia species were 64.3% for Arab, 35.7% for Persian, 35.4% for Thoroughbred and 27.1% for Turkmen breeds. The greatest abundance of Malassezia species was found in the external ear canal (47.7%, significantly different from the other sites), followed by the nose (26.3%), groin (15.8%) and dorsum (10.5%) (P < 0.05). A total of 57 strains from six Malassezia species were detected, with frequencies as follows: M. pachydermatis (33.3%), M. globosa (26.3%), M. sympodialis (14.1%), M. restricta (10.5%), M. obtusa (8.8%) and M. furfur (7%). The most commonly affected age group was 1-3 years (59.4%). This study confirmed that the cutaneous Malassezia microbiota in healthy horses varies by body site and age but not by breed or gender, with M. pachydermatis the most prevalent species on horse skin. © 2015 Blackwell Verlag GmbH.

  7. Stapes Displacement and Intracochlear Pressure in Response to Very High Level, Low Frequency Sounds

    PubMed Central

    Greene, Nathaniel T.; Jenkins, Herman A.; Tollin, Daniel J.; Easter, James R.

    2018-01-01

    The stapes is held in the oval window by the stapedial annular ligament (SAL), which restricts total peak-to-peak displacement of the stapes. Previous studies have suggested that for moderate (< 130 dB SPL) sound levels intracochlear pressure (PIC), measured at the base of the cochlea far from the basilar membrane, increases directly proportionally with stapes displacement (DStap), thus a current model of impulse noise exposure (the Auditory Hazard Assessment Algorithm for Humans, or AHAAH) predicts that peak PIC will vary linearly with DStap up to some saturation point. However, no direct tests of DStap, or of the relationship with PIC during such motion, have been performed during acoustic stimulation of the human ear. In order to examine the relationship between DStap and PIC to very high level sounds, measurements of DStap and PIC were made in cadaveric human temporal bones. Specimens were prepared by mastoidectomy and extended facial recess to expose the ossicular chain. Measurements of PIC were made in scala vestibuli (PSV) and scala tympani (PST), along with the SPL in the external auditory canal (PEAC), concurrently with laser Doppler vibrometry (LDV) measurements of stapes velocity (VStap). Stimuli were moderate (~100 dB SPL) to very high level (up to ~170 dB SPL), low frequency tones (20–2560 Hz). Both DStap and PSV increased proportionally with sound pressure level in the ear canal up to approximately ~150 dB SPL, above which both DStap and PSV showed a distinct deviation from proportionality with PEAC. Both DStap and PSV approached saturation: DStap at a value exceeding 150 μm, which is substantially higher than has been reported for small mammals, while PSV showed substantial frequency dependence in the saturation point. The relationship between PSV and DStap remained constant, and cochlear input impedance did not vary across the levels tested, consistent with prior measurements at lower sound levels. These results suggest that PSV sound pressure

  8. Stapes displacement and intracochlear pressure in response to very high level, low frequency sounds.

    PubMed

    Greene, Nathaniel T; Jenkins, Herman A; Tollin, Daniel J; Easter, James R

    2017-05-01

    The stapes is held in the oval window by the stapedial annular ligament (SAL), which restricts total peak-to-peak displacement of the stapes. Previous studies have suggested that for moderate (<130 dB SPL) sound levels intracochlear pressure (PIC), measured at the base of the cochlea far from the basilar membrane, increases directly proportionally with stapes displacement (DStap); thus a current model of impulse noise exposure (the Auditory Hazard Assessment Algorithm for Humans, or AHAAH) predicts that peak PIC will vary linearly with DStap up to some saturation point. However, no direct tests of DStap, or of the relationship with PIC during such motion, have been performed during acoustic stimulation of the human ear. In order to examine the relationship between DStap and PIC in response to very high level sounds, measurements of DStap and PIC were made in cadaveric human temporal bones. Specimens were prepared by mastoidectomy and extended facial recess to expose the ossicular chain. Measurements of PIC were made in scala vestibuli (PSV) and scala tympani (PST), along with the SPL in the external auditory canal (PEAC), concurrently with laser Doppler vibrometry (LDV) measurements of stapes velocity (VStap). Stimuli were moderate (∼100 dB SPL) to very high level (up to ∼170 dB SPL), low frequency tones (20-2560 Hz). Both DStap and PSV increased proportionally with sound pressure level in the ear canal up to approximately 150 dB SPL, above which both DStap and PSV showed a distinct deviation from proportionality with PEAC. Both DStap and PSV approached saturation: DStap at a value exceeding 150 μm, which is substantially higher than has been reported for small mammals, while PSV showed substantial frequency dependence in the saturation point. The relationship between PSV and DStap remained constant, and cochlear input impedance did not vary across the levels tested, consistent with prior measurements at lower sound levels. These

  9. Sheep as a large animal ear model: Middle-ear ossicular velocities and intracochlear sound pressure.

    PubMed

    Péus, Dominik; Dobrev, Ivo; Prochazka, Lukas; Thoele, Konrad; Dalbert, Adrian; Boss, Andreas; Newcomb, Nicolas; Probst, Rudolf; Röösli, Christof; Sim, Jae Hoon; Huber, Alexander; Pfiffner, Flurin

    2017-08-01

    Animals are frequently used for the development and testing of new hearing devices. Dimensions of the middle ear and cochlea differ significantly between humans and commonly used animals, such as rodents or cats. The sheep cochlea is anatomically more like the human cochlea in size and number of turns. This study investigated the middle-ear ossicular velocities and intracochlear sound pressure (ICSP) in sheep temporal bones, with the aim of characterizing the sheep as an experimental model for implantable hearing devices. Measurements were made on fresh sheep temporal bones. Velocity responses of the middle ear ossicles at the umbo, long process of the incus and stapes footplate were measured in the frequency range of 0.25-8 kHz using a laser Doppler vibrometer system. Results were normalized by the corresponding sound pressure level in the external ear canal (PEC). ICSPs in the scala vestibuli and scala tympani were then recorded sequentially with custom MEMS-based hydrophones while presenting identical acoustic stimuli. The sheep middle ear transmitted most effectively around 4.8 kHz, with a maximum stapes velocity of 0.2 mm/s/Pa. At the same frequency, the ICSP measurements in the scala vestibuli and tympani showed the maximum gain relative to PEC (24 dB and 5 dB, respectively). The greatest pressure difference across the cochlear partition occurred between 4 and 6 kHz. A comparison between the results of this study and human reference data showed middle-ear resonance and best cochlear sensitivity at higher frequencies in sheep. In summary, sheep can be an appropriate large animal model for research and development of implantable hearing devices. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The Effect of Superior Semicircular Canal Dehiscence on Intracochlear Sound Pressures

    PubMed Central

    Pisano, Dominic V.; Niesten, Marlien E.F.; Merchant, Saumil N.; Nakajima, Hideko Heidi

    2013-01-01

    Semicircular canal dehiscence (SCD) is a pathological opening in the bony wall of the inner ear that can result in conductive hearing loss. The hearing loss is variable across patients, and the precise mechanism and source of variability are not fully understood. Simultaneous measurements of basal intracochlear sound pressures in scala vestibuli (SV) and scala tympani (ST) enable quantification of the differential pressure across the cochlear partition, the stimulus that excites the cochlear partition. We used intracochlear sound pressure measurements in cadaveric preparations to study the effects of SCD size. Sound-induced pressures in SV and ST, as well as stapes velocity and ear-canal pressure were measured simultaneously for various sizes of SCD followed by SCD patching. Our results showed that at low frequencies (<600 Hz), SCD decreased the pressure in both SV and ST, as well as differential pressure, and these effects became more pronounced as dehiscence size was increased. Near 100 Hz, SV decreased about 10 dB for a 0.5 mm dehiscence and 20 dB for a 2 mm dehiscence, while ST decreased about 8 dB for a 0.5 mm dehiscence and 18 dB for a 2 mm dehiscence. Differential pressure decreased about 10 dB for a 0.5 mm dehiscence and about 20 dB for a 2 mm dehiscence at 100 Hz. In some ears, for frequencies above 1 kHz, the smallest pinpoint dehiscence had bigger effects on the differential pressure (10 dB decrease) than larger dehiscences (less than 10 dB decrease), suggesting larger hearing losses in this frequency range. These effects due to SCD were reversible by patching the dehiscence. We also showed that under certain circumstances such as SCD, stapes velocity is not related to how the ear can transduce sound across the cochlear partition because it is not directly related to the differential pressure, emphasizing that certain pathologies cannot be fully assessed by measurements such as stapes velocity. PMID:22814034

  11. Comparison of Ear-Canal Reflectance and Umbo Velocity in Patients with Conductive Hearing Loss

    NASA Astrophysics Data System (ADS)

    Merchant, Gabrielle R.; Nakajima, Hideko H.; Pisano, Dominic V.; Röösli, Christof; Hamade, Mohamad A.; Mafoud, Lorice; Halpin, Christopher F.; Merchant, Saumil N.; Rosowski, John J.

    2011-11-01

    Patients who present at hearing clinics with a conductive hearing loss (CHL) in the presence of an intact, healthy tympanic membrane create a unique challenge for otologists. While patient counseling, treatment options, and outcome vary with differing middle-ear pathologies, a non-invasive diagnostic that can differentiate between these pathologies does not currently exist. We evaluated the clinical utility and diagnostic accuracy of two non-invasive measures of middle-ear mechanics: ear-canal reflectance (ECR) and umbo velocity (VU).

  12. Identification of neonatal hearing impairment: ear-canal measurements of acoustic admittance and reflectance in neonates.

    PubMed

    Keefe, D H; Folsom, R C; Gorga, M P; Vohr, B R; Bulen, J C; Norton, S J

    2000-10-01

    1) To describe broad bandwidth measurements of acoustic admittance (Y) and energy reflectance (R) in the ear canals of neonates. 2) To describe a means for evaluating when a YR response is valid. 3) To describe the relations between these YR measurements and age, gender, left/right ear, and selected risk factors. YR responses were obtained at four test sites in well babies without risk indicators, well babies with at least one risk indicator, and graduates of neonatal intensive care units. YR responses were measured using a chirp stimulus at moderate levels over a frequency range from 250 to 8000 Hz. The system was calibrated based on measurements in a set of cylindrical tubes. The probe assembly was inserted in the ear canal of the neonate, and customized software was used for data acquisition. YR responses were measured in over 4000 ears, and half of the responses were used in exploratory data analyses. The particular YR variables chosen for analysis were energy reflectance, equivalent volume and acoustic conductance. Based on the view that unduly large negative equivalent volumes at low frequencies were physically impossible, it was concluded that approximately 13% of the YR responses showed evidence of improper probe seal in the ear canal. To test how these outliers influenced the overall pattern of YR responses, analyses were conducted both on the full data set (N = 2081) and the data set excluding outliers (N = 1825). The YR responses averaged over frequency varied with conceptional age (conception to date of test), gender, left/right ear, and selected risk factors; in all cases, significant effects were observed more frequently in the data set excluding outliers. After excluding outliers and controlling for conceptional age effects, the dichotomous risk factors accounting for the greatest variance in the YR responses were, in rank order, cleft lip and palate, aminoglycoside therapy, low birth weight, history of ventilation, and low APGAR scores. In separate
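
    One of the YR variables above, equivalent volume, can be obtained from the susceptance of the measured admittance, and an "impossibly" negative low-frequency value is the basis for the probe-seal check described. A hedged Python sketch of one common definition; the rejection threshold below is a placeholder, not the criterion used in the study:

        import numpy as np

        RHO, C = 1.21, 343.0   # air density (kg/m^3), speed of sound (m/s)

        def equivalent_volume(Y, f_hz):
            """Equivalent compliance volume (m^3) implied by the susceptance of admittance Y
            (acoustic admittance in m^3/s/Pa) at frequency f_hz."""
            return np.imag(Y) * RHO * C ** 2 / (2 * np.pi * f_hz)

        def bad_probe_seal(Y_low, f_low, threshold_cm3=-0.3):
            """Flag a response whose low-frequency equivalent volume is unphysically negative.

            threshold_cm3 is an assumed placeholder, not the study's actual cutoff.
            """
            v_cm3 = equivalent_volume(Y_low, f_low) * 1e6   # m^3 -> cm^3
            return v_cm3 < threshold_cm3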

  13. Characterizing the ear canal acoustic reflectance and impedance by pole-zero fitting

    PubMed Central

    Robinson, Sarah R.; Nguyen, Cac T.; Allen, Jont B.

    2013-01-01

    This study characterizes middle ear complex acoustic reflectance (CAR) and impedance by fitting poles and zeros to real-ear measurements. The goal of this work is to establish a quantitative connection between pole-zero locations and the underlying physical properties of CAR data. Most previous studies have analyzed CAR magnitude; while the magnitude accounts for reflected power, it does not encode latency information. Thus, an analysis that studies the real and imaginary parts of the data together could be more powerful. Pole-zero fitting of CAR data is examined using data compiled from various studies, dating back to Voss and Allen (1994). Recent CAR measurements were taken using a middle ear acoustic power analyzer (MEPA) system (HearID, Mimosa Acoustics), which makes complex acoustic impedance and reflectance measurements in the ear canal over the 0.2 to 6.0 kHz frequency range. Pole-zero fits to measurements over this range are achieved with an average RMS relative error of less than 3% using 12 poles. Factoring the reflectance fit into its all-pass and minimum-phase components approximates the effect of the ear canal, allowing for comparison across measurements. It was found that individual CAR magnitude variations for normal middle ears in the 1 to 4 kHz range often give rise to closely-placed pole-zero pairs, and that the locations of the poles and zeros in the s-plane may differ between normal and pathological middle ears. This study establishes a methodology for examining the physical and mathematical properties of CAR using a concise parametric model. Pole-zero modeling shows promise for precise parameterization of CAR data and for identification of middle ear pathologies. PMID:23524141
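
    A pole-zero model of the reflectance is a rational function of s = j2*pi*f, and fit quality is judged by an RMS relative error against the measured complex data. The Python sketch below only evaluates such a model and one reasonable error metric; the authors' actual fitting procedure and error definition may differ:

        import numpy as np

        def reflectance_model(f_hz, gain, zeros, poles):
            """Evaluate R(s) = gain * prod(s - z) / prod(s - p) on the s = j*2*pi*f axis."""
            s = 2j * np.pi * np.asarray(f_hz)
            num = np.prod([s - z for z in zeros], axis=0)
            den = np.prod([s - p for p in poles], axis=0)
            return gain * num / den

        def rms_relative_error(R_meas, R_fit):
            """RMS relative error between fitted and measured complex reflectance
            (the study reports < 3% on average with 12 poles)."""
            return np.sqrt(np.mean(np.abs(R_fit - R_meas) ** 2) / np.mean(np.abs(R_meas) ** 2))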

  14. Optical Measurement Of Sound Pressure

    NASA Technical Reports Server (NTRS)

    Trinh, Eugene H.; Gaspar, Mark; Leung, Emily W.

    1989-01-01

    Noninvasive technique does not disturb field it measures. Sound field deflects laser beam proportionally to its amplitude. Knife edge intercepts undeflected beam, allowing only deflected beam to reach photodetector. Apparatus calibrated by comparing output of photodetector with that of microphone. Optical technique valuable where necessary to measure in remote, inaccessible, or hostile environment or to avoid perturbation of measured region.

  15. Sound field separation with sound pressure and particle velocity measurements.

    PubMed

    Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-12-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
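
    The equivalent-source formulation described here amounts to a weighted least-squares problem: the measured pressure and particle velocity are modeled as sums of contributions from two sets of equivalent sources, one for each side of the array. A minimal single-layer pressure-velocity sketch in Python; the transfer matrices and the weighting rule are assumptions for illustration, not the paper's exact formulation:

        import numpy as np

        def separate_fields(p, v, Hp_out, Hp_in, Hv_out, Hv_in, w=None):
            """Least-squares split of measured pressure p and particle velocity v into
            outgoing and incoming equivalent-source contributions.

            Hp_*, Hv_* map hypothetical equivalent-source strengths to the pressure and
            velocity measurement points; w weights the velocity rows to balance the
            difference in magnitude between pressure and velocity.
            """
            if w is None:
                w = np.linalg.norm(p) / max(np.linalg.norm(v), 1e-30)
            A = np.block([[Hp_out, Hp_in],
                          [w * Hv_out, w * Hv_in]])
            b = np.concatenate([p, w * v])
            q, *_ = np.linalg.lstsq(A, b, rcond=None)
            n_out = Hp_out.shape[1]
            p_out = Hp_out @ q[:n_out]     # outgoing (source-side) pressure at the array
            p_in = Hp_in @ q[n_out:]       # incoming (disturbing) pressure at the array
            return p_out, p_in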

  16. Selective attention reduces physiological noise in the external ear canals of humans. I: Auditory attention

    PubMed Central

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring, or not requiring, selective auditory attention. Appended to each stimulus presentation, and included in the calculation of each nSFOAE response, was a 30-ms silent period that was used to estimate the level of the inherent physiological noise in the ear canals of our subjects during each behavioral condition. Physiological-noise magnitudes were higher (noisier) for all subjects in the inattention task, and lower (quieter) in the selective auditory-attention tasks. These noise measures initially were made at the frequency of our nSFOAE probe tone (4.0 kHz), but the same attention effects also were observed across a wide range of frequencies. We attribute the observed differences in physiological-noise magnitudes between the inattention and attention conditions to different levels of efferent activation associated with the differing attentional demands of the behavioral tasks. One hypothesis is that when the attentional demand is relatively great, efferent activation is relatively high, and a decrease in the gain of the cochlear amplifier leads to lower-amplitude cochlear activity, and thus a smaller measure of noise from the ear. PMID:24732069

  17. Pressurized transient otoacoustic emissions measured using click and chirp stimuli.

    PubMed

    Keefe, Douglas H; Patrick Feeney, M; Hunter, Lisa L; Fitzpatrick, Denis F; Sanford, Chris A

    2018-01-01

    Transient-evoked otoacoustic emission (TEOAE) responses were measured in normal-hearing adult ears over frequencies from 0.7 to 8 kHz, and analyzed with reflectance/admittance data to measure absorbed sound power and the tympanometric peak pressure (TPP). The mean TPP was close to ambient. TEOAEs were measured in the ear canal at ambient pressure, TPP, and fixed air pressures from 150 to -200 daPa. Both click and chirp stimuli were used to elicit TEOAEs, in which the incident sound pressure level was constant across frequency. TEOAE levels were similar at ambient and TPP, and for frequencies from 0.7 to 2.8 kHz decreased with increasing positive and negative pressures. At 4-8 kHz, TEOAE levels were larger at positive pressures. This asymmetry is possibly related to changes in mechanical transmission through the ossicular chain. The mean TEOAE group delay did not change with pressure, although small changes were observed in the mean instantaneous frequency and group spread. Chirp TEOAEs measured in an adult ear with Eustachian tube dysfunction and TPP of -165 daPa were more robust at TPP than at ambient. Overall, results demonstrate the feasibility and clinical potential of measuring TEOAEs at fixed pressures in the ear canal, which provide additional information relative to TEOAEs measured at ambient pressure.

  18. Underwater hearing and sound localization with and without an air interface.

    PubMed

    Shupak, Avi; Sharoni, Zohara; Yanir, Yoav; Keynan, Yoav; Alfie, Yechezkel; Halpern, Pinchas

    2005-01-01

    Underwater hearing acuity and sound localization are improved by the presence of an air interface around the pinnae and inside the external ear canals. Hearing threshold and the ability to localize sound sources are reduced underwater. The resonance frequency of the external ear is lowered when the external ear canal is filled with water, and the impedance-matching ability of the middle ear is significantly reduced due to elevation of the ambient pressure, the water-mass load on the tympanic membrane, and the addition of a fluid-air interface during submersion. Sound lateralization on land is largely explained by the mechanisms of interaural intensity differences and interaural temporal or phase differences. During submersion, these differences are largely lost due to the increase in underwater sound velocity and cancellation of the head's acoustic shadow effect because of the similarity between the impedance of the skull and the surrounding water. Ten scuba divers wearing a regular opaque face mask or an opaque ProEar 2000 (Safe Dive, Ltd., Hofit, Israel) mask that enables the presence of air at ambient pressure in and around the ear made a dive to a depth of 3 m in the open sea. Four underwater speakers arranged on the horizontal plane at 90-degree intervals and at a distance of 5 m from the diver were used for testing pure-tone hearing thresholds (PTHT), the reception threshold for the recorded sound of a rubber-boat engine, and sound localization. For sound localization, the sound of the rubber boat's engine was randomly delivered by one speaker at a time at 40 dB HL above the recorded sound of a rubber-boat engine, and the diver was asked to point to the sound source. The azimuth was measured by the diver's companion using a navigation board. Underwater PTHT with both masks were significantly higher for frequencies of 250 to 6000 Hz when compared with the thresholds on land (p <0.0001). No differences were found in the PTHT or the reception threshold for the

  19. Distribution of standing-wave errors in real-ear sound-level measurements.

    PubMed

    Richmond, Susan A; Kopun, Judy G; Neely, Stephen T; Tan, Hongyang; Gorga, Michael P

    2011-05-01

    Standing waves can cause measurement errors when sound-pressure level (SPL) measurements are performed in a closed ear canal, e.g., during probe-microphone system calibration for distortion-product otoacoustic emission (DPOAE) testing. Alternative calibration methods, such as forward-pressure level (FPL), minimize the influence of standing waves by calculating the forward-going sound waves separate from the reflections that cause errors. Previous research compared test performance (Burke et al., 2010) and threshold prediction (Rogers et al., 2010) using SPL and multiple FPL calibration conditions, and surprisingly found no significant improvements when using FPL relative to SPL, except at 8 kHz. The present study examined the calibration data collected by Burke et al. and Rogers et al. from 155 human subjects in order to describe the frequency location and magnitude of standing-wave pressure minima to see if these errors might explain trends in test performance. Results indicate that while individual results varied widely, pressure variability was larger around 4 kHz and smaller at 8 kHz, consistent with the dimensions of the adult ear canal. The present data suggest that standing-wave errors are not responsible for the historically poor (8 kHz) or good (4 kHz) performance of DPOAE measures at specific test frequencies.
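
    Forward-pressure level calibration removes the standing-wave error by keeping only the forward-going wave: under the usual plane-wave decomposition, the total pressure at the probe is P_total = P_fwd * (1 + R), where R is the pressure reflectance. A small Python illustration of how a pressure minimum corrupts SPL but not FPL (the values are hypothetical):

        import numpy as np

        P_REF = 20e-6  # Pa

        def forward_pressure_level(P_total, R):
            """Forward-pressure level (dB) from the total ear-canal pressure and
            the pressure reflectance at the probe: P_fwd = P_total / (1 + R)."""
            P_fwd = P_total / (1 + R)
            return 20 * np.log10(np.abs(P_fwd) / P_REF)

        # Near a pressure minimum (R ~ -0.8 at the probe), SPL under-reads by ~14 dB
        # while the forward-going component is unchanged:
        P_fwd_true = 1.0                      # Pa, hypothetical forward wave
        R = -0.8
        P_total = P_fwd_true * (1 + R)
        print(20 * np.log10(abs(P_total) / P_REF))    # SPL at the probe (~80 dB)
        print(forward_pressure_level(P_total, R))     # recovers the forward level (~94 dB)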

  20. PVDF-Based Piezoelectric Microphone for Sound Detection Inside the Cochlea: Toward Totally Implantable Cochlear Implants.

    PubMed

    Park, Steve; Guan, Xiying; Kim, Youngwan; Creighton, Francis Pete X; Wei, Eric; Kymissis, Ioannis John; Nakajima, Hideko Heidi; Olson, Elizabeth S

    2018-01-01

    We report the fabrication and characterization of a prototype polyvinylidene fluoride polymer-based implantable microphone for detecting sound inside gerbil and human cochleae. With the current configuration and amplification, the signal-to-noise ratios were sufficiently high for normally occurring sound pressures and frequencies (ear canal pressures >50-60 dB SPL and 0.1-10 kHz), though 10 to 20 dB poorer than for some hearing aid microphones. These results demonstrate the feasibility of the prototype devices as implantable microphones for the development of totally implantable cochlear implants. For patients, this will improve sound reception by utilizing the outer ear and will improve the use of cochlear implants.

  1. PVDF-Based Piezoelectric Microphone for Sound Detection Inside the Cochlea: Toward Totally Implantable Cochlear Implants

    PubMed Central

    Guan, Xiying; Kim, Youngwan; Creighton, Francis (Pete) X.; Wei, Eric; Kymissis, Ioannis (John); Nakajima, Hideko Heidi; Olson, Elizabeth S.

    2018-01-01

    We report the fabrication and characterization of a prototype polyvinylidene fluoride polymer-based implantable microphone for detecting sound inside gerbil and human cochleae. With the current configuration and amplification, the signal-to-noise ratios were sufficiently high for normally occurring sound pressures and frequencies (ear canal pressures >50–60 dB SPL and 0.1–10 kHz), though 10 to 20 dB poorer than for some hearing aid microphones. These results demonstrate the feasibility of the prototype devices as implantable microphones for the development of totally implantable cochlear implants. For patients, this will improve sound reception by utilizing the outer ear and will improve the use of cochlear implants. PMID:29732950

  2. Intracochlear pressure measurements during acoustic shock wave exposure.

    PubMed

    Greene, Nathaniel T; Alhussaini, Mohamed A; Easter, James R; Argo, Theodore F; Walilko, Tim; Tollin, Daniel J

    2018-05-19

    Injuries to the peripheral auditory system are among the most common results of high intensity impulsive acoustic exposure. Prior studies of high intensity sound transmission by the ossicular chain have relied upon measurements in animal models, measurements at more moderate sound levels (i.e., <130 dB SPL), and/or responses to steady-state noise. Here, we directly measure intracochlear pressure in human cadaveric temporal bones, with fiber optic pressure sensors placed in scala vestibuli (SV) and tympani (ST), during exposure to shock waves with peak positive pressures between ∼7 and 83 kPa. Eight full-cephalic human cadaver heads were exposed, face-on, to acoustic shock waves in a 45 cm diameter shock tube. Specimens were exposed to impulses with nominal peak overpressures of 7, 28, 55, & 83 kPa (171, 183, 189, & 192 dB pSPL), measured in the free field adjacent to the forehead. Specimens were prepared bilaterally by mastoidectomy and extended facial recess to expose the ossicular chain. Ear canal (EAC), middle ear, and intracochlear sound pressure levels were measured with fiber-optic pressure sensors. Surface-mounted sensors measured SPL and skull strain near the opening of each EAC and at the forehead. Measurements on the forehead showed incident peak pressures approximately twice those measured by adjacent free-field and EAC-entrance sensors, as expected based on the sensor orientation (normal vs. tangential to the shock-wave propagation). At 7 kPa, the EAC pressure showed gain, calculated from the frequency spectra, consistent with the ear-canal resonance, and the gain in the intracochlear pressures (normalized to the EAC pressure) was consistent with (though somewhat lower than) previously reported middle-ear transfer functions. Responses to higher-intensity impulses tended to show lower intracochlear gain relative to the EAC, suggesting that sound transmission efficiency along the ossicular chain is reduced at high intensities. Tympanic membrane
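
    The gains quoted here are ratios of frequency spectra, e.g., intracochlear pressure normalized to EAC pressure. A bare-bones Python stand-in for that kind of computation (a plain FFT ratio; the study's actual windowing and averaging are not specified here):

        import numpy as np

        def transfer_gain_db(p_ref, p_resp, fs):
            """Gain spectrum (dB) of p_resp relative to p_ref (e.g., intracochlear
            pressure re ear-canal pressure), both sampled at fs in Hz."""
            f = np.fft.rfftfreq(len(p_ref), 1.0 / fs)
            H = np.fft.rfft(p_resp) / (np.fft.rfft(p_ref) + 1e-30)
            return f, 20 * np.log10(np.abs(H) + 1e-30)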

  3. Middle ear polyps: results of traction avulsion after a lateral approach to the ear canal in 62 cats (2004-2014).

    PubMed

    Janssens, Sara Ds; Haagsman, Annika N; Ter Haar, Gert

    2017-08-01

    Objectives The objective of this study was to report the surgical outcome and complication rate of deep traction avulsion (TA) of feline aural inflammatory polyps after a lateral approach (LA) to the ear canal. Methods This was a retrospective analysis of data retrieved from an electronic database of 62 cats treated with TA after an LA (TALA) for removal of ear canal polyps. Long-term outcome was assessed via a telephone questionnaire survey with the owners. Results Domestic shorthair cats (48%) and Maine Coons (37%) were over-represented. The most common presenting clinical signs were otorrhoea, ear scratching and head shaking. Video-otoscopic examination confirmed a polypous mass in the ear canal in all patients. All 62 cats underwent TALA, with a mean surgical time of 33 mins for experienced surgeons (n = 4) and 48 mins (n = 12) for less experienced surgeons. The recurrence rate of polyp regrowth for experienced surgeons was 14.3% vs 35% for the less experienced surgeons. Postoperative complications included Horner's syndrome (11.5%) and facial nerve paralysis (3%). Otitis interna was not observed. Conclusions and relevance A lateral approach to the ear canal in combination with deep TA of an aural inflammatory polyp is an effective first-line technique that results in a low recurrence and complication rate.

  4. Sound pressure level in a municipal preschool

    PubMed Central

    Kemp, Adriana Aparecida Tahara; Delecrode, Camila Ribas; Guida, Heraldo Lorena; Ribeiro, André Knap; Cardoso, Ana Claúdia Vieira

    2013-01-01

    Aim: To evaluate the sound pressure level to which preschool students are exposed. Method: This was a prospective, quantitative, nonexperimental, and descriptive study. To achieve the aim of the study, an audio dosimeter was used. Sound pressure level (SPL) measurements were obtained for two age-based classrooms, Preschool I and II. The measurements were obtained over 4 days in 8-hour sessions, totaling 1920 minutes. Results: The measured SPL ranged from 40.6 dB(A) to 105.8 dB(A). The frequency spectrum of the SPL was concentrated between 500 Hz and 4000 Hz. The older children produced higher SPLs than the younger ones, and the levels varied according to the activity performed. Painting and writing were the quietest activities, while free-activity periods and games were the noisiest. Conclusion: The SPLs measured at the preschool exceeded the maximum level permitted by the reference standards. Therefore, the implementation of actions that aim to minimize the negative impact of noise in this environment is essential. PMID:25992013
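
    Dosimeter samples in dB(A) are usually summarized by an equivalent continuous level, which is an energy average rather than an arithmetic one. A short worked example in Python, using arbitrary sample values within the range reported above (not the study's data):

        import numpy as np

        def leq_db(spl_samples_db):
            """Equivalent continuous level Leq from a series of SPL samples in dB(A)."""
            return 10 * np.log10(np.mean(10 ** (np.asarray(spl_samples_db) / 10.0)))

        print(leq_db([40.6, 70.0, 85.0, 105.8]))  # dominated by the loudest samples, ~100 dB(A)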

  5. Automated analysis of blood pressure measurements (Korotkov sound)

    NASA Technical Reports Server (NTRS)

    Golden, D. P.; Hoffler, G. W.; Wolthuis, R. A.

    1972-01-01

    Automatic system for noninvasive measurements of arterial blood pressure is described. System uses Korotkov sound processor logic ratios to identify Korotkov sounds. Schematic diagram of system is provided to show components and method of operation.

  6. The frequency range of TMJ sounds.

    PubMed

    Widmalm, S E; Williams, W J; Djurdjanovic, D; McKay, D C

    2003-04-01

    There are conflicting opinions about the frequency range of temporomandibular joint (TMJ) sounds. Some authors claim that the upper limit is about 650 Hz. The aim was to test the hypothesis that TMJ sounds may contain frequencies well above 650 Hz but that significant amounts of their energy are lost if the vibrations are recorded using contact sensors and/or travel far through the head tissues. Time-frequency distributions of 172 TMJ clickings (three subjects) were compared between recordings with one microphone in the ear canal and a skin contact transducer above the clicking joint and between recordings from two microphones, one in each ear canal. The energy peaks of the clickings recorded with a microphone in the ear canal on the clicking side were often well above 650 Hz and always in a significantly higher area (range 117-1922 Hz, P < 0.05 or lower) than in recordings obtained with contact sensors (range 47-375 Hz) or in microphone recordings from the opposite ear canal (range 141-703 Hz). Future studies are required to establish normative frequency range values of TMJ sounds but need methods also capable of recording the high frequency vibrations.

  7. Scala vestibuli pressure and three-dimensional stapes velocity measured in direct succession in gerbil.

    PubMed

    Decraemer, W F; de La Rochefoucauld, O; Dong, W; Khanna, S M; Dirckx, J J J; Olson, E S

    2007-05-01

    It has been shown that the mode of vibration of the stapes has a predominant piston component, but rotations producing tilt of the footplate are also present; the tilt and piston components vary with frequency. Separately, it has been shown that the pressure gain between the ear canal and scala vestibuli is a remarkably flat and smooth function of frequency. Is tilt functional, contributing to the pressure in the scala vestibuli and helping to smooth the pressure gain? In experiments on gerbil, the pressure in the scala vestibuli directly behind the footplate was measured while simultaneously recording the pressure produced by the sound source in the ear canal. The three-dimensional motion of the stapes was then measured in the same animal. By combining the vibration measurements with an anatomical shape measurement from a micro-computed-tomography (micro-CT) scan, the piston-like motion and the tilt of the footplate were calculated and correlated with the corresponding scala vestibuli pressure curves. No evidence was found for the hypothesis that dips in the piston velocity are filled by peaks in tilt in a systematic way to produce a smooth middle-ear pressure-gain function. The present data also allowed calculation of the individual cochlear input impedances.
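
    A common way to separate piston and tilt is to fit a plane to the normal velocities measured at several points on the footplate: the plane's value at the centroid is the piston component and its gradients are the tilt (rocking) rates. The Python sketch below is a generic decomposition of this kind, not the exact 3-D reconstruction used in the study:

        import numpy as np

        def piston_and_tilt(xy_m, v_normal):
            """Decompose footplate normal velocities measured at several points into
            a piston component and two tilt (rocking) components.

            xy_m     : (N, 2) in-plane coordinates of the measurement points (m)
            v_normal : (N,) complex normal velocities at those points (m/s)

            Fits v(x, y) ~ v0 + a*x + b*y about the centroid; v0 is the piston
            velocity and a, b are tilt rates (rad/s) about the in-plane axes.
            """
            xy = np.asarray(xy_m, float)
            xc = xy - xy.mean(axis=0)                        # centre on the footplate centroid
            A = np.column_stack([np.ones(len(xc)), xc[:, 0], xc[:, 1]])
            coeff, *_ = np.linalg.lstsq(A, np.asarray(v_normal), rcond=None)
            v_piston, dv_dx, dv_dy = coeff                   # dv_dx, dv_dy are the tilt rates
            return v_piston, dv_dx, dv_dy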

  8. The EarLens System: New Sound Transduction Methods

    PubMed Central

    Perkins, Rodney; Fay, Jonathan P.; Rucker, Paul; Rosen, Micha; Olson, Lisa; Puria, Sunil

    2010-01-01

    The hypothesis is tested that an open-canal hearing device, with a microphone in the ear canal, can be designed to provide amplification over a wide bandwidth and without acoustic feedback. In the design under consideration, a transducer consisting of a thin silicone platform with an embedded magnet is placed directly on the tympanic membrane. Sound picked up by a microphone in the ear canal, including sound-localization cues thought to be useful for speech perception in noisy environments, is processed and amplified, and then used to drive a coil near the tympanic-membrane transducer. The perception of sound results from the vibration of the transducer in response to the electromagnetic field produced by the coil. Sixteen subjects (ranging from normal-hearing to moderately hearing-impaired) wore this transducer for up to a ten-month period, and were monitored for any adverse reactions. Three key functional characteristics were measured: 1) the maximum equivalent pressure output (MEPO) of the transducer; 2) the feedback gain margin (GM), which describes the maximum allowable gain before feedback occurs; and 3) the tympanic-membrane damping effect (DTM), which describes the change in hearing level due to placement of the transducer on the eardrum. Results indicate that the tympanic-membrane transducer remains in place and is well tolerated. The system can produce sufficient output to reach threshold for those with as much as 60 dB HL of hearing impairment for up to 8 kHz in 86% of the study population, and up to 11.2 kHz in 50% of the population. The feedback gain margin is on average 30 dB except at the ear canal resonance frequencies of 3 and 9 kHz, where the average was reduced to 12 dB and 23 dB, respectively. The average value of DTM is close to 0 dB everywhere except in the 2–4 kHz range, where it peaks at 8 dB. A new alternative system that uses photonic energy to transmit both the signal and power to a photodiode and micro-actuator on an EarLens platform is

  9. The role of pars flaccida in human middle ear sound transmission.

    PubMed

    Aritomo, H; Goode, R L; Gonzalez, J

    1988-04-01

    The role of the pars flaccida in middle ear sound transmission was studied with the use of twelve otoscopically normal, fresh, human temporal bones. Peak-to-peak umbo displacement in response to a constant sound pressure level at the tympanic membrane was measured with a noncontacting video measuring system capable of repeatable measurements down to 0.2 micron. Measurements were made before and after pars flaccida modifications at 18 frequencies between 100 and 4000 Hz. Four pars flaccida modifications were studied: (1) acoustic insulation of the pars flaccida to the ear canal with a silicone rubber baffle, (2) stiffening the pars flaccida with cyanoacrylate cement, (3) decreasing the tension of the pars flaccida with a nonperforating incision, and (4) perforation of the pars flaccida. All of the modifications (except the perforation) had a minimal effect on umbo displacement; this seems to imply that the pars flaccida has a minor acoustic role in human beings.

  10. Finite-Element Modelling of the Acoustic Input Admittance of the Newborn Ear Canal and Middle Ear.

    PubMed

    Motallebzadeh, Hamid; Maftoon, Nima; Pitaro, Jacob; Funnell, W Robert J; Daniel, Sam J

    2017-02-01

    Admittance measurement is a promising tool for evaluating the status of the middle ear in newborns. However, the newborn ear is anatomically very different from the adult one, and the acoustic input admittance is different than in adults. To aid in understanding the differences, a finite-element model of the newborn ear canal and middle ear was developed and its behaviour was studied for frequencies up to 2000 Hz. Material properties were taken from previous measurements and estimates. The simulation results were within the range of clinical admittance measurements made in newborns. Sensitivity analyses of the material properties show that in the canal model, the maximum admittance and the frequency at which that maximum admittance occurs are affected mainly by the stiffness parameter; in the middle-ear model, the damping is as important as the stiffness in influencing the maximum admittance magnitude but its effect on the corresponding frequency is negligible. Scaling up the geometries increases the admittance magnitude and shifts the resonances to lower frequencies. The results suggest that admittance measurements can provide more information about the condition of the middle ear when made at multiple frequencies around its resonance.

  11. Optical diffusion property of cerumen from ear canal and correlation to metal content measured by synchrotron x-ray absorption

    NASA Astrophysics Data System (ADS)

    Holden, Todd; Dehipawala, Sumudu; Cheung, E.; Golebiewska, U.; Schneider, P.; Tremberger, G., Jr.; Kokkinos, D.; Lieberman, D.; Dehipawala, Sunil; Cheung, T.

    2012-03-01

    Humans (and other mammals) secrete cerumen (ear wax) to protect the skin of the ear canal against pathogens and insects. Studies of the biodiversity of human-associated microbes include the intestinal microbe colony, the belly-button microbe colony, etc. Metals such as zinc and iron are essential to bio-molecular pathways and are expected to be related to the underlying pathogen vitality. This project studies the biodiversity of cerumen via its metal content and aims to develop an optical probe for metal-content characterization. The optical diffusion mean free path and absorption of human cerumen samples dissolved in solvent were measured in standard transmission measurements. EXAFS and XANES were measured at the Brookhaven Synchrotron Light Source to determine the metal contents, presumably embedded within microbes/insects/skin cells. The results show that a calibration procedure can be used to correlate the optical diffusion parameters with the metal content, thus extending the diagnostic use of cerumen in the study of human pathogen biodiversity without the routine use of a synchrotron light source. Although biodiversity measurements would not be seriously affected by dead microbes, and an absorption-based method would do well, the scattering mean-free-path method has the potential to further study the cell-based scattering centers (dead or live) via the information embedded in the speckle pattern in the deep-Fresnel zone.

  12. Exposure to non-ionizing electromagnetic fields emitted from mobile phones induced DNA damage in human ear canal hair follicle cells.

    PubMed

    Akdag, Mehmet; Dasdag, Suleyman; Canturk, Fazile; Akdag, Mehmet Zulkuf

    2018-01-01

    The aim of this study was to investigate the effect of radiofrequency radiation (RFR) emitted by mobile phones on DNA damage in ear canal hair follicle cells. The study was carried out on 56 men (age range: 30-60 years) in four treatment groups with n = 14 in each group. The groups were defined as follows: people who did not use a mobile phone (control), and people who used mobile phones for 0-30 min/day (second group), 30-60 min/day (third group), or more than 60 min/day (fourth group). Ear canal hair follicle cells taken from the subjects were analyzed by the comet assay to determine DNA damage. The comet assay parameters measured were head length, tail length, comet length, head DNA percentage, tail DNA percentage, tail moment, and Olive tail moment. Results showed that DNA damage indicators were higher in the RFR exposure groups than in the control subjects. In addition, DNA damage increased with the daily duration of exposure. In conclusion, RFR emitted by mobile phones has the potential to produce DNA damage in ear canal hair follicle cells. Therefore, mobile phone users should pay more attention when using wireless phones.

  13. Analysis of sound pressure levels emitted by children's toys.

    PubMed

    Sleifer, Pricila; Gonçalves, Maiara Santos; Tomasi, Marinês; Gomes, Erissandra

    2013-06-01

    To verify the sound pressure levels emitted by non-certified children's toys. Cross-sectional study of sound toys available at popular retail stores of the so-called informal sector. Electronic, mechanical, and musical toys were analyzed. The measurement of each product was carried out by an acoustic engineer in an acoustically isolated booth, using a decibel meter. To obtain the sound parameters of intensity and frequency, the toys were set to produce sounds at distances of 10 and 50 cm from the researcher's ear. The sound pressure level [dB(A)] and the frequency in hertz (Hz) were measured. Forty-eight toys were evaluated. The mean sound pressure 10 cm from the ear was 102±10 dB(A), and at 50 cm, 94±8 dB(A), with p<0.05. The sound pressure level emitted by the majority of toys was above 85 dB(A). The frequency ranged from 413 to 6,635 Hz, with 56.3% of toys emitting frequencies higher than 2,000 Hz. The majority of toys assessed in this research emitted a high sound pressure level.

  14. Recovery of Neonatal Head Turning to Decreased Sound Pressure Level.

    ERIC Educational Resources Information Center

    Tarquinio, Nancy; And Others

    1990-01-01

    Investigated newborns' responses to decreased sound pressure level (SPL) by means of a localized head turning habituation procedure. Findings, which demonstrated recovery of neonatal head turning to decreased SPL, were inconsistent with the selective receptor adaptation model. (RH)

  15. Differential Intracochlear Sound Pressure Measurements in Normal Human Temporal Bones

    NASA Astrophysics Data System (ADS)

    Nakajima, Hideko Heidi; Dong, Wei; Olson, Elizabeth S.; Merchant, Saumil N.; Ravicz, Michael E.; Rosowski, John J.

    2009-02-01

    We present the first simultaneous sound pressure measurements in scala vestibuli and scala tympani of the cochlea in human cadaveric temporal bones. Micro-scale fiberoptic pressure sensors enabled the study of differential sound pressure at the cochlear base. This differential pressure is the input to the cochlear partition, driving cochlear waves and auditory transduction. Results showed that: the pressure in scala vestibuli was much greater than that in scala tympani, except at low and high frequencies where the scala tympani pressure affects the input to the cochlea; the differential pressure proved to be an excellent measure of normal ossicular transduction of sound (shown to decrease 30-50 dB with ossicular disarticulation, whereas the individual scala pressures were significantly affected by non-ossicular conduction of sound at high frequencies); the middle-ear gain and differential pressure were generally bandpass in frequency dependence; and the middle-ear delay in the human was over twice that of the gerbil. Concurrent stapes velocity measurements allowed determination of the differential impedance across the partition and round-window impedance. The differential impedance was generally resistive, while the round-window impedance was consistent with a compliance in conjunction with distributed inertia and damping. Our techniques can be used to study inner-ear conductive pathologies (e.g., semicircular dehiscence), as well as non-ossicular cochlear stimulation (e.g., round-window stimulation) - situations that cannot be completely quantified by measurements of stapes velocity or scala-vestibuli pressure by themselves.
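
    The differential impedance mentioned above is the differential pressure divided by the stapes volume velocity (stapes velocity times footplate area). A minimal Python sketch; the footplate area below is a nominal assumed value, not one reported in the study:

        import numpy as np

        A_FOOTPLATE = 3.2e-6   # m^2, nominal human stapes footplate area (assumed value)

        def differential_impedance(p_sv, p_st, v_stapes, footplate_area=A_FOOTPLATE):
            """Acoustic impedance across the cochlear partition: complex differential
            pressure divided by stapes volume velocity (velocity x footplate area)."""
            U = v_stapes * footplate_area
            return (p_sv - p_st) / U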

  16. Effects of the intensity of masking noise on ear canal recorded low-frequency cochlear microphonic waveforms in normal hearing subjects.

    PubMed

    Zhang, Ming

    2014-07-01

    Compared with auditory brainstem responses (ABRs), cochlear microphonics (CMs) may be more appropriate to serve as a supplement to otoacoustic emission (OAE) testing. Researchers have shown that low-frequency CMs from the apical cochlea are measurable at the tympanic membrane using high-pass masking noise. Our objective was to study the effect of such noise at different intensities on low-frequency CMs recorded in the ear canal, an effect that is not completely known. Six components were involved in this CM measurement: (1) an ear canal electrode, (2) a relatively long, low-frequency toneburst, (3) high-pass masking noise at different intensities, (4) statistical analysis based on multiple human subjects, (5) curve modeling based on the amplitudes of CM waveforms (CMWs) as a function of noise intensity, and (6) a technique based on electrocochleography (ECochG or ECoG). Results show that low-frequency CMWs appeared clearly. The CMW amplitude decreased with increasing noise level: it decreased first slowly, then faster, and finally slowly again. In conclusion, when masked with high-pass noise, low-frequency CMs are measurable at the human ear canal. Such noise reduces the low-frequency CM amplitude; the reduction is noise-intensity dependent but not completely linear. The reduction may be caused by the excited basal cochlea, through which the low-frequency sound has to travel. Although not completely clear, six mechanisms related to this reduction are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Fast Reverse Propagation of Sound in the Living Cochlea

    PubMed Central

    He, Wenxuan; Fridberger, Anders; Porsov, Edward; Ren, Tianying

    2010-01-01

    The auditory sensory organ, the cochlea, not only detects but also generates sounds. Such sounds, otoacoustic emissions, are widely used for diagnosis of hearing disorders and to estimate cochlear nonlinearity. However, the fundamental question of how the otoacoustic emission exits the cochlea remains unanswered. In this study, emissions were provoked by two tones with a constant frequency ratio, and measured as vibrations at the basilar membrane and at the stapes, and as sound pressure in the ear canal. The propagation direction and delay of the emission were determined by measuring the phase difference between basilar membrane and stapes vibrations. These measurements show that cochlea-generated sound arrives at the stapes earlier than at the measured basilar membrane location. Data also show that basilar membrane vibration at the emission frequency is similar to that evoked by external tones. These results conflict with the backward-traveling-wave theory and suggest that at low and intermediate sound levels, the emission exits the cochlea predominantly through the cochlear fluids. PMID:20513393
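
    The delay implied by a phase difference is obtained from the slope of the unwrapped phase versus frequency. A small Python helper of that kind (a generic estimate, not the authors' exact analysis):

        import numpy as np

        def delay_from_phase(phase_diff_rad, f_hz):
            """Delay (s) implied by the slope of an unwrapped phase difference vs. frequency.

            A negative delay here would mean the first signal (e.g., stapes vibration)
            leads the second (e.g., basilar-membrane vibration)."""
            slope = np.polyfit(f_hz, np.unwrap(phase_diff_rad), 1)[0]   # rad per Hz
            return -slope / (2 * np.pi)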

  18. Sounding experiments of high pressure gas discharge

    SciT

    Biele, Joachim K.

    A high-pressure discharge experiment (200 MPa, 5×10²¹ molecules/cm³, 3000 K) has been set up to study electrically induced shock waves. The apparatus consists of a combustion chamber (4.2 cm³) that produces high-pressure gas by burning solid propellant grains to fill the electrical pump chamber (2.5 cm³), which contains an insulated coaxial electrode. Electrical pump energy up to 7.8 kJ at 10 kV, roughly three times the gas energy in the pump chamber, was delivered by a capacitor bank. The current-voltage relationship shows that the discharge develops at rapidly decreasing voltage. Pressure at the combustion chamber, which shows significant underpressure as well as overpressure peaks, is followed by an increase of the static pressure level. These data are not yet completely understood; however, Lorentz forces are believed to generate pinching with subsequent pinch heating, resulting in fast pressure variations that propagate as rarefaction and shock waves, respectively. By using purely axisymmetric electrode initiation in the pump chamber, rather than the often-used exploding-wire technique, repeatable experiments were achieved.

  19. A Comparative Study of Sound Speed in Air at Room Temperature between a Pressure Sensor and a Sound Sensor

    ERIC Educational Resources Information Center

    Amrani, D.

    2013-01-01

    This paper deals with the comparison of sound speed measurements in air using two types of sensor that are widely employed in physics and engineering education, namely a pressure sensor and a sound sensor. A computer-based laboratory with pressure and sound sensors was used to carry out measurements of air through a 60 ml syringe. The fast Fourier…

  20. Audio spectrum and sound pressure levels vary between pulse oximeters.

    PubMed

    Chandra, Deven; Tessler, Michael J; Usher, John

    2006-01-01

    The variable-pitch pulse oximeter is an important intraoperative patient monitor. Our ability to hear its auditory signal depends on its acoustical properties and our hearing. This study quantitatively describes the audio spectrum and sound pressure levels of the monitoring tones produced by five variable-pitch pulse oximeters. We compared the Datex-Ohmeda Capnomac Ultima, Hewlett-Packard M1166A, Datex-Engstrom AS/3, Ohmeda Biox 3700, and Datex-Ohmeda 3800 oximeters. Three machines of each of the five models were assessed for sound pressure levels (using a precision sound level meter) and audio spectrum (using a Hanning-windowed fast Fourier transform of three beats at saturations of 99%, 90%, and 85%). The widest range of sound pressure levels was produced by the Hewlett-Packard M1166A (46.5 +/- 1.74 dB to 76.9 +/- 2.77 dB). The loudest model was the Datex-Engstrom AS/3 (89.2 +/- 5.36 dB). Three oximeters, when set to the lower ranges of their volume settings, were indistinguishable from background operating room noise. Each model produced sounds with different audio spectra. Although each model produced a fundamental tone with multiple harmonic overtones, the number of harmonics varied with each model, from three harmonic tones on the Hewlett-Packard M1166A to 12 on the Ohmeda Biox 3700. There were variations between models, and between individual machines of the same model, with respect to the fundamental tone associated with a given saturation. There is considerable variance in the sound pressure and audio spectrum of commercially available pulse oximeters. Further studies are warranted in order to establish standards.
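
    An illustrative sketch of the spectral-analysis step described above (a Hann-windowed FFT used to pick out a tone's fundamental and harmonics). This is not the authors' analysis code; the sampling rate and the synthetic stand-in tone are assumptions.

        import numpy as np

        fs = 44100                          # assumed sampling rate, Hz
        t = np.arange(0, 0.5, 1 / fs)       # 0.5 s of signal
        # synthetic stand-in for one oximeter tone: fundamental plus two harmonics
        x = (np.sin(2 * np.pi * 880 * t)
             + 0.3 * np.sin(2 * np.pi * 1760 * t)
             + 0.1 * np.sin(2 * np.pi * 2640 * t))

        w = np.hanning(len(x))              # Hann (hanning) window
        X = np.fft.rfft(x * w)              # one-sided spectrum
        f = np.fft.rfftfreq(len(x), 1 / fs)

        peak = f[np.argmax(np.abs(X))]
        print(f"estimated fundamental: {peak:.1f} Hz")   # ~880 Hz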

  1. A Sound Pressure-level Meter Without Amplification

    NASA Technical Reports Server (NTRS)

    Stowell, E Z

    1937-01-01

    The N.A.C.A. has developed a simple pressure-level meter for the measurement of sound-pressure levels above 70 db. The instrument employs a carbon microphone but has no amplification. The source of power is five flashlight batteries. Measurements may be made up to the threshold of feeling with an accuracy of plus or minus 2 db; band analysis of complex spectra may be made if desired.

  2. Wind turbine sound pressure level calculations at dwellings.

    PubMed

    Keith, Stephen E; Feder, Katya; Voicescu, Sonia A; Soukhovtsev, Victor; Denning, Allison; Tsang, Jason; Broner, Norm; Leroux, Tony; Richarz, Werner; van den Berg, Frits

    2016-03-01

    This paper provides calculations of outdoor sound pressure levels (SPLs) at dwellings for 10 wind turbine models, to support Health Canada's Community Noise and Health Study. Manufacturer supplied and measured wind turbine sound power levels were used to calculate outdoor SPL at 1238 dwellings using ISO 9613-2 [ISO (1996), Acoustics] and a Swedish noise propagation method. Both methods yielded statistically equivalent results. The A- and C-weighted results were highly correlated over the 1238 dwellings (Pearson's linear correlation coefficient r > 0.8). Calculated wind turbine SPLs were compared to ambient SPLs from other sources, estimated using guidance documents from the United States and Alberta, Canada.
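
    A minimal sketch of the kind of point-source propagation calculation such methods perform: geometric divergence plus atmospheric absorption, with ground effects, barriers, and directivity ignored. The sound power level and absorption coefficient below are illustrative assumptions, not values from the study.

        import math

        def spl_at_distance(lw_db, distance_m, alpha_db_per_km=2.0):
            """Approximate SPL from a point source of sound power level lw_db (dB re 1 pW)."""
            a_div = 20 * math.log10(distance_m / 1.0) + 11   # spherical spreading, d0 = 1 m
            a_atm = alpha_db_per_km * distance_m / 1000.0    # atmospheric absorption
            return lw_db - a_div - a_atm

        print(round(spl_at_distance(105.0, 500.0), 1))       # ~39 dB at 500 m for Lw = 105 dB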

  3. New HRCT-based measurement of the human outer ear canal as a basis for acoustical methods.

    PubMed

    Grewe, Johanna; Thiele, Cornelia; Mojallal, Hamidreza; Raab, Peter; Sankowsky-Rothe, Tobias; Lenarz, Thomas; Blau, Matthias; Teschner, Magnus

    2013-06-01

    As the form and size of the external auditory canal determine its transmitting function and hence the sound pressure in front of the eardrum, it is important to understand its anatomy in order to develop, optimize, and compare acoustical methods. High-resolution computed tomography (HRCT) data were measured retrospectively for 100 patients who had received a cochlear implant. In order to visualize the anatomy of the auditory canal, its length, radius, and the angle at which it runs were determined for the patients’ right and left ears. The canal’s volume was calculated, and a radius function was created. The determined length of the auditory canal averaged 23.6 mm for the right ear and 23.5 mm for the left ear. The calculated auditory canal volume (Vtotal) was 0.7 ml for the right ear and 0.69 ml for the left ear. The auditory canal was found to be significantly longer in men than in women, and the volume greater. The values obtained can be employed to develop a method that represents the shape of the auditory canal as accurately as possible to allow the best possible outcomes for hearing aid fitting.
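
    A rough illustration of why the measured geometry matters acoustically, under the crude assumption of a uniform tube rigidly closed at the eardrum: the reported mean canal length puts the quarter-wave resonance in the familiar ear-canal resonance region. Real canals are neither uniform nor rigidly terminated, so this is only an order-of-magnitude check.

        c = 343.0     # speed of sound in air, m/s (assumed)
        L = 23.6e-3   # mean canal length reported above, m
        print(round(c / (4 * L)))   # quarter-wave resonance, ~3600 Hz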

  4. External ear canal exostosis and otitis media in temporal bones of prehistoric and historic chilean populations. A paleopathological and paleoepidemiological study.

    PubMed

    Castro, Mario; Goycoolea, Marcos; Silva-Pinto, Verónica

    2017-04-01

    External ear canal exostosis is more prevalent in northern coastal groups than in the highlands, suggesting that ocean activities facilitate the appearance of exostosis. However, southern coastal groups exposed to colder ocean water have a lesser incidence of exostosis, possibly due to shorter duration of exposure. There was a high incidence of otitis media in all groups of native population in Chile. One coastal group had a higher incidence, presumably due to racial factors. This is a paleopathological and paleoepidemiological study in temporal bones which assesses external ear canal exostosis and otitis media in prehistoric and historic native populations in Chile. A total of 460 temporal bones were evaluated for exostosis (ex) and 542 temporal bones were evaluated for otitis media (om). The study involved four groups: (1) Prehistoric Coastal (400-1000 AD) populations in Northern Chile (Pisagua-Tiwanaku) (22 temporal bones ex; 28 om); (2) Prehistoric Highland (400-1000 AD) populations in Northern Chile (292 temporal bones ex; 334 om); (3) Pisagua-Regional Developments (coastal) in Northern Chile (1000-1450 AD) (66 temporal bones ex; 82 om); and (4) Historic (1500-1800 AD) coastal populations in Southern Chile (80 temporal bones ex: 18 Chonos, 62 Fuegians; 98 om: 22 Chonos, 76 Fuegians). Skulls were evaluated visually and with an operating microscope. In addition, the otitis media group was evaluated with temporal bone radiology (lateral X-rays, Schuller view) to assess pneumatization as evidence of previous middle ear disease. Prehistoric northern coastal groups had an incidence of exostosis of 15.91%, the northern highlands group 1.37%, and the southern coastal group 1.25%. There were changes suggestive of otitis media in: Pisagua/Tiwanaku 53.57%; Pisagua/Regional Developments 70.73%; Northern Highlands population 47.90%; Chonos 63.64%; and Fuegian tribes 64.47%.

  5. Prediction of light aircraft interior sound pressure level from the measured sound power flowing in to the cabin

    NASA Technical Reports Server (NTRS)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1986-01-01

    The validity of the room equation of Crocker and Price (1982) for predicting the cabin interior sound pressure level was experimentally tested using a specially constructed setup for simultaneous measurements of transmitted sound intensity and interior sound pressure levels. Using measured values of the reverberation time and transmitted intensities, the equation was used to predict the space-averaged interior sound pressure level for three different fuselage conditions. The general agreement between the room equation and experimental test data is considered good enough for this equation to be used for preliminary design studies.

  6. Sound absorption of a porous material with a perforated facing at high sound pressure levels

    NASA Astrophysics Data System (ADS)

    Peng, Feng

    2018-07-01

    A semi-empirical model is proposed to predict the sound absorption of an acoustical unit consisting of a rigid-porous material layer with a perforated facing under normal incidence at high sound pressure levels (SPLs) of pure tones. The nonlinearity of the perforated facing and the porous material, and the interference between them, are considered in the model. The sound absorptive performance of the acoustical unit is tested at different incident SPLs and in three typical configurations: 1) when the perforated panel (PP) is in direct contact with the porous layer, 2) when the PP is separated from the porous layer by an air gap, and 3) when an air cavity is set between the porous material and the hard backing wall. The test results agree well with the corresponding theoretical predictions. Moreover, the results show that the interference effect is correlated to the width of the air gap between the PP and the porous layer, which alters not only the linear acoustic impedance but also the nonlinear acoustic impedance of the unit and hence its sound absorptive properties.

  7. Gerbil middle-ear sound transmission from 100 Hz to 60 kHz

    PubMed Central

    Ravicz, Michael E.; Cooper, Nigel P.; Rosowski, John J.

    2008-01-01

    Middle-ear sound transmission was evaluated as the middle-ear transfer admittance HMY (the ratio of stapes velocity to ear-canal sound pressure near the umbo) in gerbils during closed-field sound stimulation at frequencies from 0.1 to 60 kHz, a range that spans the gerbil’s audiometric range. Similar measurements were performed in two laboratories. The HMY magnitude (a) increased with frequency below 1 kHz, (b) remained approximately constant with frequency from 5 to 35 kHz, and (c) decreased substantially from 35 to 50 kHz. The HMY phase increased linearly with frequency from 5 to 35 kHz, consistent with a 20–29 μs delay, and flattened at higher frequencies. Measurements from different directions showed that stapes motion is predominantly pistonlike except in a narrow frequency band around 10 kHz. Cochlear input impedance was estimated from HMY and previously-measured cochlear sound pressure. Results do not support the idea that the middle ear is a lossless matched transmission line. Results support the ideas that (1) middle-ear transmission is consistent with a mechanical transmission line or multiresonant network between 5 and 35 kHz and decreases at higher frequencies, (2) stapes motion is pistonlike over most of the gerbil auditory range, and (3) middle-ear transmission properties are a determinant of the audiogram. PMID:18646983
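
    For illustration (not the authors' analysis code), the delay implied by a linearly varying phase-versus-frequency segment of a transfer function can be estimated from its slope, tau = -(1/2π) dφ/df. The 25-µs value below is an assumed example within the range quoted above.

        import numpy as np

        f = np.linspace(5e3, 35e3, 200)      # Hz, band where the phase is linear
        tau_true = 25e-6                     # assumed delay, s
        phase = -2 * np.pi * f * tau_true    # unwrapped phase, radians

        slope = np.polyfit(f, phase, 1)[0]   # d(phase)/df in rad/Hz
        print(f"estimated delay: {-slope / (2 * np.pi) * 1e6:.1f} us")   # ~25 us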

  8. Temperature/pressure and water vapor sounding with microwave spectroscopy

    NASA Technical Reports Server (NTRS)

    Muhleman, D. O.; Janssen, M. A.; Clancy, R. T.; Gulkis, S.; Mccleese, D. J.; Zurek, R.; Haberle, R. M.; Frerking, M.

    1992-01-01

    Two intense microwave spectral lines exist in the martian atmosphere that allow unique sounding capabilities: water vapor at 183 GHz and the (2-1) rotational line of CO at 230 GHz. Microwave spectral-line sounding is a well-developed technique for the Earth's atmosphere, used for sounding from above from spacecraft and airplanes and from below from fixed surface sites. Two simple instruments for temperature sounding on Mars (the CO line) and water vapor measurements are described. The surface sounder proposed for the MESUR sites is designed to study the boundary layer water vapor distribution and the temperature/pressure profiles with vertical resolution of 0.25 km up to 1 km, with reduced resolution above, approaching a scale height. The water channel will be sensitive to a few tenths of a micrometer of water, and the temperature profile will be retrieved to an accuracy between 1 and 2 K. The latter is routinely done on the Earth using oxygen lines near 60 GHz. The measurements are made with a single-channel heterodyne receiver looking into a 10-cm mirror that is scanned through a range of elevation angles plus a target load. The frequency of the receiver is swept across the water and CO lines, generating the two spectra at about 1-hr intervals throughout the mission. The mass and power for the proposed instrument are 2 kg and 5-8 W continuously. The measurements are completely immune to the atmospheric dust and ice particle loads. These measurements are considered the most suitable way to properly study the martian boundary layer from the surface to a few kilometers. Sounding from above requires an orbiting spacecraft with multichannel microwave spectrometers such as the instrument proposed for MO by a subset of the authors, a putative MESUR orbiter, and a proposed Discovery mission called MOES. Such an instrument can be built with less than 10 kg and use less than 15 W. The obvious advantage of this approach is that the entire atmosphere can be sounded for temperature and

  9. Middle-Ear Pressure Gain and Cochlear Partition Differential Pressure in Chinchilla

    PubMed Central

    Ravicz, Michael E.; Slama, Michaël C.C.; Rosowski, John J.

    2009-01-01

    An important step to describe the effects of inner-ear impedance and pathologies on middle- and inner-ear mechanics is to quantify middle- and inner-ear function in the normal ear. We present middle-ear pressure gain GMEP and trans-cochlear-partition differential sound pressure ΔPCP in chinchilla from 100 Hz to 30 kHz derived from measurements of intracochlear sound pressures in scala vestibuli PSV and scala tympani PST and ear-canal sound pressure near the tympanic membrane PTM. These measurements span the chinchilla's auditory range. GMEP had constant magnitude of about 20 dB between 300 Hz and 20 kHz and phase that implies a 40-μs delay, values with some similarities to previous measurements in chinchilla and other species. ΔPCP was similar to GMEP below about 10 kHz and lower in magnitude at higher frequencies, decreasing to 0 dB at 20 kHz. The high-frequency rolloff correlates with the audiogram and supports the idea that middle-ear transmission limits high-frequency hearing, providing a stronger link between inner-ear macromechanics and hearing. We estimate the cochlear partition impedance ZCP from these and previous data. The chinchilla may be a useful animal model for exploring the effects of nonacoustic inner-ear stimulation such as “bone conduction” on cochlear mechanics. PMID:19945521

  10. Sound pressure gain produced by the human middle ear.

    PubMed

    Kurokawa, H; Goode, R L

    1995-10-01

    The acoustic function of the middle ear is to match sound passing from the low impedance of air to the high impedance of cochlear fluid. Little information is available on the actual middle ear pressure gain in human beings. This article describes experiments on middle ear pressure gain in six fresh human temporal bones. Stapes footplate displacement and phase were measured with a laser Doppler vibrometer before and after removal of the tympanic membrane, malleus, and incus. Acoustic insulation of the round window with clay was performed. Umbo displacement was also measured before tympanic membrane removal to assess baseline tympanic membrane function. The middle ear has its major gain in the lower frequencies, with a peak near 0.9 kHz. The mean gain was 23.0 dB below 1.0 kHz, the resonant frequency of the middle ear; the mean peak gain was 26.6 dB. Above 1.0 kHz, the pressure gain decreased at a rate of -8.6 dB/octave, with a mean gain of 6.5 dB at 4.0 kHz. Only a small amount of gain was present above 7.0 kHz. Significant individual differences in pressure gain were found between ears that appeared related to variations in tympanic membrane function and not to variations in cochlear impedance.

  11. Procedures for ambient-pressure and tympanometric tests of aural acoustic reflectance and admittance in human infants and adults

    PubMed Central

    Keefe, Douglas H.; Hunter, Lisa L.; Feeney, M. Patrick; Fitzpatrick, Denis F.

    2015-01-01

    Procedures are described to measure acoustic reflectance and admittance in human adult and infant ears at frequencies from 0.2 to 8 kHz. Transfer functions were measured at ambient pressure in the ear canal, and as down- or up-swept tympanograms. Acoustically estimated ear-canal area was used to calculate ear reflectance, which was parameterized by absorbance and group delay over all frequencies (and pressures), with substantial data reduction for tympanograms. Admittance measured at the probe tip in adults was transformed into an equivalent admittance at the eardrum using a transmission-line model for an ear canal with specified area and ear-canal length. Ear-canal length was estimated from group delay around the frequency above 2 kHz of minimum absorbance. Illustrative measurements in ears with normal function are described for an adult, and two infants at 1 month of age with normal hearing and a conductive hearing loss. The sensitivity of this equivalent eardrum admittance was calculated for varying estimates of area and length. Infant-ear patterns of absorbance peaks aligned in frequency with dips in group delay were explained by a model of resonant canal-wall mobility. Procedures will be applied in a large study of wideband clinical diagnosis and monitoring of middle-ear and cochlear function. PMID:26723319
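
    A hedged sketch of the type of transformation described above: for a lossless uniform tube of cross-sectional area A and length L, the acoustic admittance measured at the probe tip maps to an equivalent admittance at the far (eardrum) end through the standard transmission-line relation. The study's actual model and parameter estimates may differ; the numbers below are illustrative only.

        import numpy as np

        def eardrum_admittance(y_probe, freq_hz, area_m2, length_m, rho=1.18, c=345.0):
            """Map probe-tip admittance to the load (eardrum) end of a lossless tube."""
            y0 = area_m2 / (rho * c)                     # characteristic acoustic admittance
            t = np.tan(2 * np.pi * freq_hz / c * length_m)
            return y0 * (y_probe - 1j * y0 * t) / (y0 - 1j * y_probe * t)

        # example: assumed probe-tip admittance at 2 kHz, 44 mm^2 canal area, 12 mm residual length
        print(eardrum_admittance(1e-8 + 2e-8j, 2000.0, 44e-6, 12e-3))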

  12. Prediction of light aircraft interior sound pressure level using the room equation

    NASA Technical Reports Server (NTRS)

    Atwal, M.; Bernhard, R.

    1984-01-01

    The room equation is investigated for predicting interior sound level. The method makes use of an acoustic power balance, by equating net power flow into the cabin volume to power dissipated within the cabin using the room equation. The sound power level transmitted through the panels was calculated by multiplying the measured space averaged transmitted intensity for each panel by its surface area. The sound pressure level was obtained by summing the mean square sound pressures radiated from each panel. The data obtained supported the room equation model in predicting the cabin interior sound pressure level.
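
    A minimal sketch of a diffuse-field room equation of this kind: the space-averaged interior SPL is estimated from the total transmitted sound power level and the cabin's room constant, here derived from the reverberation time via Sabine's formula. Direct-field terms and small characteristic-impedance corrections are neglected, and the cabin numbers are illustrative assumptions rather than values from the study.

        import math

        def room_constant(volume_m3, surface_m2, t60_s):
            a_sabine = 0.161 * volume_m3 / t60_s      # total absorption, m^2
            alpha_bar = a_sabine / surface_m2         # average absorption coefficient
            return a_sabine / (1.0 - alpha_bar)       # room constant R

        def interior_spl(lw_transmitted_db, r_room):
            # reverberant-field estimate: Lp ~ Lw + 10*log10(4/R)
            return lw_transmitted_db + 10 * math.log10(4.0 / r_room)

        R = room_constant(volume_m3=5.0, surface_m2=20.0, t60_s=0.15)
        print(round(interior_spl(90.0, R), 1))        # ~87 dB for 90 dB transmitted power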

  13. Generalization of low pressure, gas-liquid, metastable sound speed to high pressures

    NASA Technical Reports Server (NTRS)

    Bursik, J. W.; Hall, R. M.

    1981-01-01

    A theory is developed for isentropic metastable sound propagation in high pressure gas-liquid mixtures. Without simplification, it also correctly predicts the minimum speed for low pressure air-water measurements where other authors are forced to postulate isothermal propagation. This is accomplished by a mixture heat capacity ratio which automatically adjusts from its single phase values to approximately the isothermal value of unity needed for the minimum speed. Computations are made for the pure components parahydrogen and nitrogen, with emphasis on the latter. With simplifying assumptions, the theory reduces to a well known approximate formula limited to low pressure.

  14. Sound pressure level gain in an acoustic metamaterial cavity.

    PubMed

    Song, Kyungjun; Kim, Kiwon; Hur, Shin; Kwak, Jun-Hyuk; Park, Jihyun; Yoon, Jong Rak; Kim, Jedo

    2014-12-11

    The inherent attenuation of a homogeneous viscous medium limits radiation propagation, thereby restricting the use of many high-frequency acoustic devices to only short-range applications. Here, we design and experimentally demonstrate an acoustic metamaterial localization cavity which is used for sound pressure level (SPL) gain using double coiled up space like structures thereby increasing the range of detection. This unique behavior occurs within a subwavelength cavity that is 1/10th of the wavelength of the incident acoustic wave, which provides up to a 13 dB SPL gain. We show that the amplification results from the Fabry-Perot resonance of the cavity, which has a simultaneously high effective refractive index and effective impedance. We also experimentally verify the SPL amplification in an underwater environment at higher frequencies using a sample with an identical unit cell size. The versatile scalability of the design shows promising applications in many areas, especially in acoustic imaging and underwater communication.

  15. Sound attenuation in the ear of domestic chickens (Gallus gallus domesticus) as a result of beak opening

    PubMed Central

    Claes, Raf; Dirckx, Joris J. J.

    2017-01-01

    Because the quadrate and the eardrum are connected, the hypothesis was tested that birds attenuate the transmission of sound through their ears by opening the bill, which potentially serves as an additional protective mechanism for self-generated vocalizations. In domestic chickens, it was examined whether a difference exists between hens and roosters, given the difference in vocalization capacity between the sexes. To test the hypothesis, vibrations of the columellar footplate were measured ex vivo with laser Doppler vibrometry (LDV) for closed and maximally opened beak conditions, with sounds introduced at the ear canal. The average attenuation was 3.5 dB in roosters and only 0.5 dB in hens. To demonstrate the importance of a putative protective mechanism, audio recordings were performed of a crowing rooster. Sound pressure levels of 133.5 dB were recorded near the ears. The frequency content of the vocalizations was in accordance with the range of highest hearing sensitivity in chickens. The results indicate a small but significant difference in sound attenuation between hens and roosters. However, the amount of attenuation as measured in the experiments on both hens and roosters is small and will provide little effective protection in addition to other mechanisms such as stapedius muscle activity. PMID:29291112

  16. Effects of sound-field frequency modulation amplification on reducing teachers' sound pressure level in the classroom.

    PubMed

    Sapienza, C M; Crandell, C C; Curtis, B

    1999-09-01

    Voice problems are a frequent difficulty that teachers experience. Common complaints by teachers include vocal fatigue and hoarseness. One possible explanation for these symptoms is prolonged elevations in vocal loudness within the classroom. This investigation examined the effectiveness of sound-field frequency modulation (FM) amplification on reducing the sound pressure level (SPL) of the teacher's voice during classroom instruction. Specifically, SPL was examined during speech produced in a classroom lecture by 10 teachers with and without the use of sound-field amplification. Results indicated a significant 2.42-dB decrease in SPL with the use of sound-field FM amplification. These data support the use of sound-field amplification in the vocal hygiene regimen recommended to teachers by speech-language pathologists.

  17. Stream ambient noise, spectrum and propagation of sounds in the goby Padogobius martensii: sound pressure and particle velocity.

    PubMed

    Lugli, Marco; Fine, Michael L

    2007-11-01

    The most sensitive hearing and peak frequencies of courtship calls of the stream goby, Padogobius martensii, fall within a quiet window at around 100 Hz in the ambient noise spectrum. Acoustic pressure was previously measured although Padogobius likely responds to particle motion. In this study a combination pressure (p) and particle velocity (u) detector was utilized to describe ambient noise of the habitat, the characteristics of the goby's sounds and their attenuation with distance. The ambient noise (AN) spectrum is generally similar for p and u (including the quiet window at noisy locations), although the energy distribution of the u spectrum is shifted up by 50-100 Hz. The energy distribution of the goby's sounds is similar for the p and u spectra of the Tonal sound, whereas the pulse-train sound exhibits larger p-u differences. Transmission loss was high for sound p and u: energy decays 6-10 dB/10 cm, and the sound p/u ratio does not change with distance from the source in the nearfield. The measurement of particle velocity of stream AN and P. martensii sounds indicates that this species is well adapted to communicate acoustically in a complex noisy shallow-water environment.

  18. Sound Pressure Level Gain in an Acoustic Metamaterial Cavity

    PubMed Central

    Song, Kyungjun; Kim, Kiwon; Hur, Shin; Kwak, Jun-Hyuk; Park, Jihyun; Yoon, Jong Rak; Kim, Jedo

    2014-01-01

    The inherent attenuation of a homogeneous viscous medium limits radiation propagation, thereby restricting the use of many high-frequency acoustic devices to only short-range applications. Here, we design and experimentally demonstrate an acoustic metamaterial localization cavity which is used for sound pressure level (SPL) gain using double coiled up space like structures thereby increasing the range of detection. This unique behavior occurs within a subwavelength cavity that is 1/10th of the wavelength of the incident acoustic wave, which provides up to a 13 dB SPL gain. We show that the amplification results from the Fabry-Perot resonance of the cavity, which has a simultaneously high effective refractive index and effective impedance. We also experimentally verify the SPL amplification in an underwater environment at higher frequencies using a sample with an identical unit cell size. The versatile scalability of the design shows promising applications in many areas, especially in acoustic imaging and underwater communication. PMID:25502279

  19. Finite element modeling of sound transmission with perforations of tympanic membrane

    PubMed Central

    Gan, Rong Z.; Cheng, Tao; Dai, Chenkai; Yang, Fan; Wood, Mark W.

    2009-01-01

    A three-dimensional finite element (FE) model of human ear with structures of the external ear canal, middle ear, and cochlea has been developed recently. In this paper, the FE model was used to predict the effect of tympanic membrane (TM) perforations on sound transmission through the middle ear. Two perforations were made in the posterior-inferior quadrant and inferior site of the TM in the model with areas of 1.33 and 0.82 mm2, respectively. These perforations were also created in human temporal bones with the same size and location. The vibrations of the TM (umbo) and stapes footplate were calculated from the model and measured from the temporal bones using laser Doppler vibrometers. The sound pressure in the middle ear cavity was derived from the model and measured from the bones. The results demonstrate that the TM perforations can be simulated in the FE model with geometrical visualization. The FE model provides reasonable predictions on effects of perforation size and location on middle ear transfer function. The middle ear structure-function relationship can be revealed with multi-field coupled FE analysis. PMID:19603881

  20. Effect of ultrasonic cavitation on measurement of sound pressure using hydrophone

    NASA Astrophysics Data System (ADS)

    Thanh Nguyen, Tam; Asakura, Yoshiyuki; Okada, Nagaya; Koda, Shinobu; Yasuda, Keiji

    2017-07-01

    Effect of ultrasonic cavitation on sound pressure at the fundamental, second harmonic, and first ultraharmonic frequencies was investigated from low to high ultrasonic intensities. The driving frequencies were 22, 304, and 488 kHz. Sound pressure was measured using a needle-type hydrophone and ultrasonic cavitation was estimated from the broadband integrated pressure (BIP). With increasing square root of electric power applied to a transducer, the sound pressure at the fundamental frequency linearly increased initially, dropped at approximately the electric power of cavitation inception, and afterward increased again. The sound pressure at the second harmonic frequency was detected just below the electric power of cavitation inception. The first ultraharmonic component appeared at around the electric power of cavitation inception at 304 and 488 kHz. However, at 22 kHz, the first ultraharmonic component appeared at a higher electric power than that of cavitation inception.

  1. Calculating far-field radiated sound pressure levels from NASTRAN output

    NASA Technical Reports Server (NTRS)

    Lipman, R. R.

    1986-01-01

    FAFRAP is a computer program which calculates far field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.

  2. Absolute measurement of the Hugoniot and sound velocity of liquid copper at multimegabar pressures

    SciT

    McCoy, Chad August; Knudson, Marcus David; Root, Seth

    Measurement of the Hugoniot and sound velocity provides information on the bulk modulus and Grüneisen parameter of a material at extreme conditions. The capability to launch multilayered (copper/aluminum) flyer plates at velocities in excess of 20 km/s with the Sandia Z accelerator has enabled high-precision sound-velocity measurements at previously inaccessible pressures. For these experiments, the sound velocity of the copper flyer must be accurately known in the multi-Mbar regime. Here we describe the development of copper as an absolutely calibrated sound-velocity standard for high-precision measurements at pressures in excess of 400 GPa. Using multilayered flyer plates, we performed absolute measurements of the Hugoniot and sound velocity of copper for pressures from 500 to 1200 GPa. These measurements enabled the determination of the Grüneisen parameter for dense liquid copper, clearly showing a density dependence above the melt transition. As a result, combined with earlier data at lower pressures, these results constrain the sound velocity as a function of pressure, enabling the use of copper as a Hugoniot and sound-velocity standard for pressures up to 1200 GPa.

  3. Absolute measurement of the Hugoniot and sound velocity of liquid copper at multimegabar pressures

    DOE PAGES

    McCoy, Chad August; Knudson, Marcus David; Root, Seth

    2017-11-13

    Measurement of the Hugoniot and sound velocity provides information on the bulk modulus and Grüneisen parameter of a material at extreme conditions. The capability to launch multilayered (copper/aluminum) flyer plates at velocities in excess of 20 km/s with the Sandia Z accelerator has enabled high-precision sound-velocity measurements at previously inaccessible pressures. For these experiments, the sound velocity of the copper flyer must be accurately known in the multi-Mbar regime. Here we describe the development of copper as an absolutely calibrated sound-velocity standard for high-precision measurements at pressures in excess of 400 GPa. Using multilayered flyer plates, we performed absolute measurements of the Hugoniot and sound velocity of copper for pressures from 500 to 1200 GPa. These measurements enabled the determination of the Grüneisen parameter for dense liquid copper, clearly showing a density dependence above the melt transition. As a result, combined with earlier data at lower pressures, these results constrain the sound velocity as a function of pressure, enabling the use of copper as a Hugoniot and sound-velocity standard for pressures up to 1200 GPa.

  4. Pressure generation during neural stimulation with infrared radiation

    NASA Astrophysics Data System (ADS)

    Xia, N.; Tan, X.; Xu, Y.; Richter, C.-P.

    2017-02-01

    This study quantifies laser evoked pressure waves in small confined volumes such as a small dish or the cochlea. The pressure was measured with custom fabricated pressure probes in front of the optical fiber. For the pressure measurements during laser stimulation the probes were inserted into scala tympani or vestibuli. At 164 μJ/pulse, the intracochlear pressure was between 96 and 106 dB (re 20 μPa). The pressure was also measured in the ear canal with a sensitive microphone. It was on average 63 dB (re 20 μPa). At radiant energies large enough to evoke an auditory compound action potential, the outer ear canal equivalent pressure was 36-56 dB (re 20 μPa).

  5. The origin of Korotkoff sounds and the accuracy of auscultatory blood pressure measurements.

    PubMed

    Babbs, Charles F

    2015-12-01

    This study explores the hypothesis that the sharper, high frequency Korotkoff sounds come from resonant motion of the arterial wall, which begins after the artery transitions from a buckled state to an expanding state. The motions of one mass, two nonlinear springs, and one damper, driven by transmural pressure under the cuff, are used to model and compute the Korotkoff sounds according to principles of classical Newtonian physics. The natural resonance of this spring-mass-damper system provides a concise, yet rigorous, explanation for the origin of Korotkoff sounds. Fundamentally, wall stretching in expansion requires more force than wall bending in buckling. At cuff pressures between systolic and diastolic arterial pressure, audible vibrations (> 40 Hz) occur during early expansion of the artery wall beyond its zero pressure radius after the outward moving mass of tissue experiences sudden deceleration, caused by the discontinuity in stiffness between buckled and expanded states. The idealized spring-mass-damper model faithfully reproduces the time-domain waveforms of actual Korotkoff sounds in humans. Appearance of arterial sounds occurs at or just above the level of systolic pressure. Disappearance of arterial sounds occurs at or just above the level of diastolic pressure. Muffling of the sounds is explained by increased resistance of the artery to collapse, caused by downstream venous engorgement. A simple analytical model can define the physical origin of Korotkoff sounds, suggesting improved mechanical or electronic filters for their selective detection and confirming the disappearance of the Korotkoff sounds as the optimal diastolic end point. Copyright © 2015 American Society of Hypertension. Published by Elsevier Inc. All rights reserved.

  6. Influences of pressure on methyl group, elasticity, sound velocity and sensitivity of solid nitromethane

    NASA Astrophysics Data System (ADS)

    Zhong, Mi; Liu, Qi-Jun; Qin, Han; Jiao, Zhen; Zhao, Feng; Shang, Hai-Lin; Liu, Fu-Sheng; Liu, Zheng-Tang

    2017-06-01

    First-principles calculations were employed to investigate the influences of pressure on the methyl group, elasticity, sound velocity and sensitivity of solid nitromethane. The obtained structural parameters based on the GGA-PBE+G calculations are in good agreement with theoretical and experimental data. Rotation of the methyl group appears under pressure, which influences the mechanical properties, thermal properties and sensitivity of solid NM. The anisotropy of elasticity, sound velocity and Debye temperature under pressure has been shown, and these quantities are related to the thermal properties of solid NM. The enhanced sensitivity with increasing pressure has been discussed, and the change of the most likely transition path is associated with the methyl group.

  7. High sound pressure levels in Bavarian discotheques remain after introduction of voluntary agreements.

    PubMed

    Twardella, Dorothee; Wellhoefer, Andrea; Brix, Jutta; Fromme, Hermann

    2008-01-01

    While no legal rules or regulations exist in Germany, voluntary measures were introduced to achieve a reduction of sound pressure levels in discotheques to levels below 100 dB(A). To evaluate the current levels in Bavarian discotheques and to find out whether these voluntary measures ensured compliance with the recommended limits, sound pressure levels were measured in 20 Bavarian discotheques between 11 p.m. and 2 a.m. With respect to the equivalent continuous A-weighted sound pressure level for each 30-minute period (LAeq,30min), only 4/20 discotheques remained below the limit of 100 dB(A) in all time periods. Ten discotheques had sound pressure levels below 100 dB(A) for the total measurement period (LAeq,180min). None of the evaluated factors (weekday, size, estimated age of attendees, the use of voluntary measures such as participation of disc jockeys in a tutorial, or the availability of a sound level meter for the DJs) was significantly associated with the maximal LAeq,30min. Thus, the introduction of voluntary measures was not sufficient to ensure compliance with the recommended limits of sound pressure levels.

  8. A national project to evaluate and reduce high sound pressure levels from music.

    PubMed

    Ryberg, Johanna Bengtsson

    2009-01-01

    The highest recommended sound pressure levels for leisure sounds (music) in Sweden are 100 dB LAeq and 115 dB LAFmax for adults, and 97 dB LAeq and 110 dB LAFmax where children under the age of 13 have access. For arrangements intended for children, levels should be consistently less than 90 dB LAeq. In 2005, a national project was carried out with the aim of improving environments with high sound pressure levels from music, such as concert halls, restaurants, and cinemas. The project covered both live and recorded music. Of Sweden's 290 municipalities, 134 took part in the project, and 93 of these carried out sound measurements. Four hundred and seventy one establishments were investigated, 24% of which exceeded the highest recommended sound pressure levels for leisure sounds in Sweden. Of festival and concert events, 42% exceeded the recommended levels. Those who visit music events/establishments thus run a relatively high risk of exposure to harmful sound levels. Continued supervision in this field is therefore crucial.

  9. Apparatus and method for processing Korotkov sounds. [for blood pressure measurement

    NASA Technical Reports Server (NTRS)

    Golden, D. P., Jr.; Hoffler, G. W.; Wolthuis, R. A. (Inventor)

    1974-01-01

    A Korotkov sound processor, used in a noninvasive automatic blood pressure measuring system in which the brachial artery is occluded by an inflatable cuff, is disclosed. The Korotkov sound associated with the systolic event is determined when the ratio of the absolute value of a voltage signal representing Korotkov sounds in the range of 18 to 26 Hz to the maximum absolute peak value of the unfiltered signals first equals or exceeds a value of 0.45. The Korotkov sound associated with the diastolic event is determined when the ratio of the voltage signal of the Korotkov sounds in the range of 40 to 60 Hz to the absolute peak value of such signals within a single measurement cycle first falls below a value of 0.17. The processor signals the occurrence of the systolic and diastolic events, and these signals can be used to control a recorder to record pressure values for these events.
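
    A hedged sketch of the decision rule just described: the systolic event is flagged when the ratio of the 18-26 Hz Korotkov-sound amplitude to the peak of the unfiltered signal first reaches 0.45, and the diastolic event when the 40-60 Hz ratio first falls below 0.17. Filtering, beat segmentation, and signal conditioning are omitted, and the function and array names are assumptions for illustration.

        def detect_events(low_band_amps, high_band_amps, unfiltered_peak):
            """low_band_amps / high_band_amps: per-beat absolute amplitudes in the
            18-26 Hz and 40-60 Hz bands; unfiltered_peak: peak |signal| in the cycle."""
            systolic_beat = diastolic_beat = None
            for i, a in enumerate(low_band_amps):
                if a / unfiltered_peak >= 0.45:           # systolic criterion
                    systolic_beat = i
                    break
            peak_high = max(high_band_amps)
            start = 0 if systolic_beat is None else systolic_beat + 1
            for i in range(start, len(high_band_amps)):
                if high_band_amps[i] / peak_high < 0.17:  # diastolic criterion
                    diastolic_beat = i
                    break
            return systolic_beat, diastolic_beat

        # example with made-up per-beat amplitudes
        print(detect_events([0.1, 0.3, 0.5, 0.6], [0.2, 0.9, 1.0, 0.4, 0.1], unfiltered_peak=1.0))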

  10. Simulating cartilage conduction sound to estimate the sound pressure level in the external auditory canal

    NASA Astrophysics Data System (ADS)

    Shimokura, Ryota; Hosoi, Hiroshi; Nishimura, Tadashi; Iwakura, Takashi; Yamanaka, Toshiaki

    2015-01-01

    When the aural cartilage is made to vibrate it generates sound directly into the external auditory canal which can be clearly heard. Although the concept of cartilage conduction can be applied to various speech communication and music industrial devices (e.g. smartphones, music players and hearing aids), the conductive performance of such devices has not yet been defined because the calibration methods are different from those currently used for air and bone conduction. Thus, the aim of this study was to simulate the cartilage conduction sound (CCS) using a head and torso simulator (HATS) and a model of aural cartilage (polyurethane resin pipe) and compare the results with experimental ones. Using the HATS, we found the simulated CCS at frequencies above 2 kHz corresponded to the average measured CCS from seven subjects. Using a model of skull bone and aural cartilage, we found that the simulated CCS at frequencies lower than 1.5 kHz agreed with the measured CCS. Therefore, a combination of these two methods can be used to estimate the CCS with high accuracy.

  11. Ultrasonic Sound Velocity of Diopside Liquid Under High Pressure and High Temperature Conditions

    NASA Astrophysics Data System (ADS)

    Xu, M.; Jing, Z.; Chantel, J.; Yu, T.; Wang, Y.; Jiang, P.

    2017-12-01

    The equation of state (EOS) of silicate liquids is of great significance to the understanding of the dynamics and differentiation of the magmatic systems in Earth and other terrestrial planets. Sound velocity of silicate liquids measured at high pressure can provide direct information on the bulk modulus and its pressure derivative and hence tightly constrain the EOS of silicate liquids. In addition, the sound velocity data can be directly compared to seismic observations to infer the presence of melts in the mantle. While the sound velocity for silicate liquids at ambient pressure has been well established, high-pressure sound velocity data are still lacking due to experimental challenges. In this study, we successfully determined the sound velocities of diopside (CaMgSi2O6) liquid in a multi-anvil apparatus under high pressure-high temperature conditions from 1 to 4 GPa and 1973 to 2473 K by ultrasonic interferometry in conjunction with synchrotron X-ray techniques. Diopside was chosen because it is not only one of the most important phases in the Earth's upper mantle, but also an end-member composition of model basalt. It is thus an ideal simplified melt composition in the upper mantle. Furthermore, diopside liquid has been studied by ambient-pressure ultrasonic measurements (e.g., Ai and Lange, 2008) and shock-wave experiments at much higher pressure (e.g., Asimow and Ahrens, 2010), which allows comparison with our results over a large pressure range. Our high-pressure results on the sound velocity of Di liquid are consistent with the ambient-pressure data and show an increase of velocity with pressure (from 3039 m/s at 0.1 GPa to 4215 m/s at 3.5 GPa). Fitting to the Murnaghan EOS gives an isentropic bulk modulus (Ks) of 24.8 GPa and its pressure dependence (K'S) of 7.8. These are consistent with the results from shock-wave experiments on Di liquid (Asimow and Ahrens, 2010), indicating that the technique used in this study is capable of accurately
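
    As an illustrative back-of-the-envelope check (not the authors' fit), the quoted Murnaghan parameters Ks = 24.8 GPa and K's = 7.8, together with an assumed ambient liquid density of roughly 2600 kg/m^3, give sound speeds c = sqrt(Ks(P)/rho(P)) within a few percent of the reported 3039 m/s at 0.1 GPa and 4215 m/s at 3.5 GPa.

        def murnaghan_sound_speed(p_gpa, ks0=24.8, ksp=7.8, rho0=2600.0):
            ks = (ks0 + ksp * p_gpa) * 1e9                          # bulk modulus, Pa
            rho = rho0 * (1.0 + ksp * p_gpa / ks0) ** (1.0 / ksp)   # Murnaghan density, kg/m^3
            return (ks / rho) ** 0.5

        for p in (0.1, 3.5):
            print(p, round(murnaghan_sound_speed(p)), "m/s")        # ~3130 and ~4270 m/s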

  12. MP3 player listening sound pressure levels among 10 to 17 year old students.

    PubMed

    Keith, Stephen E; Michaud, David S; Feder, Katya; Haider, Ifaz; Marro, Leonora; Thompson, Emma; Marcoux, Andre M

    2011-11-01

    Using a manikin, equivalent free-field sound pressure level measurements were made from the portable digital audio players of 219 subjects, aged 10 to 17 years (93 males) at their typical and "worst-case" volume levels. Measurements were made in different classrooms with background sound pressure levels between 40 and 52 dBA. After correction for the transfer function of the ear, the median equivalent free field sound pressure levels and interquartile ranges (IQR) at typical and worst-case volume settings were 68 dBA (IQR = 15) and 76 dBA (IQR = 19), respectively. Self-reported mean daily use ranged from 0.014 to 12 h. When typical sound pressure levels were considered in combination with the average daily duration of use, the median noise exposure level, Lex, was 56 dBA (IQR = 18) and 3.2% of subjects were estimated to exceed the most protective occupational noise exposure level limit in Canada, i.e., 85 dBA Lex. Under worst-case listening conditions, 77.6% of the sample was estimated to listen to their device at combinations of sound pressure levels and average daily durations for which there is no known risk of permanent noise-induced hearing loss, i.e., ≤  75 dBA Lex. Sources and magnitudes of measurement uncertainties are also discussed.
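
    A hedged illustration of how a measured listening level and daily duration combine into the 8-h normalized exposure level Lex cited above, Lex = Leq + 10*log10(T/8). The example numbers are assumptions chosen to match the general magnitudes reported, not individual subject data.

        import math

        def lex_8h(leq_dba, hours_per_day):
            return leq_dba + 10 * math.log10(hours_per_day / 8.0)

        print(round(lex_8h(68.0, 0.5), 1))   # 0.5 h/day at 68 dBA -> Lex ~ 56 dBA
        print(round(lex_8h(76.0, 4.0), 1))   # 4 h/day at 76 dBA -> Lex ~ 73 dBA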

  13. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    NASA Astrophysics Data System (ADS)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high-pressure water jet on targets of different materials produces different mixtures of reflected sound. In order to reconstruct the distribution of the reflected sound signals along the linear detecting line accurately and to separate the environmental noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. An emulation experiment was designed in which the environmental noise was simulated using band-limited white noise and the reflected sound signal was simulated using a pulse signal. The attenuation of the reflected sound signal over different transmission distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals and were then whitened and separated by ICA. The final results verified that environmental-noise separation and reconstruction of the sound distribution along the detecting line can be realized effectively.
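
    A minimal FastICA sketch under assumptions analogous to the simulation described: one noise-like "environment" source and one pulse-like "reflection" source are mixed with different weights (standing in for different propagation distances) and then separated. This uses scikit-learn's FastICA, not the authors' implementation, and plain Gaussian noise as a stand-in for band-limited noise.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n = 5000
        noise = rng.normal(size=n)                 # stand-in for ambient noise
        pulses = np.zeros(n)
        pulses[::500] = 1.0                        # stand-in for the reflected pulse train
        S = np.c_[noise, pulses]

        A = np.array([[1.0, 0.6],                  # mixing weights ~ attenuation with distance
                      [0.8, 0.3]])
        X = S @ A.T                                # two simulated microphone channels

        ica = FastICA(n_components=2, random_state=0)
        S_est = ica.fit_transform(X)               # recovered sources (up to scale and order)
        print(S_est.shape)                         # (5000, 2)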

  14. An analysis of collegiate band directors' exposure to sound pressure levels

    NASA Astrophysics Data System (ADS)

    Roebuck, Nikole Moore

    Noise-induced hearing loss (NIHL) is a significant and unfortunately common occupational hazard. The purpose of the current study was to measure the magnitude of sound pressure levels generated within a collegiate band room and determine if those sound pressure levels are of a magnitude that exceeds the policy standards and recommendations of the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). In addition, reverberation times were measured and analyzed in order to determine the appropriateness of acoustical conditions for the band rehearsal environment. Sound pressure measurements were taken from the rehearsals of seven collegiate marching bands. Single-sample t tests were conducted to compare the sound pressure levels of all bands to the noise exposure standards of OSHA and NIOSH. Multiple regression analyses were conducted in order to determine the effect of the band room's conditions on the sound pressure levels and reverberation times. Time-weighted averages (TWA), noise percentage doses, and peak levels were also collected. The mean Leq for all band directors was 90.5 dBA. The total accumulated noise percentage dose for all band directors was 77.6% of the maximum allowable daily noise dose under the OSHA standard, and the total calculated TWA for all band directors was 88.2% of the maximum allowable daily noise dose under the OSHA standard. The total accumulated noise percentage dose for all band directors was 152.1% of the maximum allowable daily noise dose under the NIOSH standards, and the total calculated TWA for all band directors was 93 dBA under the NIOSH standard. Multiple regression analysis revealed that the room volume, the level of acoustical treatment, and the mean room reverberation time predicted 80% of the variance in sound pressure levels in this study.
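
    A hedged sketch of the standard daily-noise-dose arithmetic referred to above (OSHA: 90 dBA criterion with a 5-dB exchange rate; NIOSH: 85 dBA criterion with a 3-dB exchange rate). The exposure numbers are illustrative assumptions, not the study's data.

        def allowed_hours(level_dba, criterion, exchange_rate):
            return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

        def dose_percent(level_dba, hours, criterion, exchange_rate):
            return 100.0 * hours / allowed_hours(level_dba, criterion, exchange_rate)

        level, hours = 90.5, 2.0      # e.g., a 2-h rehearsal at Leq = 90.5 dBA
        print(round(dose_percent(level, hours, 90, 5), 1))   # OSHA daily dose, %
        print(round(dose_percent(level, hours, 85, 3), 1))   # NIOSH daily dose, %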

  15. The effects of alterations in the osseous external auditory canal on perceived sound quality.

    PubMed

    van Spronsen, Erik; Brienesse, Patrick; Ebbens, Fenna A; Waterval, Jerome J; Dreschler, Wouter A

    2015-10-01

    To evaluate the perceptual effect of the altered shape of the osseous external auditory canal (OEAC) on sound quality. Prospective study. Twenty subjects with normal hearing were presented with six simulated sound conditions representing the acoustic properties of six different ear canals (three normal ears and three cavities). The six different real ear unaided responses of these ear canals were used to filter Dutch sentences, resulting in six simulated sound conditions. A seventh unfiltered reference condition was used for comparison. Sound quality was evaluated using paired comparison ratings and a visual analog scale (VAS). Significant differences in sound quality were found between the normal and cavity conditions (all P < .001) using both the seven-point paired comparison rating and the VAS. No significant differences were found between the reference and normal conditions. Sound quality deteriorates when the OEAC is altered into a cavity. This proof of concept study shows that the altered acoustic quality of the OEAC after radical cavity surgery may lead to a clearly perceived deterioration in sound quality. Nevertheless, some questions remain about the extent to which these changes are affected by habituation and by other changes in middle ear anatomy and functionality. Level of Evidence: 4. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  16. Experimental and numerical characterization of the sound pressure in standing wave acoustic levitators

    NASA Astrophysics Data System (ADS)

    Stindt, A.; Andrade, M. A. B.; Albrecht, M.; Adamowski, J. C.; Panne, U.; Riedel, J.

    2014-01-01

    A novel method for predictions of the sound pressure distribution in acoustic levitators is based on a matrix representation of the Rayleigh integral. This method allows for a fast calculation of the acoustic field within the resonator. To make sure that the underlying assumptions and simplifications are justified, this approach was tested by a direct comparison to experimental data. The experimental sound pressure distributions were recorded by high spatially resolved frequency selective microphone scanning. To emphasize the general applicability of the two approaches, the comparative studies were conducted for four different resonator geometries. In all cases, the results show an excellent agreement, demonstrating the accuracy of the matrix method.
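
    A minimal sketch of the matrix form of the (first) Rayleigh integral mentioned above: the field pressure at each observation point is a linear combination of the normal velocities of small radiating surface elements, p = M v. Geometry, frequency, and velocities below are illustrative assumptions, not the levitator configurations studied.

        import numpy as np

        rho, c, f = 1.21, 343.0, 40e3          # air density, sound speed, drive frequency
        k = 2 * np.pi * f / c
        omega = 2 * np.pi * f

        # radiating surface: a small grid of elements in the z = 0 plane
        xs, ys = np.meshgrid(np.linspace(-5e-3, 5e-3, 10), np.linspace(-5e-3, 5e-3, 10))
        src = np.c_[xs.ravel(), ys.ravel(), np.zeros(xs.size)]
        dS = (10e-3 / 10) ** 2                 # area of each element
        v = np.ones(src.shape[0])              # uniform piston velocity, m/s

        # observation points along the axis of the source
        obs = np.c_[np.zeros(50), np.zeros(50), np.linspace(1e-3, 50e-3, 50)]

        R = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)   # element-to-point distances
        M = (1j * omega * rho / (2 * np.pi)) * np.exp(-1j * k * R) / R * dS
        p = M @ v                              # complex pressure at each observation point
        print(np.abs(p[:3]))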

  17. Hearing with an atympanic ear: good vibration and poor sound-pressure detection in the royal python, Python regius.

    PubMed

    Christensen, Christian Bech; Christensen-Dalsgaard, Jakob; Brandt, Christian; Madsen, Peter Teglberg

    2012-01-15

    Snakes lack both an outer ear and a tympanic middle ear, which in most tetrapods provide impedance matching between the air and inner ear fluids and hence improve pressure hearing in air. Snakes would therefore be expected to have very poor pressure hearing and generally be insensitive to airborne sound, whereas the connection of the middle ear bone to the jaw bones in snakes should confer acute sensitivity to substrate vibrations. Some studies have nevertheless claimed that snakes are quite sensitive to both vibration and sound pressure. Here we test the two hypotheses that: (1) snakes are sensitive to sound pressure and (2) snakes are sensitive to vibrations, but cannot hear the sound pressure per se. Vibration and sound-pressure sensitivities were quantified by measuring brainstem evoked potentials in 11 royal pythons, Python regius. Vibrograms and audiograms showed greatest sensitivity at low frequencies of 80-160 Hz, with sensitivities of -54 dB re. 1 m s(-2) and 78 dB re. 20 μPa, respectively. To investigate whether pythons detect sound pressure or sound-induced head vibrations, we measured the sound-induced head vibrations in three dimensions when snakes were exposed to sound pressure at threshold levels. In general, head vibrations induced by threshold-level sound pressure were equal to or greater than those induced by threshold-level vibrations, and therefore sound-pressure sensitivity can be explained by sound-induced head vibration. From this we conclude that pythons, and possibly all snakes, lost effective pressure hearing with the complete reduction of a functional outer and middle ear, but have an acute vibration sensitivity that may be used for communication and detection of predators and prey.

  18. Sound

    NASA Astrophysics Data System (ADS)

    Capstick, J. W.

    2013-01-01

    1. The nature of sound; 2. Elasticity and vibrations; 3. Transverse waves; 4. Longitudinal waves; 5. Velocity of longitudinal waves; 6. Reflection and refraction. Doppler's principle; 7. Interference. Beats. Combination tones; 8. Resonance and forced vibrations; 9. Quality of musical notes; 10. Organ pipes; 11. Rods. Plates. Bells; 12. Acoustical measurements; 13. The phonograph, microphone and telephone; 14. Consonance; 15. Definition of intervals. Scales. Temperament; 16. Musical instruments; 17. Application of acoustical principles to military purposes; Questions; Answers to questions; Index.

  19. The Measurement of the Oral and Nasal Sound Pressure Levels of Speech

    ERIC Educational Resources Information Center

    Clarke, Wayne M.

    1975-01-01

    A nasal separator was used to measure the oral and nasal components in the speech of a normal adult Australian population. Results indicated no difference in oral and nasal sound pressure levels for read versus spontaneous speech samples; however, females tended to have a higher nasal component than did males. (Author/TL)

  20. Tutorial and Guidelines on Measurement of Sound Pressure Level in Voice and Speech

    ERIC Educational Resources Information Center

    Švec, Jan G.; Granqvist, Svante

    2018-01-01

    Purpose: Sound pressure level (SPL) measurement of voice and speech is often considered a trivial matter, but the measured levels are often reported incorrectly or incompletely, making them difficult to compare among various studies. This article aims at explaining the fundamental principles behind these measurements and providing guidelines to…

  1. Variation of the Korotkoff Stethoscope Sounds During Blood Pressure Measurement: Analysis Using a Convolutional Neural Network.

    PubMed

    Pan, Fan; He, Peiyu; Liu, Chengyu; Li, Taiyong; Murray, Alan; Zheng, Dingchang

    2017-11-01

    Korotkoff sounds are known to change their characteristics during blood pressure (BP) measurement, resulting in some uncertainties for systolic and diastolic pressure (SBP and DBP) determinations. The aim of this study was to assess the variation of Korotkoff sounds during BP measurement by examining all stethoscope sounds associated with each heartbeat from above systole to below diastole during linear cuff deflation. Three repeat BP measurements were taken from 140 healthy subjects (age 21 to 73 years; 62 female and 78 male) by a trained observer, giving 420 measurements. During the BP measurements, the cuff pressure and stethoscope signals were simultaneously recorded digitally to a computer for subsequent analysis. Heartbeats were identified from the oscillometric cuff pressure pulses. The presence of each beat was used to create a time window (1 s, 2000 samples) centered on the oscillometric pulse peak for extracting beat-by-beat stethoscope sounds. A time-frequency two-dimensional matrix was obtained for the stethoscope sounds associated with each beat, and all beats between the manually determined SBPs and DBPs were labeled as "Korotkoff." A convolutional neural network was then used to analyze consistency in sound patterns that were associated with Korotkoff sounds. A 10-fold cross-validation strategy was applied to the stethoscope sounds from all 140 subjects, with the data from ten groups of 14 subjects being analyzed separately, allowing consistency to be evaluated between groups. Next, within-subject variation of the Korotkoff sounds analyzed from the three repeats was quantified, separately for each stethoscope sound beat. There was consistency between folds with no significant differences between groups of 14 subjects (P = 0.09 to P = 0.62). Our results showed that 80.7% beats at SBP and 69.5% at DBP were analyzed as Korotkoff sounds, with significant differences between adjacent beats at systole (13.1%, P = 0.001) and diastole (17.4%, P < 0

  2. Cuffless and Continuous Blood Pressure Estimation from the Heart Sound Signals

    PubMed Central

    Peng, Rong-Chao; Yan, Wen-Rong; Zhang, Ning-Ling; Lin, Wan-Hua; Zhou, Xiao-Lin; Zhang, Yuan-Ting

    2015-01-01

    Cardiovascular disease, like hypertension, is one of the top killers of human life and early detection of cardiovascular disease is of great importance. However, traditional medical devices are often bulky and expensive, and unsuitable for home healthcare. In this paper, we proposed an easy and inexpensive technique to estimate continuous blood pressure from the heart sound signals acquired by the microphone of a smartphone. A cold-pressor experiment was performed in 32 healthy subjects, with a smartphone to acquire heart sound signals and with a commercial device to measure continuous blood pressure. The Fourier spectrum of the second heart sound and the blood pressure were regressed using a support vector machine, and the accuracy of the regression was evaluated using 10-fold cross-validation. Statistical analysis showed that the mean correlation coefficients between the predicted values from the regression model and the measured values from the commercial device were 0.707, 0.712, and 0.748 for systolic, diastolic, and mean blood pressure, respectively, and that the mean errors were less than 5 mmHg, with standard deviations less than 8 mmHg. These results suggest that this technique is of potential use for cuffless and continuous blood pressure monitoring and it has promising application in home healthcare services. PMID:26393591
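
    The core of the method, regressing blood pressure on the Fourier spectrum of the second heart sound (S2) with a support vector machine and scoring it by 10-fold cross-validation, can be sketched as below. The S2 segmentation, feature length, and SVR hyperparameters are assumptions, not the paper's settings.

    ```python
    # Hedged sketch: SVM regression of blood pressure on the S2 magnitude
    # spectrum, evaluated by 10-fold cross-validation. Feature extraction
    # details are illustrative assumptions.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_predict

    def s2_spectrum(s2_segment, n_bins=128):
        """Magnitude spectrum of one S2 segment, truncated to n_bins features."""
        return np.abs(np.fft.rfft(s2_segment))[:n_bins]

    def fit_and_validate(s2_segments, bp_values):
        """Return correlation, mean error and SD of error against the reference BP."""
        X = np.array([s2_spectrum(s) for s in s2_segments])
        y = np.asarray(bp_values)          # e.g. systolic BP from the reference device
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
        y_hat = cross_val_predict(model, X, y, cv=10)
        err = y_hat - y
        return np.corrcoef(y_hat, y)[0, 1], err.mean(), err.std()
    ```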

  3. Cuffless and Continuous Blood Pressure Estimation from the Heart Sound Signals.

    PubMed

    Peng, Rong-Chao; Yan, Wen-Rong; Zhang, Ning-Ling; Lin, Wan-Hua; Zhou, Xiao-Lin; Zhang, Yuan-Ting

    2015-09-17

    Cardiovascular disease, like hypertension, is one of the top killers of human life and early detection of cardiovascular disease is of great importance. However, traditional medical devices are often bulky and expensive, and unsuitable for home healthcare. In this paper, we proposed an easy and inexpensive technique to estimate continuous blood pressure from the heart sound signals acquired by the microphone of a smartphone. A cold-pressor experiment was performed in 32 healthy subjects, with a smartphone to acquire heart sound signals and with a commercial device to measure continuous blood pressure. The Fourier spectrum of the second heart sound and the blood pressure were regressed using a support vector machine, and the accuracy of the regression was evaluated using 10-fold cross-validation. Statistical analysis showed that the mean correlation coefficients between the predicted values from the regression model and the measured values from the commercial device were 0.707, 0.712, and 0.748 for systolic, diastolic, and mean blood pressure, respectively, and that the mean errors were less than 5 mmHg, with standard deviations less than 8 mmHg. These results suggest that this technique is of potential use for cuffless and continuous blood pressure monitoring and it has promising application in home healthcare services.

  4. Sound produced by an oscillating arc in a high-pressure gas

    NASA Astrophysics Data System (ADS)

    Popov, Fedor K.; Shneider, Mikhail N.

    2017-08-01

    We suggest a simple theory to describe the sound generated by small periodic perturbations of a cylindrical arc in a dense gas. The theoretical analysis was done within the framework of the non-self-consistent channel arc model and supplemented with time-dependent gas-dynamic equations. It is shown that an arc whose power oscillates in amplitude by several percent is a source of sound whose intensity is comparable with that of the external ultrasound sources used to increase nanoparticle yield in high-pressure arc synthesis systems.

  5. Noninvasive measurement of beat-to-beat arterial blood pressure by the Korotkoff sound delay time.

    PubMed

    Xiang, Haiyan; Liu, Yanyong; Li, Yinhua; Qin, Yufei; Yu, Mengsun

    2012-02-01

    To propose a novel noninvasive beat-to-beat arterial blood pressure measurement method based on the Korotkoff sound delay time (KDT) and evaluate its accuracy in preliminary experiments. KDT decreases as the cuff pressure P is deflated, which can be described by a function KDT = f(P). KDT is in fact a function of the arterial transmural pressure; therefore, the variation in blood pressure can be obtained from the transmural pressure, which is estimated from the KDT. Holding the cuff pressure approximately constant between systolic and diastolic pressure, the variation in blood pressure ΔEBP between successive heartbeats can be estimated from KDT and f'(P), which represents the variation of KDT per unit pressure. The blood pressure for each heartbeat can then be obtained by accumulating the ΔEBP. Invasive and noninvasive blood pressure values of six participants were measured simultaneously to evaluate the method. The average of the correlation coefficients between the invasive mean arterial pressure (MAP) and the KDT for the six participants was -0.91. The average of the correlation coefficients between the invasive MAP and the estimated mean blood pressure (EBP) was 0.92. The mean difference between EBP and MAP was 0.51 mmHg, and the SD was 2.65 mmHg. The mean blood pressure estimated from the KDT is consistent with the invasive MAP. The beat-to-beat blood pressure estimated from KDT provides an accurate estimate of MAP in these preliminary experiments and represents a potentially acceptable alternative to invasive blood pressure monitoring during laboratory studies.
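
    The accumulation step described above (beat-to-beat change ΔEBP estimated from the change in KDT divided by the local slope f'(P), then summed) can be sketched as below. The sign convention and the source of the calibration slope are assumptions; the study's actual implementation may differ.

    ```python
    # Hedged sketch of the accumulation step only. f'(P) is the slope of the
    # calibration curve KDT = f(P) at the hold pressure; the sign convention
    # and calibration details are assumptions, not the authors' exact method.
    import numpy as np

    def beat_to_beat_bp(kdt, f_prime_at_hold, bp0):
        """kdt: per-beat delay times (s) at a constant cuff hold pressure;
        f_prime_at_hold: dKDT/dP at the hold pressure (s per mmHg);
        bp0: blood pressure at the first beat (mmHg), e.g. from a cuff reading."""
        delta_kdt = np.diff(np.asarray(kdt, dtype=float))
        delta_ebp = delta_kdt / f_prime_at_hold          # per-beat BP change
        return bp0 + np.concatenate(([0.0], np.cumsum(delta_ebp)))
    ```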

  6. Liquid mercury sound velocity measurements under high pressure and high temperature by picosecond acoustics in a diamond anvils cell

    NASA Astrophysics Data System (ADS)

    Decremps, F.; Belliard, L.; Couzinet, B.; Vincent, S.; Munsch, P.; Le Marchand, G.; Perrin, B.

    2009-07-01

    Recent improvements to measure ultrasonic sound velocities of liquids under extreme conditions are described. Principle and feasibility of picosecond acoustics in liquids embedded in a diamond anvils cell are given. To illustrate the capability of these advances in the sound velocity measurement technique, original high pressure and high temperature results on the sound velocity of liquid mercury up to 5 GPa and 575 K are given. This high pressure technique will certainly be useful in several fundamental and applied problems in physics and many other fields such as geophysics, nonlinear acoustics, underwater sound, petrology or physical acoustics.

  7. Diversity in sound pressure levels and estimated active space of resident killer whale vocalizations.

    PubMed

    Miller, Patrick J O

    2006-05-01

    Signal source intensity and detection range, which integrates source intensity with propagation loss, background noise and receiver hearing abilities, are important characteristics of communication signals. Apparent source levels were calculated for 819 pulsed calls and 24 whistles produced by free-ranging resident killer whales by triangulating the angles-of-arrival of sounds on two beamforming arrays towed in series. Levels in the 1-20 kHz band ranged from 131 to 168 dB re 1 microPa at 1 m, with differences in the means of different sound classes (whistles: 140.2+/-4.1 dB; variable calls: 146.6+/-6.6 dB; stereotyped calls: 152.6+/-5.9 dB), and among stereotyped call types. Repertoire diversity carried through to estimates of active space, with "long-range" stereotyped calls all containing overlapping, independently-modulated high-frequency components (mean estimated active space of 10-16 km in sea state zero) and "short-range" sounds (5-9 km) included all stereotyped calls without a high-frequency component, whistles, and variable calls. Short-range sounds are reported to be more common during social and resting behaviors, while long-range stereotyped calls predominate in dispersed travel and foraging behaviors. These results suggest that variability in sound pressure levels may reflect diverse social and ecological functions of the acoustic repertoire of killer whales.

  8. Tutorial and Guidelines on Measurement of Sound Pressure Level in Voice and Speech.

    PubMed

    Švec, Jan G; Granqvist, Svante

    2018-03-15

    Sound pressure level (SPL) measurement of voice and speech is often considered a trivial matter, but the measured levels are often reported incorrectly or incompletely, making them difficult to compare among various studies. This article aims at explaining the fundamental principles behind these measurements and providing guidelines to improve their accuracy and reproducibility. Basic information is put together from standards, technical, voice and speech literature, and practical experience of the authors and is explained for nontechnical readers. Variation of SPL with distance, sound level meters and their accuracy, frequency and time weightings, and background noise topics are reviewed. Several calibration procedures for SPL measurements are described for stand-mounted and head-mounted microphones. SPL of voice and speech should be reported together with the mouth-to-microphone distance so that the levels can be related to vocal power. Sound level measurement settings (i.e., frequency weighting and time weighting/averaging) should always be specified. Classified sound level meters should be used to assure measurement accuracy. Head-mounted microphones placed at the proximity of the mouth improve signal-to-noise ratio and can be taken advantage of for voice SPL measurements when calibrated. Background noise levels should be reported besides the sound levels of voice and speech.
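
    Two relations underpin most of the guidelines above: SPL from RMS pressure re 20 µPa, and the free-field distance correction used to refer a level measured at one mouth-to-microphone distance to another. A minimal sketch, assuming spherical spreading from a point source:

    ```python
    # SPL from RMS pressure (re 20 µPa) and the free-field distance correction
    # (spherical spreading assumed). A minimal illustration of the relations
    # the tutorial builds on, not its measurement procedure.
    import math

    P_REF = 20e-6  # Pa, reference pressure for SPL in air

    def spl_db(p_rms):
        """Sound pressure level in dB re 20 µPa."""
        return 20.0 * math.log10(p_rms / P_REF)

    def refer_to_distance(spl_at_d1, d1, d2):
        """Level at distance d2 given the level at d1 (free field, point source)."""
        return spl_at_d1 + 20.0 * math.log10(d1 / d2)

    # Example: 1 Pa RMS at 0.30 m is ~94 dB SPL; at 0.15 m it would be ~100 dB.
    print(round(spl_db(1.0)), round(refer_to_distance(spl_db(1.0), 0.30, 0.15)))
    ```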

  9. Self-demodulation of amplitude-modulated signal components in amplitude-modulated bone-conducted ultrasonic hearing

    NASA Astrophysics Data System (ADS)

    Ito, Kazuhito; Nakagawa, Seiji

    2015-07-01

    A novel hearing aid system utilizing amplitude-modulated bone-conducted ultrasound (AM-BCU) is being developed for use by profoundly deaf people. However, there is a lack of research on the acoustic aspects of AM-BCU hearing. In this study, acoustic fields in the ear canal under AM-BCU stimulation were examined with respect to the self-demodulation effect of amplitude-modulated signal components generated in the ear canal. We found self-demodulated signals with an audible sound pressure level related to the amplitude-modulated signal components of bone-conducted ultrasonic stimulation. In addition, the increases in the self-demodulated signal levels at low frequencies in the ear canal after occluding the ear canal opening, i.e., the positive occlusion effect, indicate the existence of a pathway by which the self-demodulated signals pass through the aural cartilage and soft tissue, and radiate into the ear canal.

  10. Resonant tube for measurement of sound absorption in gases at low frequency/pressure ratios

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J.; Griffin, W. A.

    1980-01-01

    The paper describes a resonant tube for measuring sound absorption in gases, with specific emphasis on the vibrational relaxation peak of N2, over a range of frequency/pressure ratios from 0.1 to 2500 Hz/atm. The experimental background losses measured in argon agree with the theoretical wall losses except at few isolated frequencies. Rigid cavity terminations, external excitation, and a differential technique of background evaluation were used to minimize spurious contributions to the background losses. Room temperature measurements of sound absorption in binary mixtures of N2-CO2 in which both components are excitable resulted in the maximum frequency/pressure ratio in Hz/atm of 0.063 + 123m for the N2 vibrational relaxation peak, where m is mole percent of added CO2; the maximum ratio for the CO2 peak was 34,500 268m where m is mole percent of added N2.

  11. Comparison of measured and calculated sound pressure levels around a large horizontal axis wind turbine generator

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P.; Willshire, William L., Jr.; Hubbard, Harvey H.

    1989-01-01

    Results are reported from a large number of simultaneous acoustic measurements around a large horizontal axis downwind configuration wind turbine generator. In addition, comparisons are made between measurements and calculations of both the discrete frequency rotational harmonics and the broad band noise components. Sound pressure time histories and noise radiation patterns as well as narrow band and broadband noise spectra are presented for a range of operating conditions. The data are useful for purposes of environmental impact assessment.

  12. Revisit of the relationship between the elastic properties and sound velocities at high pressures

    SciT

    Wang, Chenju; Yan, Xiaozhen; Institute of Atomic and Molecular Sciences, Sichuan University, Chengdu 610065

    2014-09-14

    The second-order elastic constants and stress-strain coefficients are defined, respectively, as the second derivatives of the total energy and the first derivative of the stress with respect to strain. Since the Lagrangian and infinitesimal strains are both commonly used in the two definitions above, the second-order elastic constants and stress-strain coefficients each separate into two categories. In general, any of the four physical quantities is employed to characterize the elastic properties of materials without differentiation. Nevertheless, differences may exist among them at non-zero pressures, especially high pressures. Having explored this confusing issue systematically in the present work, we find that the four quantities are indeed different from each other at high pressures and that these differences depend on the initial stress applied to the materials. Moreover, the various relations between the four quantities describing the elastic properties of materials and the high-pressure sound velocities are also derived from the elastic wave equations. As examples, we calculated the high-pressure sound velocities of cubic tantalum and hexagonal rhenium using these relations. The excellent agreement of our results with available experimental data suggests the general applicability of the relations.

  13. Sound velocity measurements of CaSiO3 perovskite under lower mantle pressures

    NASA Astrophysics Data System (ADS)

    Kudo, Y.; Hirose, K.

    2010-12-01

    The chemical composition of the lower mantle and the distribution of subducted crustal materials in the lower mantle can be constrained by comparing seismological observations with laboratory measurements of the sound velocities of expected constituent minerals at lower mantle conditions. To date, the sound velocities of two major constituent minerals of the lower mantle, namely magnesium silicate perovskite and ferropericlase, have been well studied, although the data are mostly limited to low temperature (300 K). On the other hand, another major mineral, CaSiO3 perovskite, appears in both peridotite (~7 wt.%) and subducted basaltic crust (~23 wt.%) at lower mantle pressure-temperature conditions. In spite of its abundance in those rocks, little is known about its acoustic velocity, mostly because it cannot be quenched to ambient pressure. Synthesis and measurement must therefore be made under pressure, which has been challenging for current experimental techniques. We have conducted sound velocity measurements of polycrystalline CaSiO3 perovskite by a combination of a diamond anvil cell (DAC) and Brillouin scattering spectroscopy. High pressure was generated by the DAC with a pair of 300-micron culet diamond anvils. Calcium silicate perovskite was synthesized from a gel by laser annealing in the DAC with a CO2 laser. A tetragonal perovskite structure was confirmed by X-ray diffraction at beamline BL10XU, SPring-8. Brillouin scattering measurements were made at 300 K under pressures corresponding to middle lower mantle conditions. Results demonstrate that the S-wave velocity is significantly lower than previous theoretical results. We will discuss possible sources for this discrepancy and the resulting implications for lower mantle materials.

  14. Sound Pressure Levels Measured in a University Concert Band: A Risk of Noise-Induced Hearing Loss?

    ERIC Educational Resources Information Center

    Holland, Nicholas V., III

    2008-01-01

    Researchers have reported public school band directors as experiencing noise-induced hearing loss. Little research has focused on collegiate band directors and university student musicians. The present study measures the sound pressure levels generated within a university concert band and compares sound levels with the criteria set by the…

  15. Underwater sound pressure variation and bottlenose dolphin (Tursiops truncatus) hearing thresholds in a small pool.

    PubMed

    Finneran, James J; Schlundt, Carolyn E

    2007-07-01

    Studies of underwater hearing are often hampered by the behavior of sound waves in small experimental tanks. At lower frequencies, tank dimensions are often not sufficient for free field conditions, resulting in large spatial variations of sound pressure. These effects may be mitigated somewhat by increasing the frequency bandwidth of the sound stimulus, so effects of multipath interference average out over many frequencies. In this study, acoustic fields and bottlenose dolphin (Tursiops truncatus) hearing thresholds were compared for pure tone and frequency modulated signals. Experiments were conducted in a vinyl-walled, seawater-filled pool approximately 3.7 x 6 x 1.5 m. Acoustic signals were pure tone and linear and sinusoidal frequency modulated tones with bandwidths/modulation depths of 1%, 2%, 5%, 10%, and 20%. Thirteen center frequencies were tested between 1 and 100 kHz. Acoustic fields were measured (without the dolphin present) at three water depths over a 60 x 65 cm grid with a 5-cm spacing. Hearing thresholds were measured using a behavioral response paradigm and up/down staircase technique. The use of FM signals significantly improved the sound field without substantially affecting the measured hearing thresholds.

  16. A geospatial model of ambient sound pressure levels in the contiguous United States.

    PubMed

    Mennitt, Daniel; Sherrill, Kirk; Fristrup, Kurt

    2014-05-01

    This paper presents a model that predicts measured sound pressure levels using geospatial features such as topography, climate, hydrology, and anthropogenic activity. The model utilizes random forest, a tree-based machine learning algorithm, which does not incorporate a priori knowledge of source characteristics or propagation mechanics. The response data encompasses 270 000 h of acoustical measurements from 190 sites located in National Parks across the contiguous United States. The explanatory variables were derived from national geospatial data layers and cross validation procedures were used to evaluate model performance and identify variables with predictive power. Using the model, the effects of individual explanatory variables on sound pressure level were isolated and quantified to reveal systematic trends across environmental gradients. Model performance varies by the acoustical metric of interest; the seasonal L50 can be predicted with a median absolute deviation of approximately 3 dB. The primary application for this model is to generalize point measurements to maps expressing spatial variation in ambient sound levels. An example of this mapping capability is presented for Zion National Park and Cedar Breaks National Monument in southwestern Utah.
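
    A minimal sketch of the modelling approach follows: a random-forest regressor mapping geospatial explanatory variables to a measured ambient level such as the seasonal L50, scored by cross-validated median absolute deviation. The feature names and hyperparameters are illustrative assumptions, not the study's actual data layers or settings.

    ```python
    # Hedged sketch: random-forest regression of a measured ambient level on
    # geospatial predictors, scored by cross-validated median absolute deviation.
    # Column names are placeholders for whatever layers are available.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_predict

    def fit_ambient_model(sites: pd.DataFrame):
        """sites: one row per measurement site with predictor columns and 'L50_dB'."""
        features = ["elevation", "slope", "precip", "dist_to_road", "pop_density"]
        X, y = sites[features].to_numpy(), sites["L50_dB"].to_numpy()
        model = RandomForestRegressor(n_estimators=500, random_state=0)
        y_hat = cross_val_predict(model, X, y, cv=10)
        mad = np.median(np.abs(y_hat - y))   # median absolute deviation, in dB
        return model.fit(X, y), mad
    ```

    The fitted model can then be evaluated on a regular grid of the same predictors to produce the kind of ambient sound level map described for Zion and Cedar Breaks.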

  17. Noise trauma induced by a mousetrap--sound pressure level measurement of vole captive bolt devices.

    PubMed

    Frank, Matthias; Napp, Matthias; Lange, Joern; Grossjohann, Rico; Ekkernkamp, Axel; Beule, Achim G

    2010-05-01

    While ballistic parameters of vole captive bolt devices have been reported, there is no investigation on their hazardous potential to cause noise trauma. The aim of this experimental study was to measure the sound pressure levels of vole captive bolt devices. Two different shooting devices were examined with a modular precision sound level meter on an outdoor firing range. Measurements were taken in a semi-circular configuration with measuring points 0 degrees in front of the muzzle, 90 degrees at right angle of the muzzle, and 180 degrees behind the shooting device. Distances between muzzle and microphone were 0.5, 1, 2, 10, and 20 m. Sound pressure levels exceeded 130 dB(C) at any measuring point within the 20-m area. Highest measurements (more than 172 dB[C]) were taken in the 0 degrees direction at the 0.5-m distance for both shooting devices proving the hazardous potential of these gadgets to cause noise trauma.

  18. Pressure Contact Sounding Data for NASA's Atmospheric Variability Experiment (AVE 3)

    NASA Technical Reports Server (NTRS)

    Fuelberg, H. E.; Hill, C. K.; Turner, R. E.; Long, K. E.

    1975-01-01

    The basic rawinsonde data are described at each pressure contact from the surface to sounding termination for the 41 stations participating in the AVE III measurement program, which began at 0000 GMT on February 6 and ended at 1200 GMT on February 7, 1975. Soundings were taken at 3-hour intervals during much of the experiment from most stations within the United States east of about 105 degrees west longitude. Methods of data processing, changes in the reduction scheme since the AVE II pilot experiment, and data accuracy are briefly discussed. An example of contact data is presented, and microfiche cards of all the contact data are included in the appendix. The AVE III project was conducted to better understand and establish the extent of applications for meteorological satellite sensor data through correlative ground truth experiments and to provide basic experimental data for use in studies of atmospheric scale-of-motion interrelationships.

  19. Effects of coordination and pressure on sound attenuation, boson peak and elasticity in amorphous solids.

    PubMed

    DeGiuli, Eric; Laversanne-Finot, Adrien; Düring, Gustavo; Lerner, Edan; Wyart, Matthieu

    2014-08-14

    Connectedness and applied stress strongly affect elasticity in solids. In various amorphous materials, mechanical stability can be lost either by reducing connectedness or by increasing pressure. We present an effective medium theory of elasticity that extends previous approaches by incorporating the effect of compression, of amplitude e, allowing one to describe quantitative features of sound propagation, transport, the boson peak, and elastic moduli near the elastic instability occurring at a compression ec. The theory disentangles several frequencies characterizing the vibrational spectrum: the onset frequency where strongly-scattered modes appear in the vibrational spectrum, the pressure-independent frequency ω* where the density of states displays a plateau, the boson peak frequency ωBP found to scale as , and the Ioffe-Regel frequency ωIR where scattering length and wavelength become equal. We predict that sound attenuation crosses over from ω(4) to ω(2) behaviour at ω0, consistent with observations in glasses. We predict that a frequency-dependent length scale ls(ω) and speed of sound ν(ω) characterize vibrational modes, and could be extracted from scattering data. One key result is the prediction of a flat diffusivity above ω0, in agreement with previously unexplained observations. We find that the shear modulus does not vanish at the elastic instability, but drops by a factor of 2. We check our predictions in packings of soft particles and study the case of covalent networks and silica, for which we predict ωIR ≈ ωBP. Overall, our approach unifies sound attenuation, transport and length scales entering elasticity in a single framework where disorder is not the main parameter controlling the boson peak, in agreement with observations. This framework leads to a phase diagram where various glasses can be placed, connecting microscopic structure to vibrational properties.

  20. An Ultrasonic Frequency Sweep Interferometer For Sound Speed Measurements On Liquids At High Temperature And Pressure

    NASA Astrophysics Data System (ADS)

    Ai, Y.; Lange, R. A.

    2003-12-01

    One of the most direct methods for obtaining melt compressibility is through measurements of sound speed via acoustic interferometry. This technique may be applied to silicate melts by either varying the path length or the frequency of the acoustic wave through the melt. To date, only the variable path length (VPL) technique has been applied, which restricts measurements to atmospheric pressure owing to the requirement of mechanical movement of the upper buffer rod. This, in turn, precludes the study of volatile-bearing liquids at pressure and a systematic study of how melt compressibility varies with pressure. We have developed a frequency sweep (FS) interferometer that can be applied at high pressure, which is based on frequency spectrum analysis on mirror reflection waves from high-temperature liquids. First, a theoretical acoustic model for a rod-liquid-rod (RLR) interferometer is proposed and solutions to the resultant wave equations are obtained. The solutions demonstrate that only two kinds of non-dispersive waves exist within the upper buffer rod. They have computable group velocities and waveform patterns that are entirely dependent on the material and diameter of the buffer rods. Experimental tests verify the theoretical model and indicate that buffer rods made of molybdenum metal and > 1.9 cm diameter are ideal for sound speed measurements in silicate melts with the FS interferometer. On the basis of the theoretical acoustic model, a mechanical assembly and signal-processing algorithm was designed to implement the FS interferometer. A very short pulse (e.g. 1 microsecond) encompassing a range of frequencies that span about 1 MHz is sent down the upper buffer rod and the first two mirror reflections from the liquid are collected and stored. Because they have the same waveform and have 180o phase difference, Fourier spectrum analysis can be performed to find the frequency response function of the two reflections, which is related to the sound speed and

  1. Sound Pressures and Correlations of Noise on the Fuselage of a Jet Aircraft in Flight

    NASA Technical Reports Server (NTRS)

    Shattuck, Russell D.

    1961-01-01

    Tests were conducted at altitudes of 10,000, 20,000, and 30,000 feet at speeds of Mach 0.4, 0.6, and 0.8. It was found that the sound pressure levels on the aft fuselage of a jet aircraft in flight can be estimated using an equation involving the true airspeed and the free air density. The cross-correlation coefficient over a spacing of 2.5 feet was generalized with the Strouhal number. The spectrum of the noise in flight is comparatively flat up to 10,000 cycles per second.

  2. Exploratory investigation of sound pressure level in the wake of an oscillating airfoil in the vicinity of stall

    NASA Technical Reports Server (NTRS)

    Gray, R. B.; Pierce, G. A.

    1972-01-01

    Wind tunnel tests were performed on two oscillating two-dimensional lifting surfaces. The first of these models had an NACA 0012 airfoil section while the second simulated the classical flat plate. Both of these models had a mean angle of attack of 12 degrees while being oscillated in pitch about their midchord with a double amplitude of 6 degrees. Wake surveys of sound pressure level were made over a frequency range from 16 to 32 Hz and at various free stream velocities up to 100 ft/sec. The sound pressure level spectrum indicated significant peaks in sound intensity at the oscillation frequency and its first harmonic near the wake of both models. From a comparison of these data with that of a sound level meter, it is concluded that most of the sound intensity is contained within these peaks and no appreciable peaks occur at higher harmonics. It is concluded that within the wake the sound intensity is largely pseudosound while at one chord length outside the wake, it is largely true vortex sound. For both the airfoil and flat plate the peaks appear to be more strongly dependent upon the airspeed than on the oscillation frequency. Therefore reduced frequency does not appear to be a significant parameter in the generation of wake sound intensity.

  3. Sound Velocities of Iron-Nickel and Iron-Nickel-Silicon Alloys at High Pressure

    NASA Astrophysics Data System (ADS)

    Miller, R. A.; Jackson, J. M.; Sturhahn, W.; Zhao, J.; Murphy, C. A.

    2014-12-01

    Seismological and cosmochemical studies suggest Earth's core is primarily composed of iron with ~5 to 10 wt% nickel and some light elements [e.g. 1]. To date, the concentration of nickel and the amount and identity of light elements remain poorly constrained due in part to the difficulty of conducting experimental measurements at core conditions. The vibrational properties of a variety of iron alloys, paired with seismic observations, can help better constrain the composition of the core. We directly measured the partial phonon density of states of bcc- and hcp-structured Fe0.9Ni0.1 and Fe0.85Ni0.1Si0.05 at high pressures. The samples were compressed using a panoramic diamond anvil cell. A subset of the experiments was conducted using neon as a pressure transmitting medium. Measurements of high statistical quality were performed with nuclear resonant inelastic x-ray scattering (NRIXS) at sector 3-ID-B of the Advanced Photon Source [2, 3, 4]. The unit cell volume of each sample was determined at each compression point with in-situ x-ray diffraction at sector 3-ID-B before and after each NRIXS measurement. The Debye, compressional, and shear sound velocities were determined from the low energy region of the partial phonon density of states paired with the volume measurements. We will present partial phonon density of states and sound velocities for Fe0.9Ni0.1 and Fe0.85Ni0.1Si0.05 at high pressure and compare them with those of pure iron. References: [1] McDonough, W.F. (2004): Compositional Model for the Earth's Core. Elsevier Ltd., Oxford. [2] Murphy, C.A., J.M. Jackson, W. Sturhahn, and B. Chen (2011): Melting and thermal pressure of hcp-Fe from the phonon density of states, Phys. Earth Planet. Int., doi:10.1016/j.pepi.2011.07.001. [3] Murphy, C.A., J.M. Jackson, W. Sturhahn, and B. Chen (2011): Grüneisen parameter of hcp-Fe to 171 GPa, Geophys. Res. Lett., doi:10.1029/2011GL049531. [4] Murphy, C.A., J.M. Jackson, and W. Sturhahn (2013): Experimental constraints on the

  4. Encoding of speech sounds at auditory brainstem level in good and poor hearing aid performers.

    PubMed

    Shetty, Hemanth Narayan; Puttabasappa, Manjula

    Hearing aids are prescribed to alleviate loss of audibility. It has been reported that about 31% of hearing aid users reject their own hearing aid because of annoyance with background noise. The reason for dissatisfaction can lie anywhere from the hearing aid microphone to the integrity of the neurons along the auditory pathway. The aim was to measure spectra from the output of the hearing aid at the ear canal and the frequency following response recorded at the auditory brainstem in individuals with hearing impairment. A total of sixty participants with moderate sensorineural hearing impairment, aged 15 to 65 years, were involved. Each participant was classified as either a good or a poor hearing aid performer based on the acceptable noise level measure. Stimuli /da/ and /si/ were presented through a loudspeaker at 65 dB SPL. At the ear canal, spectra were measured in the unaided and aided conditions. At the auditory brainstem, frequency following responses were recorded to the same stimuli. The spectrum measured in each condition at the ear canal was the same in good and poor hearing aid performers. At the brainstem level, F0 encoding was better, and F0 and F1 energies were significantly higher, in good hearing aid performers than in poor hearing aid performers. Although the hearing aid spectra were almost the same between good and poor hearing aid performers, subtle physiological variations exist at the auditory brainstem. The results of the present study suggest that neural encoding of speech sounds at the brainstem level might be mediated distinctly in good hearing aid performers compared with poor hearing aid performers. Thus, it can be inferred that subtle physiological changes are evident at the auditory brainstem in persons who are willing to accept noise compared with those who are not.

  5. Design and development of second order MEMS sound pressure gradient sensor

    NASA Astrophysics Data System (ADS)

    Albahri, Shehab

    The design and development of a second order MEMS sound pressure gradient sensor is presented in this dissertation. Inspired by the directional hearing ability of the parasitoid fly, Ormia ochracea, a novel first order directional microphone that mimics the mechanical structure of the fly's ears and detects the sound pressure gradient has been developed. While the first order directional microphones can be very beneficial in a large number of applications, there is great potential for remarkable improvements in performance through the use of second order systems. The second order directional microphone is able to provide a theoretical improvement in Sound to Noise ratio (SNR) of 9.5dB, compared to the first-order system that has its maximum SNR of 6dB. Although second order microphone is more sensitive to sound angle of incidence, the nature of the design and fabrication process imposes different factors that could lead to deterioration in its performance. The first Ormia ochracea second order directional microphone was designed in 2004 and fabricated in 2006 at Binghamton University. The results of the tested parts indicate that the Ormia ochracea second order directional microphone performs mostly as an Omni directional microphone. In this work, the previous design is reexamined and analyzed to explain the unexpected results. A more sophisticated tool implementing a finite element package ANSYS is used to examine the previous design response. This new tool is used to study different factors that used to be ignored in the previous design, mainly; response mismatch and fabrication uncertainty. A continuous model using Hamilton's principle is introduced to verify the results using the new method. Both models agree well, and propose a new way for optimizing the second order directional microphone using geometrical manipulation. In this work we also introduce a new fabrication process flow to increase the fabrication yield. The newly suggested method uses the shell

  6. Sound levels, hearing habits and hazards of using portable cassette players

    NASA Astrophysics Data System (ADS)

    Hellström, P.-A.; Axelsson, A.

    1988-12-01

    The maximum output sound pressure level (SPL) from different types of portable cassette players (PCPs) and different headphones was analyzed using KEMAR in one-third octave bands. The equivalent free-field dB(A) level (EqA-FFSL) was computed from the one-third octave bands corrected by the free-field-to-eardrum transfer function. The dB(A) level varied from 104 dB for a low-cost PCP with supra-aural headphones (earphones with headbands and foam pads fitting against the pinna) to 126 dB for a high-quality PCP with semi-aural headphones (small earphones without headbands to be used in the concha of the external ear). The cassette tapes used in this study were recorded with music, white noise, narrowband noise and pure tones. The equivalent and maximum SPLs were measured in the ear canal (1 mm from the eardrum) with mini-microphones in 15 young subjects listening to pop music from PCPs at the highest level they considered comfortable. These SPL measurements corresponded to 112 dB(A) in the free field. In a temporary threshold shift (TTS) study, ten teenagers (four girls and six boys) listened to pop music for 1 h with PCPs at a level they enjoyed. The mean TTS was 5-10 dB for frequencies between 1 and 8 kHz. In one subject the maximum TTS was 35 dB at 5-6 kHz. To acquire information about listening habits among youngsters using PCPs, 154 seventh and eighth graders (age 14-15) were interviewed. They used PCPs much less than expected during most of the year, but an increase was reported during the summer holidays.

  7. Auditory mechanics in a bush-cricket: direct evidence of dual sound inputs in the pressure difference receiver

    PubMed Central

    Montealegre-Z, Fernando; Soulsbury, Carl D.; Robson Brown, Kate A.; Robert, Daniel

    2016-01-01

    The ear of the bush-cricket, Copiphora gorgonensis, consists of a system of paired eardrums (tympana) on each foreleg. In these insects, the ear is backed by an air-filled tube, the acoustic trachea (AT), which transfers sound from the prothoracic acoustic spiracle to the internal side of the eardrums. Both surfaces of the eardrums of this auditory system are exposed to sound, making it a directionally sensitive pressure difference receiver. A key feature of the AT is its capacity to reduce the velocity of sound propagation and alter the acoustic driving forces at the tympanum. The mechanism responsible for reduction in sound velocity in the AT remains elusive, yet it is deemed to depend on adiabatic or isothermal conditions. To investigate the biophysics of such multiple input ears, we used micro-scanning laser Doppler vibrometry and micro-computed X-ray tomography. We measured the velocity of sound propagation in the AT, the transmission gains across auditory frequencies and the time-resolved mechanical dynamics of the tympanal membranes in C. gorgonensis. Tracheal sound transmission generates a gain of approximately 15 dB SPL, and a propagation velocity of ca 255 m s−1, an approximately 25% reduction from free field propagation. Modelling tracheal acoustic behaviour that accounts for thermal and viscous effects, we conclude that reduction in sound velocity within the AT can be explained, among others, by heat exchange between the sound wave and the tracheal walls. PMID:27683000

  8. Measurement of sound pressure and temperature in tissue-mimicking material using an optical fiber Bragg grating sensor.

    PubMed

    Imade, Keisuke; Kageyama, Takashi; Koyama, Daisuke; Watanabe, Yoshiaki; Nakamura, Kentaro; Akiyama, Iwaki

    2016-10-01

    The experimental investigation of an optical fiber Bragg grating (FBG) sensor for biomedical application is described. The FBG sensor can be used to measure sound pressure and temperature rise simultaneously in biological tissues exposed to ultrasound. The theoretical maximum values that can be measured with the FBG sensor are 73.0 MPa and 30 °C. In this study, measurement of sound pressure up to 5 MPa was performed at an ultrasound frequency of 2 MHz. A maximum temperature change of 6 °C was measured in a tissue-mimicking material. Values yielded by the FBG sensor agreed with those measured using a thermocouple and a hydrophone. Since this sensor is used to monitor the sound pressure and temperature simultaneously, it can also be used for industrial applications, such as ultrasonic cleaning of semiconductors under controlled temperatures.

  9. So small, so loud: extremely high sound pressure level from a pygmy aquatic insect (Corixidae, Micronectinae).

    PubMed

    Sueur, Jérôme; Mackie, David; Windmill, James F C

    2011-01-01

    To communicate at long range, animals have to produce intense but intelligible signals. This task might be difficult to achieve due to mechanical constraints, in particular relating to body size. Whilst the acoustic behaviour of large marine and terrestrial animals has been thoroughly studied, very little is known about the sound produced by small arthropods living in freshwater habitats. Here we analyse for the first time the calling song produced by the male of a small insect, the water boatman Micronecta scholtzi. The song is made of three distinct parts differing in their temporal and amplitude parameters, but not in their frequency content. Sound is produced at 78.9 (63.6-82.2) SPL rms re 2.10(-5) Pa with a peak at 99.2 (85.7-104.6) SPL re 2.10(-5) Pa estimated at a distance of one metre. This energy output is significant considering the small size of the insect. When scaled to body length and compared to 227 other acoustic species, the acoustic energy produced by M. scholtzi appears as an extreme value, outperforming marine and terrestrial mammal vocalisations. Such an extreme display may be interpreted as an exaggerated secondary sexual trait resulting from a runaway sexual selection without predation pressure.

  10. So Small, So Loud: Extremely High Sound Pressure Level from a Pygmy Aquatic Insect (Corixidae, Micronectinae)

    PubMed Central

    Sueur, Jérôme; Mackie, David; Windmill, James F. C.

    2011-01-01

    To communicate at long range, animals have to produce intense but intelligible signals. This task might be difficult to achieve due to mechanical constraints, in particular relating to body size. Whilst the acoustic behaviour of large marine and terrestrial animals has been thoroughly studied, very little is known about the sound produced by small arthropods living in freshwater habitats. Here we analyse for the first time the calling song produced by the male of a small insect, the water boatman Micronecta scholtzi. The song is made of three distinct parts differing in their temporal and amplitude parameters, but not in their frequency content. Sound is produced at 78.9 (63.6–82.2) SPL rms re 2.10−5 Pa with a peak at 99.2 (85.7–104.6) SPL re 2.10−5 Pa estimated at a distance of one metre. This energy output is significant considering the small size of the insect. When scaled to body length and compared to 227 other acoustic species, the acoustic energy produced by M. scholtzi appears as an extreme value, outperforming marine and terrestrial mammal vocalisations. Such an extreme display may be interpreted as an exaggerated secondary sexual trait resulting from a runaway sexual selection without predation pressure. PMID:21698252

  11. Sound velocity of liquid Fe-Ni-S at high pressure

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Saori I.; Nakajima, Yoichi; Hirose, Kei; Komabayashi, Tetsuya; Ozawa, Haruka; Tateno, Shigehiko; Kuwayama, Yasuhiro; Tsutsui, Satoshi; Baron, Alfred Q. R.

    2017-05-01

    The sound velocity of liquid Fe47Ni28S25 and Fe63Ni12S25 was measured up to 52 GPa/2480 K in externally resistance-heated and laser-heated diamond-anvil cells using high-resolution inelastic X-ray scattering. From these experimental data, we obtained the elastic parameters of liquid Fe47Ni28S25, KS0 = 96.1 ± 2.7 GPa and KS0' = 4.00 ± 0.13, where KS0 and KS0' are the adiabatic bulk modulus and its pressure derivative at 1 bar, when the density is fixed at ρ0 = 5.62 ± 0.09 g/cm3 for 1 bar and 2000 K. With these parameters, the sound velocity and density of liquid Fe47Ni28S25 were calculated to be 8.41 ± 0.17 km/s and 8.93 ± 0.19 to 9.10 ± 0.18 g/cm3, respectively, at the core mantle boundary conditions of 135 GPa and 3600-4300 K. These values are 9.4% higher and 17-18% lower than those of pure Fe, respectively. Extrapolation of measurements and comparison with seismological models suggest the presence of 5.8-7.5 wt % sulfur in the Earth's outer core if it is the only light element.
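
    The quoted 1-bar elastic parameters can be propagated to core-mantle boundary pressure with a simple Murnaghan-type extrapolation and v = sqrt(KS/ρ). This is only a consistency check under an assumed EOS form, with finite-temperature corrections ignored; it is not the authors' full treatment.

    ```python
    # Hedged check of the reported numbers using a Murnaghan-type extrapolation
    # of the abstract's 1-bar parameters (KS0 = 96.1 GPa, KS0' = 4.00,
    # rho0 = 5.62 g/cm3). Only illustrates v = sqrt(KS/rho); the authors' EOS
    # treatment and temperature corrections may differ.

    def murnaghan_density(p_gpa, rho0=5.62, k0=96.1, k0p=4.00):
        """Density (g/cm3) from a Murnaghan equation of state."""
        return rho0 * (1.0 + k0p * p_gpa / k0) ** (1.0 / k0p)

    def bulk_sound_velocity(p_gpa, rho0=5.62, k0=96.1, k0p=4.00):
        """Bulk sound velocity (km/s) assuming KS(P) = KS0 + KS0' * P."""
        ks = (k0 + k0p * p_gpa) * 1e9                        # Pa
        rho = murnaghan_density(p_gpa, rho0, k0, k0p) * 1e3  # kg/m3
        return (ks / rho) ** 0.5 / 1e3

    # At the core-mantle boundary pressure of 135 GPa this gives ~9.0 g/cm3 and
    # ~8.4 km/s, consistent with the values quoted in the abstract.
    print(round(murnaghan_density(135.0), 2), round(bulk_sound_velocity(135.0), 2))
    ```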

  12. Experimental constraints on the sound velocities of cementite Fe3C to core pressures

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Lai, Xiaojing; Li, Jie; Liu, Jiachao; Zhao, Jiyong; Bi, Wenli; Ercan Alp, E.; Hu, Michael Y.; Xiao, Yuming

    2018-07-01

    Sound velocities of cementite Fe3C have been measured up to 1.5 Mbar and at 300 K in a diamond anvil cell using the nuclear resonant inelastic X-ray scattering (NRIXS) technique. From the partial phonon density of states (pDOS) and equation of state (EOS) of Fe3C, we derived its elastic parameters including shear modulus, compressional (VP) and shear-wave (VS) velocities to core pressures. A pressure-induced spin-pairing transition in the powdered Fe3C sample was found to occur gradually between 10 and 50 GPa by the X-ray Emission Spectroscopy (XES) measurements. Following the completion of the spin-pairing transition, the VP and VS of low-spin Fe3C increased with pressure at a markedly lower rate than its high-spin counterpart. Our results suggest that the incorporation of carbon in solid iron to form iron carbide phases, Fe3C and Fe7C3, could effectively lower the VS but respectively raise the Poisson's ratio by 0.05 and 0.07 to approach the seismically observed values for the Earth's inner core. The comparison with the preliminary reference Earth model (PREM) implies that an inner core composition containing iron and its carbon-rich alloys can satisfactorily explain the observed seismic properties of the inner core.

  13. Sound Transmission through Cylindrical Shell Structures Excited by Boundary Layer Pressure Fluctuations

    NASA Technical Reports Server (NTRS)

    Tang, Yvette Y.; Silcox, Richard J.; Robinson, Jay H.

    1996-01-01

    This paper examines sound transmission into two concentric cylindrical sandwich shells subject to turbulent flow on the exterior surface of the outer shell. The interior of the shells is filled with fluid medium and there is an airgap between the shells in the annular space. The description of the pressure field is based on the cross-spectral density formulation of Corcos, Maestrello, and Efimtsov models of the turbulent boundary layer. The classical thin shell theory and the first-order shear deformation theory are applied for the inner and outer shells, respectively. Modal expansion and the Galerkin approach are used to obtain closed-form solutions for the shell displacements and the radiation and transmission pressures in the cavities including both the annular space and the interior. The average spectral density of the structural responses and the transmitted interior pressures are expressed explicitly in terms of the summation of the cross-spectral density of generalized force induced by the boundary layer turbulence. The effects of acoustic and hydrodynamic coincidences on the spectral density are observed. Numerical examples are presented to illustrate the method for both subsonic and supersonic flows.

  14. Effect of water vapor on sound absorption in nitrogen at low frequency/pressure ratios

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J.; Griffin, W. A.

    1981-01-01

    Sound absorption measurements were made in N2-H2O binary mixtures at 297 K over the frequency/pressure range f/P of 0.1-2500 Hz/atm to investigate the vibrational relaxation peak of N2 and its location on f/P axis as a function of humidity. At low humidities the best fit to a linear relationship between the f/P(max) and humidity yields an intercept of 0.013 Hz/atm and a slope of 20,000 Hz/atm-mole fraction. The reaction rate constants derived from this model are lower than those obtained from the extrapolation of previous high-temperature data.
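
    The low-humidity linear fit quoted above is straightforward to apply; for example, 1% water vapour by mole fraction moves the N2 relaxation peak to roughly 200 Hz/atm. A one-line illustration:

    ```python
    # Low-humidity fit quoted in the abstract for the N2 vibrational relaxation
    # peak: f/P_max = 0.013 + 20000 * h (Hz/atm), h = water-vapour mole fraction.
    def n2_relaxation_peak(h_water):
        """Frequency/pressure ratio (Hz/atm) of the N2 relaxation peak."""
        return 0.013 + 20000.0 * h_water

    # 1% water vapour by mole fraction shifts the peak to ~200 Hz/atm.
    print(n2_relaxation_peak(0.01))
    ```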

  15. Pressure sound level measurements at an educational environment in Goiânia, Goiás, Brazil

    NASA Astrophysics Data System (ADS)

    Costa, J. J. L.; do Nascimento, E. O.; de Oliveira, L. N.; Caldas, L. V. E.

    2018-03-01

    In this work, 25 points located on the ground floor of the Federal Institute of Education, Science and Technology of Goiás (IFG), Campus Goiânia, were analyzed during the morning periods of two Saturdays. The sound pressure levels were measured in internal and external environments during routine activities as part of environmental monitoring at this institution. The initial hypothesis was that a nearby amusement park (Mutirama Park) was responsible for noise pollution at the institute, but the results showed sound pressure levels within the campus environment in accordance with the municipal legislation of Goiânia at all points.

  16. Sound pressure levels generated at risk volume steps of portable listening devices: types of smartphone and genres of music.

    PubMed

    Kim, Gibbeum; Han, Woojae

    2018-05-01

    The present study estimated the sound pressure levels of various music genres at the volume steps that contemporary smartphones deliver, because these levels put the listener at potential risk for hearing loss. Using six different smartphones (Galaxy S6, Galaxy Note 3, iPhone 5S, iPhone 6, LG G2, and LG G3), the sound pressure levels for three genres of K-pop music (dance-pop, hip-hop, and pop-ballad) and a Billboard pop chart of assorted genres were measured through an earbud, using a sound level meter and an artificial mastoid, at the first volume step flagged with a risk warning by each smartphone and at the consecutive higher volume steps. Among the six smartphones, the first risk volume step of the Galaxy S6 had the significantly lowest output level (84.1 dBA) and that of the LG G2 the highest (92.4 dBA). As the volume step increased, so did the sound pressure levels. The iPhone 6 was loudest (113.1 dBA) at the maximum volume step. Of the music genres, dance-pop showed the highest output level (91.1 dBA) for all smartphones. Within the frequency range of 20-20,000 Hz, the sound pressure level peaked at 2000 Hz for all the smartphones. The results showed that the sound pressure levels of either the first risk volume step or the maximum volume step were not the same across smartphone models and genres of music, which means that the risk volume warning and its output levels should be unified across devices for their users. In addition, the risk volume steps proposed by the latest smartphone models are high enough to cause noise-induced hearing loss if their users habitually listen to music at those levels.

  17. Respiratory modulation of oscillometric cuff pressure pulses and Korotkoff sounds during clinical blood pressure measurement in healthy adults.

    PubMed

    Chen, Diliang; Chen, Fei; Murray, Alan; Zheng, Dingchang

    2016-05-10

    Accurate blood pressure (BP) measurement depends on the reliability of oscillometric cuff pressure pulses (OscP) and Korotkoff sounds (KorS) for automated oscillometric and manual techniques. It has been widely accepted that respiration is one of the main factors affecting BP measurement. However, little is known about how respiration affects the signals from which BP measurement is obtained. The aim was to quantify the modulation effect of respiration on oscillometric pulses and KorS during clinical BP measurement. Systolic and diastolic BPs were measured manually from 40 healthy subjects (from 23 to 65 years old) under normal and regular deep breathing. The following signals were digitally recorded during linear cuff deflation: chest motion from a magnetometer to obtain reference respiration, cuff pressure from an electronic pressure sensor to derive OscP, and KorS from a digital stethoscope. The effects of respiration on both OscP and KorS were determined from changes in their amplitude associated with respiration between systole and diastole. These changes were normalized to the mean signal amplitude of OscP and KorS to derive the respiratory modulation depth. Reference respiration frequency, and the frequencies derived from the amplitude modulation of OscP and KorS were also calculated and compared. Respiratory modulation depth was 14 and 40 % for OscP and KorS respectively under normal breathing condition, with significant increases (both p < 0.05) to 16 and 49 % with deeper breathing. There was no statistically significant difference between the reference respiration frequency and those derived from the oscillometric and Korotkoff signals (both p > 0.05) during deep breathing, and for the oscillometric signal during normal breathing (p > 0.05). Our study confirmed and quantified the respiratory modulation effect on the oscillometric pulses and KorS during clinical BP measurement, with increased modulation depth under regular deeper breathing.
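
    A minimal sketch of the modulation-depth metric described above: the respiration-related swing in per-beat amplitudes of the oscillometric pulses or Korotkoff sounds, normalised to the mean beat amplitude and expressed in percent. The envelope and averaging choices here are assumptions; the paper's exact computation may differ.

    ```python
    # Hedged sketch: respiratory modulation depth as the peak-to-trough swing in
    # per-beat amplitudes normalised to the mean amplitude, in percent.
    import numpy as np

    def modulation_depth(beat_amplitudes):
        """beat_amplitudes: per-beat amplitude of OscP or Korotkoff sounds."""
        a = np.asarray(beat_amplitudes, dtype=float)
        return 100.0 * (a.max() - a.min()) / a.mean()
    ```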

  18. Peak Sound Pressure Levels and Associated Auditory Risk from an H[subscript 2]-Air "Egg-Splosion"

    ERIC Educational Resources Information Center

    Dolhun, John J.

    2016-01-01

    The noise level from exploding chemical demonstrations and the effect they could have on audiences, especially young children, needs attention. Auditory risk from H[subscript 2]- O2 balloon explosions have been studied, but no studies have been done on H[subscript 2]-air "eggsplosions." The peak sound pressure level (SPL) was measured…

  19. The Influence of Fundamental Frequency and Sound Pressure Level Range on Breathing Patterns in Female Classical Singing

    ERIC Educational Resources Information Center

    Collyer, Sally; Thorpe, C. William; Callaghan, Jean; Davis, Pamela J.

    2008-01-01

    Purpose: This study investigated the influence of fundamental frequency (F0) and sound pressure level (SPL) range on respiratory behavior in classical singing. Method: Five trained female singers performed an 8-s messa di voce (a crescendo and decrescendo on one F0) across their musical F0 range. Lung volume (LV) change was estimated, and…

  20. Application of the Extreme Value Distribution to Estimate the Uncertainty of Peak Sound Pressure Levels at the Workplace.

    PubMed

    Lenzuni, Paolo

    2015-07-01

    The purpose of this article is to develop a method for the statistical inference of the maximum peak sound pressure level and of the associated uncertainty. Both quantities are requested by the EU directive 2003/10/EC for a complete and solid assessment of the noise exposure at the workplace. Based on the characteristics of the sound pressure waveform, it is hypothesized that the distribution of the measured peak sound pressure levels follows the extreme value distribution. The maximum peak level is estimated as the largest member of a finite population following this probability distribution. The associated uncertainty is also discussed, taking into account not only the contribution due to the incomplete sampling but also the contribution due to the finite precision of the instrumentation. The largest of the set of measured peak levels underestimates the maximum peak sound pressure level. The underestimate can be as large as 4 dB if the number of measurements is limited to 3-4, which is common practice in occupational noise assessment. The extended uncertainty is also quite large (~2.5 dB), with a weak dependence on the sampling details. Following the procedure outlined in this article, a reliable comparison between the peak sound pressure levels measured in a workplace and the EU directive action limits is possible. Non-compliance can occur even when the largest of the set of measured peak levels is several dB below such limits. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
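
    One way to realise the idea, sketched below under the assumption of a Gumbel (extreme value type I) model: fit the distribution to the n measured peak levels, then estimate the expected largest peak over the N impulsive events of the full exposure, using the fact that the maximum of N i.i.d. Gumbel variates is again Gumbel, shifted by the scale parameter times ln N. This is a generic illustration, not necessarily the article's exact inference procedure.

    ```python
    # Hedged sketch: Gumbel fit to a small sample of measured peak levels and
    # the expected maximum over N events; not the article's exact procedure.
    import numpy as np
    from scipy.stats import gumbel_r

    EULER_GAMMA = 0.5772156649

    def expected_max_peak(measured_peaks_db, n_events):
        """Expected largest peak level (dB) over n_events impulsive events."""
        mu, beta = gumbel_r.fit(measured_peaks_db)   # location, scale (dB)
        return mu + beta * (np.log(n_events) + EULER_GAMMA)

    # Example: four sampled peaks and ~200 impulses over the shift (illustrative).
    print(round(expected_max_peak([134.0, 135.5, 133.2, 136.1], 200), 1))
    ```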

  1. Multichannel loudness compensation method based on segmented sound pressure level for digital hearing aids

    NASA Astrophysics Data System (ADS)

    Liang, Ruiyu; Xi, Ji; Bao, Yongqiang

    2017-07-01

    To improve on gain compensation based on a three-segment sound pressure level (SPL) scale in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL scale was proposed. First, a uniform cosine modulated filter bank was designed. Adjacent channels with low or gradual audiogram slopes were then adaptively merged to obtain a non-uniform cosine modulated filter bank matched to the audiogram of the hearing-impaired listener. Second, the input speech was decomposed into sub-band signals and the SPL of every sub-band signal was computed. The audible SPL range from 0 to 120 dB SPL was divided equally into eight segments, and for each segment a different prescription formula was designed to compute a more detailed compensation gain from the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank have little distortion, and that the hearing aid speech perception index (HASPI) and hearing aid speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed that the proposed algorithm can effectively improve speech recognition for six hearing-impaired listeners.
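
    A minimal sketch of the per-channel gain step implied above: measure the sub-band SPL, locate it in one of eight 15-dB-wide segments spanning 0 to 120 dB SPL, and apply a gain read from a per-channel table derived from the audiogram. The gain table here is a placeholder; the paper's prescription formula is not reproduced.

    ```python
    # Hedged sketch of segmented-SPL gain compensation for one channel; the
    # per-segment gains would come from the prescription formula and audiogram.
    import numpy as np

    P_REF = 20e-6  # Pa

    def subband_spl(x):
        """SPL (dB re 20 µPa) of one sub-band frame, x in pascals."""
        return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) / P_REF + 1e-12)

    def segment_index(spl):
        """Which of the eight 15-dB segments of 0-120 dB SPL the frame falls in."""
        return int(np.clip(spl // 15, 0, 7))

    def apply_gain(x, gain_table_db):
        """gain_table_db: 8 gains (dB) for this channel, one per SPL segment."""
        g_db = gain_table_db[segment_index(subband_spl(x))]
        return x * 10.0 ** (g_db / 20.0)
    ```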

  2. Study on osteogenesis promoted by low sound pressure level infrasound in vivo and some underlying mechanisms.

    PubMed

    Long, Hua; Zheng, Liheng; Gomes, Fernando Cardoso; Zhang, Jinhui; Mou, Xiang; Yuan, Hua

    2013-09-01

    To clarify the effects of low sound pressure level (LSPL) infrasound on local bone turnover and explore its underlying mechanisms, rats with femoral defects were stabilized with a single-sided external fixator. After exposure to LSPL infrasound for 30 min twice daily for 6 weeks, the pertinent features of bone healing were assessed by radiography, peripheral quantitative computerized tomography (pQCT), histology, and immunofluorescence assay. The infrasound group showed a more continuous and smoother course of fracture healing and modeling in radiographs and histomorphology, as well as significantly higher average bone mineral content (BMC) and bone mineral density (BMD). Immunofluorescence showed increased expression of calcitonin gene-related peptide (CGRP) and decreased neuropeptide Y (NPY) innervation in the local microenvironment. The results suggested an osteogenesis-promoting effect of LSPL infrasound in vivo. The neuro-osteogenic network in the local microenvironment was probably one target mediating infrasonic osteogenesis, which might provide a new strategy to accelerate bone healing and remodeling. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Composition of the Earth's inner core from high-pressure sound velocity measurements in Fe-Ni-Si alloys

    NASA Astrophysics Data System (ADS)

    Antonangeli, Daniele; Siebert, Julien; Badro, James; Farber, Daniel L.; Fiquet, Guillaume; Morard, Guillaume; Ryerson, Frederick J.

    2010-06-01

    We performed room-temperature sound velocity and density measurements on a polycrystalline alloy, Fe0.89Ni0.04Si0.07, in the hexagonal close-packed (hcp) phase up to 108 GPa. Over the investigated pressure range the aggregate compressional sound velocity is ∼ 9% higher than in pure iron at the same density. The measured aggregate compressional (VP) and shear (VS) sound velocities, extrapolated to core densities and corrected for anharmonic temperature effects, are compared with seismic profiles. Our results provide constraints on the silicon abundance in the core, suggesting a model that simultaneously matches the primary seismic observables, density, P-wave and S-wave velocities, for an inner core containing 4 to 5 wt.% of Ni and 1 to 2 wt.% of Si.

  4. The effects of experimentally induced conductive hearing loss on spectral and temporal aspects of sound transmission through the ear

    PubMed Central

    Lupo, J. Eric; Koka, Kanthaiah; Thornton, Jennifer L.; Tollin, Daniel J.

    2010-01-01

    Conductive hearing loss (CHL) is known to produce hearing deficits, including deficits in sound localization ability. The differences in sound intensities and timing experienced between the two tympanic membranes are important cues to sound localization (ILD and ITD, respectively). Although much is known about the effect of CHL on hearing levels, little investigation has been conducted into the actual impact of CHL on sound location cues. This study investigated effects of CHL induced by earplugs on cochlear microphonic (CM) amplitude and timing and their corresponding effect on the ILD and ITD location cues. Acoustic and CM measurements were made in 5 chinchillas before and after earplug insertion, and again after earplug removal using pure tones (500 Hz to 24 kHz). ILDs in the unoccluded condition demonstrated position and frequency dependence where peak far-lateral ILDs approached 30 dB for high frequencies. Unoccluded ear ITD cues demonstrated positional and frequency dependence with increased ITD cue for both decreasing frequency (± 420 µs at 500 Hz, ± 310 µs for 1–4 kHz ) and increasingly lateral sound source locations. Occlusion of the ear canal with foam plugs resulted in a mild, frequency-dependent conductive hearing loss of 10–38 dB (mean 31 ± 3.9 dB) leading to a concomitant frequency dependent increase in ILDs at all source locations. The effective ITDs increased in a frequency dependent manner with ear occlusion as a direct result of the acoustic properties of the plugging material, the latter confirmed via acoustical measurements using a model ear canal with varying volumes of acoustic foam. Upon ear plugging with acoustic foam, a mild CHL is induced. Furthermore, the CHL induced by acoustic foam results in substantial changes in the magnitudes of both the ITD and ILD cues to sound location. PMID:21073935

  5. Joint reconstruction of the initial pressure and speed of sound distributions from combined photoacoustic and ultrasound tomography measurements

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Anastasio, Mark A.

    2017-12-01

    The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.

  6. The clarinet: how blowing pressure, lip force, lip position and reed "hardness" affect pitch, sound level, and spectrum.

    PubMed

    Almeida, Andre; George, David; Smith, John; Wolfe, Joe

    2013-09-01

    Using an automated clarinet playing system, the frequency f, sound level L, and spectral characteristics are measured as functions of blowing pressure P and the force F applied by the mechanical lip at different places on the reed. The playing regime on the (P,F) plane lies below an extinction line F(P) with a negative slope of a few square centimeters and above a pressure threshold with a more negative slope. Lower values of F and P can produce squeaks. Over much of the playing regime, lines of equal frequency have negative slope. This is qualitatively consistent with passive reed behavior: Increasing F or P gradually closes the reed, reducing its equivalent acoustic compliance, which increases the frequency of the peaks of the parallel impedance of bore and reed. High P and low F produce the highest sound levels and stronger higher harmonics. At low P, sound level can be increased at constant frequency by increasing P while simultaneously decreasing F. At high P, where lines of equal f and of equal L are nearly parallel, this compensation is less effective. Applying F further from the mouthpiece tip moves the playing regime to higher F and P, as does a stiffer reed.

  7. Longitudinal sound velocities, elastic anisotropy, and phase transition of high-pressure cubic H2O ice to 82 GPa

    NASA Astrophysics Data System (ADS)

    Kuriakose, Maju; Raetz, Samuel; Hu, Qing Miao; Nikitin, Sergey M.; Chigarev, Nikolay; Tournat, Vincent; Bulou, Alain; Lomonosov, Alexey; Djemia, Philippe; Gusev, Vitalyi E.; Zerr, Andreas

    2017-10-01

    Water ice is a molecular solid whose behavior under compression reveals the interplay of covalent bonding within molecules and the forces acting between them. This interplay determines the high-pressure phase transitions and the elastic and plastic behavior of H2O ice, the properties needed for modeling the convection and internal structure of the giant planets and moons of the solar system as well as H2O-rich exoplanets. We investigated, experimentally and theoretically, the elastic properties and phase transitions of cubic H2O ice at room temperature and high pressures between 10 and 82 GPa. The time-domain Brillouin scattering (TDBS) technique was used to measure longitudinal sound velocities (VL) in polycrystalline ice samples compressed in a diamond anvil cell. The high spatial resolution of the TDBS technique revealed variations of VL caused by elastic anisotropy, allowing us to reliably determine the fastest and the slowest sound velocity in a single crystal of cubic H2O ice and thus to evaluate existing equations of state. Pressure dependencies of the single-crystal elastic moduli Cij(P) of cubic H2O ice to 82 GPa have been obtained, which indicate its hardness and brittleness. These results were compared with ab initio calculations. It is suggested that the transition from molecular ice VII to ionic ice X occurs at much higher pressures than proposed earlier, probably above 80 GPa.

  8. A displacement-pressure finite element formulation for analyzing the sound transmission in ducted shear flows with finite poroelastic lining.

    PubMed

    Nennig, Benoit; Tahar, Mabrouk Ben; Perrey-Debain, Emmanuel

    2011-07-01

    In the present work, the propagation of sound in a lined duct containing a sheared mean flow is studied. The walls of the duct are acoustically treated with absorbent poroelastic foams. The propagation of elasto-acoustic waves in the liner is described by Biot's model. In the fluid domain, the propagation of sound in a sheared mean flow is governed by Galbrun's equation. The problem is solved using a mixed displacement-pressure finite element formulation in both domains. A 3D implementation of the model has been performed and is illustrated on axisymmetric examples. Convergence and accuracy of the numerical model are shown for the particular case of modal propagation in an infinite duct containing a uniform flow. Practical examples concerning sound attenuation through dissipative silencers are discussed. In particular, the effects of refraction in the shear layer as well as of the mounting conditions of the foam on the transmission loss are shown. The presence of a perforate screen at the air-porous interface is also considered and included in the model. © 2011 Acoustical Society of America

  9. Making Sound Connections

    ERIC Educational Resources Information Center

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  10. Magnetic transition and sound velocities of Fe3S at high pressure: implications for Earth and planetary cores

    NASA Astrophysics Data System (ADS)

    Lin, Jung-Fu; Fei, Yingwei; Sturhahn, Wolfgang; Zhao, Jiyong; Mao, Ho-kwang; Hemley, Russell J.

    2004-09-01

    Magnetic, elastic, thermodynamic, and vibrational properties of Fe3S, the most iron-rich sulfide known to date, have been studied with synchrotron Mössbauer spectroscopy (SMS) and nuclear resonant inelastic X-ray scattering (NRIXS) up to 57 GPa at room temperature. The magnetic hyperfine fields derived from the synchrotron Mössbauer time spectra show that the low-pressure magnetic phase displays two magnetic hyperfine field sites and that a magnetic collapse occurs at 21 GPa. The magnetic to non-magnetic transition significantly affects the elastic, thermodynamic, and vibrational properties of Fe3S. The magnetic collapse of Fe3S may also affect the phase relations in the iron-sulfur system, changing the solubility of sulfur in iron at higher pressures. Determination of the physical properties of the non-magnetic Fe3S phase is important for interpreting the amount and properties of sulfur present in planetary cores. Sound velocities of Fe3S obtained from the measured partial phonon density of states (PDOS) of 57Fe incorporated in the alloy show that Fe3S has higher compressional and shear wave velocities than hcp-Fe and the hcp-Fe0.92Ni0.08 alloy at high pressures, making sulfur a potential light element in the Earth's core based on geophysical arguments. The VP and VS of non-magnetic Fe3S follow a Birch's law trend, whereas the slopes decrease in the magnetic phase, indicating that the decrease of the magnetic moment significantly affects the sound velocities. If the Martian core is solid and contains 14.2 wt.% sulfur, it is likely that the non-magnetic Fe3S phase is a dominant component and that our measured sound velocities of Fe3S can be used to construct the corresponding velocity profile of the Martian core. It is also conceivable that Fe3P and Fe3C undergo similar magnetic phase transitions at high pressures.

  11. Sound velocity of MgSiO3 perovskite to Mbar pressure

    NASA Astrophysics Data System (ADS)

    Murakami, Motohiko; Sinogeikin, Stanislav V.; Hellwig, Holger; Bass, Jay D.; Li, Jie

    2007-04-01

    Brillouin scattering measurements of the aggregate shear wave velocities in MgSiO3 perovskite were conducted at high-pressure conditions relevant to the Earth's lowermost mantle, approaching 1 Mbar. Infrared laser annealing of samples in a diamond anvil cell allowed us to obtain high-quality Brillouin spectra and to drastically extend the upper pressure limit for Brillouin measurements. We found that the pressure derivative of the shear modulus (dG/dP = G') of MgSiO3 perovskite is 1.56 ± 0.04, which is distinctly lower than that of previous lower-pressure experiments below 9 GPa. Extrapolations of the high-pressure shear velocities of perovskite to ambient pressure are in excellent agreement with earlier room-pressure Brillouin measurements. The shear modulus, shear velocity, and longitudinal velocity at ambient pressure were determined to be G0 = 172.9(15) GPa, VS = 6.49(3) km/s, and VP = 10.85(3) km/s. The mineralogical model that provides the best fit to global seismological 1-D velocity profiles is one that contains 85-90 vol% perovskite in the lower mantle. The results of this study are the first to demonstrate that the elastic wave velocities for a near-adiabatic lower mantle with a bulk composition dominated by magnesium silicate perovskite are consistent with the average lower-mantle seismic velocity structure. The large pressure range over which acoustic measurements of MgSiO3 perovskite were performed in this study has thus allowed us to put tighter constraints on compositional models of the Earth's lower mantle.

  12. Narrow sound pressure level tuning in the auditory cortex of the bats Molossus molossus and Macrotus waterhousii.

    PubMed

    Macías, Silvio; Hechavarría, Julio C; Cobo, Ariadna; Mora, Emanuel C

    2014-03-01

    In the auditory system, tuning to sound level appears in the form of non-monotonic response-level functions that depict the response of a neuron to changing sound levels. Neurons with non-monotonic response-level functions respond best to a particular sound pressure level (defined as "best level" or level evoking the maximum response). We performed a comparative study on the location and basic functional organization of the auditory cortex in the gleaning bat, Macrotus waterhousii, and the aerial-hawking bat, Molossus molossus. Here, we describe the response-level function of cortical units in these two species. In the auditory cortices of M. waterhousii and M. molossus, the characteristic frequency of the units increased from caudal to rostral. In M. waterhousii, there was an even distribution of characteristic frequencies while in M. molossus there was an overrepresentation of frequencies present within echolocation pulses. In both species, most of the units showed best levels in a narrow range, without an evident topography in the amplitopic organization, as described in other species. During flight, bats decrease the intensity of their emitted pulses when they approach a prey item or an obstacle resulting in maintenance of perceived echo intensity. Narrow level tuning likely contributes to the extraction of echo amplitudes facilitating echo-intensity compensation. For aerial-hawking bats, like M. molossus, receiving echoes within the optimal sensitivity range can help the bats to sustain consistent analysis of successive echoes without distortions of perception caused by changes in amplitude. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Sound quality assessment of Diesel combustion noise using in-cylinder pressure components

    NASA Astrophysics Data System (ADS)

    Payri, F.; Broatch, A.; Margot, X.; Monelletta, L.

    2009-01-01

    The combustion process in direct injection (DI) Diesel engines is an important source of noise, and it is thus the main reason why end-users could be reluctant to drive vehicles powered with this type of engine. This means that the great potential of Diesel engines for environmental preservation—due to their lower consumption and the subsequent reduction of CO2 emissions—may be lost. Moreover, the advanced combustion concepts—e.g. the HCCI (homogeneous charge compression ignition)—developed to comply with forthcoming emissions legislation, while maintaining the efficiency of current engines, are expected to be noisier because they are characterized by a higher amount of premixed combustion. For this reason, many efforts have been dedicated by car manufacturers in recent years to reduce the overall level and improve the sound quality of engine noise. Evaluation procedures are required, both for noise levels and sound quality, that may be integrated into the global engine development process in a timely and cost-effective manner. In previously published work, the authors proposed a novel method for the assessment of engine noise level. A similar procedure is applied in this paper to demonstrate the suitability of combustion indicators for the evaluation of engine noise quality. These indicators, which are representative of the peak velocity of fuel burning and the resonance in the combustion chamber, are well correlated with the combustion noise mark obtained from jury testing. Quite good accuracy in the prediction of the engine noise quality has been obtained with the definition of a two-component regression, which also permits the identification of the combustion process features related to the resulting noise quality, so that corrective actions may be proposed.

  14. Sound velocity of Fe-S liquids at high pressure: Implications for the Moon's molten outer core

    SciT

    Jing, Zhicheng; Wang, Yanbin; Kono, Yoshio

    2014-07-21

    Sound velocities of Fe and Fe–S liquids were determined by combining ultrasonic measurements and synchrotron X-ray techniques under high pressure–temperature conditions from 1 to 8 GPa and 1573 K to 1973 K. Four different liquid compositions were studied: Fe, Fe–10 wt% S, Fe–20 wt% S, and Fe–27 wt% S. Our data show that the velocity of Fe-rich liquids increases upon compression and decreases with increasing sulfur content, whereas temperature has a negligible effect on the velocity of Fe–S liquids. The sound velocity data were combined with ambient-pressure densities to fit the Murnaghan equation of state (EOS). Compared to the lunar seismic model, our velocity data constrain the sulfur content at 4 ± 3 wt%, indicating a significantly denser (6.5 ± 0.5 g/cm3) and hotter (1870 +100/−70 K) outer core than previously estimated. A new lunar structure model incorporating available geophysical observations points to a smaller core radius. Our model suggests a top–down solidification scenario for the evolution of the lunar core. Such an “iron snow” process may have been an important mechanism for the growth of the inner core.
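
    As a hedged illustration of how velocity data and an ambient-pressure density can constrain a Murnaghan EOS for a liquid (this is not the authors' fitting code), one can use K = rho*Vp^2 together with K(P) = K0 + K0'*P and the corresponding Murnaghan density, then fit K0 and K0' to velocity-pressure data. The density and data points below are invented placeholders.

```python
# Sketch of a Murnaghan-EOS fit to liquid sound-velocity data.
# For a liquid, K = rho * Vp**2; Murnaghan gives
#   K(P)   = K0 + K0p * P
#   rho(P) = rho0 * (1 + K0p * P / K0) ** (1 / K0p)
# so Vp(P) = sqrt(K(P) / rho(P)).  All numbers below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

RHO0 = 5500.0                                    # kg/m^3, assumed ambient-pressure density

def vp_murnaghan(P, K0, K0p):
    """Compressional velocity (m/s) at pressure P (Pa) for a Murnaghan liquid."""
    K = K0 + K0p * P
    rho = RHO0 * (1.0 + K0p * P / K0) ** (1.0 / K0p)
    return np.sqrt(K / rho)

P_data = np.array([1, 2, 4, 6, 8]) * 1e9            # Pa (placeholder data)
vp_data = np.array([4100, 4250, 4500, 4700, 4870])  # m/s (placeholder data)

(K0_fit, K0p_fit), _ = curve_fit(vp_murnaghan, P_data, vp_data, p0=(90e9, 5.0))
print(f"K0 = {K0_fit / 1e9:.1f} GPa, K0' = {K0p_fit:.2f}")
```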

  15. Method of and apparatus for measuring temperature and pressure. [atmospheric sounding

    NASA Technical Reports Server (NTRS)

    Korb, C. L.; Kalshoven, J. E., Jr. (Inventor)

    1985-01-01

    Laser beams are transmitted through gas to a reflecting target, which may be a solid surface, particulate matter in the gas, or the gas molecules themselves. The return beams are measured to determine the amount of energy absorbed by the gas. For temperature measurements, the laser beam has a wavelength at which the gas exhibits a relatively temperature-sensitive and pressure-insensitive absorption characteristic; for pressure measurements, the laser beam has a wavelength at which the gas has a relatively pressure-sensitive and temperature-insensitive absorption characteristic. To reduce the effects of scattering on the absorption measurements, a reference laser beam with a weak absorption characteristic is transmitted collinearly with the data beam, which has a strong absorption characteristic. The two signals are processed as a ratio to eliminate backscattering. Embodiments of the transmitters and receivers described include a sequential laser pulse transmitter and receiver and a simultaneous laser pulse transmitter and receiver.

  16. High levels of sound pressure: acoustic reflex thresholds and auditory complaints of workers with noise exposure.

    PubMed

    Duarte, Alexandre Scalli Mathias; Ng, Ronny Tah Yen; de Carvalho, Guilherme Machado; Guimarães, Alexandre Caixeta; Pinheiro, Laiza Araujo Mohana; Costa, Everardo Andrade da; Gusmão, Reinaldo Jordão

    2015-01-01

    The clinical evaluation of subjects with occupational noise exposure has been difficult due to the discrepancy between auditory complaints and auditory test results. This study aimed to evaluate the contralateral acoustic reflex thresholds of workers exposed to high levels of noise and to compare these results with the subjects' auditory complaints. This retrospective clinical study evaluated 364 workers between 1998 and 2005; their contralateral acoustic reflexes were compared with auditory complaints, age, and noise exposure time by chi-squared, Fisher's, and Spearman's tests. The workers' ages ranged from 18 to 50 years (mean = 39.6), and noise exposure time from 1 to 38 years (mean = 17.3). We found that 15.1% (55) of the workers had bilateral hearing loss, 38.5% (140) had bilateral tinnitus, 52.8% (192) had abnormal sensitivity to loud sounds, and 47.2% (172) had speech recognition impairment. The variables hearing loss, speech recognition impairment, tinnitus, age group, and noise exposure time showed no relationship with acoustic reflex thresholds; however, all complaints demonstrated a statistically significant relationship with Metz recruitment at 3000 and 4000 Hz bilaterally. There was no significant relationship between auditory complaints and acoustic reflexes. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  17. Sound velocities of the 23 Å phase at high pressure and implications for seismic velocities in subducted slabs

    NASA Astrophysics Data System (ADS)

    Cai, N.; Chen, T.; Qi, X.; Inoue, T.; Li, B.

    2017-12-01

    Dense hydrous phases are believed to play an important role in transporting water back into the deep interior of the Earth. Recently, a new Al-bearing hydrous Mg-silicate, named the 23 Å phase (ideal composition Mg12Al2Si4O16(OH)14), was reported (Cai et al., 2015), which could be a very important hydrous phase in subducting slabs. Here, for the first time, we report measurements of the compressional and shear wave velocities of the 23 Å phase at applied pressures up to 14 GPa and room temperature, using a bulk sample with a grain size of less than 20 μm and a density of 2.947 g/cm3. The acoustic measurements were conducted in a 1000-ton uniaxial split-cylinder multi-anvil apparatus using ultrasonic interferometry techniques (Li et al., 1996). The pressures were determined in situ by using an alumina buffer rod as the pressure marker (Wang et al., 2015). A dual-mode piezoelectric transducer enabled us to measure P and S wave travel times simultaneously, which in turn allowed a precise determination of the sound velocities and the elastic bulk and shear moduli at high pressures. A fit to the acoustic data using finite strain analysis combined with a Hashin-Shtrikman (HS) bounds calculation yields Ks0 = 113.3 GPa, G0 = 42.8 GPa, K' = 3.8, and G' = 1.9 for the bulk and shear moduli and their pressure derivatives. The velocities (especially the S wave) of this 23 Å phase (ambient Vp = 7.53 km/s, Vs = 3.72 km/s) are lower than those of phase A, olivine, pyrope, etc., while the Vp/Vs ratio (from 2.02 to 1.94, decreasing with increasing pressure) is quite high. These results suggest that a hydrous assemblage containing the 23 Å phase should be distinguishable from a dry one at high pressure and temperature conditions relevant to Al-bearing subducted slabs.
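
    To connect the fitted moduli to the quoted velocities, the standard isotropic-aggregate relations are Vp = sqrt((K + 4G/3)/rho) and Vs = sqrt(G/rho). The short check below uses the ambient-pressure values from the abstract; small differences from the quoted Vp and Vs can arise from rounding and from the averaging scheme (Hashin-Shtrikman bounds) used in the original fit.

```python
# Quick check of the isotropic-aggregate velocity relations using the
# ambient-pressure moduli and density quoted in the abstract above.
import math

K = 113.3e9      # Pa, adiabatic bulk modulus Ks0
G = 42.8e9       # Pa, shear modulus G0
rho = 2947.0     # kg/m^3, sample density

vp = math.sqrt((K + 4.0 * G / 3.0) / rho)
vs = math.sqrt(G / rho)
# Values come out close to, but not exactly at, the quoted 7.53 and 3.72 km/s
# because of rounding and the bounds-based averaging used in the paper.
print(f"Vp ~ {vp / 1000:.2f} km/s, Vs ~ {vs / 1000:.2f} km/s, Vp/Vs ~ {vp / vs:.2f}")
```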

  18. Respiratory Muscle Strength, Sound Pressure Level, and Vocal Acoustic Parameters and Waist Circumference of Children With Different Nutritional Status.

    PubMed

    Pascotini, Fernanda dos Santos; Ribeiro, Vanessa Veis; Christmann, Mara Keli; Tomasi, Lidia Lis; Dellazzana, Amanda Alves; Haeffner, Leris Salete Bonfanti; Cielo, Carla Aparecida

    2016-01-01

    The aim was to relate respiratory muscle strength (RMS), sound pressure (SP) level, and vocal acoustic parameters to the abdominal circumference (AC) and nutritional status of children. This is a cross-sectional study. Eighty-two schoolchildren aged between 8 and 10 years, grouped by nutritional state (eutrophic, overweight, or obese) and AC percentile (≤25, 25-75, and ≥75), were included in the study. Maximal inspiratory pressure (IPmax) and maximal expiratory pressure (EPmax) were evaluated with a manometer, and SP and acoustic parameters were measured with the Multi-Dimensional Voice Program Advanced (KayPENTAX, Montvale, New Jersey). There were significant differences (P < 0.05) in the EPmax between children with AC between the 25th and 75th percentiles (72.4) and those at or below the 25th percentile (61.9), and in the SP between those at or above the 75th percentile (73.4) and those at or below the 25th percentile (66.6). The IPmax, EPmax, SP levels, and acoustic variables did not differ across the children's nutritional states. There were strong, positive correlations between the coefficient of amplitude perturbation (shimmer) and both the harmonics-to-noise ratio and the variation of the fundamental frequency (0.79 and 0.71, respectively). RMS and acoustic voice characteristics in children do not appear to be influenced by nutritional state, and respiratory pressure does not interfere with acoustic voice characteristics. However, localized fat, represented by the AC, alters the EPmax and the SP, each of which increases as the AC increases. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. Indoor seismology by probing the Earth's interior by using sound velocity measurements at high pressures and temperatures.

    PubMed

    Li, Baosheng; Liebermann, Robert C

    2007-05-29

    The adiabatic bulk (K(S)) and shear (G) moduli of mantle materials at high pressure and temperature can be obtained directly by measuring compressional and shear wave velocities in the laboratory with experimental techniques based on physical acoustics. We present the application of the current state-of-the-art experimental techniques by using ultrasonic interferometry in conjunction with synchrotron x radiation to study the elasticity of olivine and pyroxenes and their high-pressure phases. By using these updated thermoelasticity data for these phases, velocity and density profiles for a pyrolite model are constructed and compared with radial seismic models. We conclude that pyrolite provides an adequate explanation of the major seismic discontinuities at 410- and 660-km depths, the gradient in the transition zone, as well as the velocities in the lower mantle, if the uncertainties in the modeling and the variations in different seismic models are considered. The characteristics of the seismic scaling factors in response to thermal anomalies suggest that anticorrelations between bulk sound and shear wave velocities, as well as the large positive density anomalies observed in the lower mantle, cannot be explained fully without invoking chemical variations.

  20. Comparative analysis of performance in reading and writing of children exposed and not exposed to high sound pressure levels.

    PubMed

    Santos, Juliana Feitosa dos; Souza, Ana Paula Ramos de; Seligman, Lilian

    2013-01-01

    The aim was to analyze the possible relationships between high sound pressure levels in the classroom and performance in the use of lexical and phonological routes in reading and writing. This was a quantitative, exploratory study. The following measures were carried out: acoustic measurement with a dosimeter, visual inspection of the external auditory canal, pure-tone audiometry thresholds, speech recognition tests and acoustic immittance, and an instrument for the evaluation of reading and writing of isolated words. The non-parametric χ² test and Fisher's exact test were used for data analysis. The results of acoustic measurements in 4 schools in Santa Maria divided the sample of 87 children in the third and fourth years of primary school, aged 8 to 10 years, into 2 groups: the 1st group was exposed to sound levels higher than 80 dB(A) (Study group) and the 2nd group to levels lower than 80 dB(A) (Control group). A higher prevalence of correct answers was observed in the reading and writing of nonwords, in the reading of irregular words, and in the frequency effect. A predominance of correct answers in the writing of irregular words was observed in the Control group. In the Study group, a higher number of neologism-type errors in reading and writing was observed, especially in the writing of nonwords and in the extension effect, along with fewer lexicalization errors and verbal paragraphias in writing. In the assessment of reading and writing skills, children in the Study group, exposed to high noise levels, had poorer performance in the use of the lexical and phonological routes, both in reading and in writing.

  1. Continuous monitoring of blood pressure by analyzing the blood flow sound of arteriovenous fistula in hemodialysis patients.

    PubMed

    Kamijo, Yuka; Kanda, Eiichiro; Horiuchi, Hayato; Kounoue, Noriyuki; Ono, Keisuke; Maeda, Keizo; Yanai, Akane; Honda, Kazuya; Tsujimoto, Ryuji; Yanagi, Mai; Ishibashi, Yoshitaka; Yoshida, Masayuki

    2018-06-01

    Patients with end-stage renal disease undergoing hemodialysis (HD) have an elevated risk of cardiovascular disease-related morbidity and mortality. To help prevent such life-threatening events, a continuous blood pressure (BP) monitoring system could detect BP decline at an early stage and support appropriate intervention. Our research team has introduced an electronic stethoscope (Asahi Kasei Co, Ltd., Tokyo, Japan) that translates the sound intensity of the arteriovenous fistula (AVF) into BP data using Fourier transformation, allowing continuous BP to be predicted non-invasively. In this study, we investigated whether electronic stethoscope-guided estimated BP (e-BP) actually reflects systolic BP measured by sphygmomanometer (s-BP), and whether e-BP can predict a fall in BP during HD. Twenty-six patients who underwent HD treatment in our hospital were evaluated prospectively. We obtained sound intensity data from the electronic stethoscope attached to the return line of the HD circuit. The data were then translated into e-BP values and compared with s-BP. The correlation across a total of 315 data sets obtained from the two methods was examined, and the diagnostic accuracy for intra-dialytic hypotension (IDH) was evaluated. A close correlation was observed between e-BP and s-BP (r = 0.887, p < 0.0001). The sensitivity and positive predictive value of the predicted BP for the detection of IDH were 90% and 81.3%, respectively. Electronic stethoscope-guided BP measurement would be helpful for real-time diagnosis of BP falls in HD patients. Further investigations are needed.
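
    The two figures of merit reported above, the e-BP/s-BP correlation and the sensitivity and positive predictive value of an intra-dialytic hypotension flag, can be reproduced from paired readings as in the toy sketch below. The arrays and the IDH threshold are hypothetical and are not data from the study.

```python
# Toy sketch: Pearson correlation of paired BP readings plus sensitivity and
# positive predictive value of a threshold-based hypotension flag.
import numpy as np

e_bp = np.array([132, 118,  95, 140, 102,  88, 125])   # mmHg, stethoscope-estimated (hypothetical)
s_bp = np.array([130, 120,  92, 138, 100,  90, 128])   # mmHg, sphygmomanometer (hypothetical)

r = np.corrcoef(e_bp, s_bp)[0, 1]                      # Pearson correlation coefficient

IDH_THRESHOLD = 100                                    # mmHg, assumed working definition of IDH
pred = e_bp < IDH_THRESHOLD                            # predicted hypotension events
true = s_bp < IDH_THRESHOLD                            # reference hypotension events

tp = np.sum(pred & true)                               # true positives
fn = np.sum(~pred & true)                              # false negatives
fp = np.sum(pred & ~true)                              # false positives

sensitivity = tp / (tp + fn)
ppv = tp / (tp + fp)
print(f"r = {r:.3f}, sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}")
```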

  2. Evaluation of the Effects of Various Sound Pressure Levels on the Level of Serum Aldosterone Concentration in Rats

    PubMed Central

    Nassiri, Parvin; Zare, Sajad; Monazzam, Mohammad R.; Pourbakht, Akram; Azam, Kamal; Golmohammadi, Taghi

    2017-01-01

    Introduction: Noise exposure may have anatomical, nonauditory, and auditory influences. Among the nonauditory impacts, noise exposure can cause alterations in the autonomic nervous system, including increased pulse rate, elevated blood pressure, and abnormal secretion of hormones. The present study aimed at examining the effect of various sound pressure levels (SPLs) on the serum aldosterone level in rats. Materials and Methods: A total of 45 adult male rats, aged 3 to 4 months and weighing 200 ± 50 g, were randomly divided into 15 groups of three. Three groups were considered control groups and the remaining 12 were case groups. Rats in the case groups were exposed to white noise at SPLs of 85, 95, and 105 dBA. To measure serum aldosterone, 3 mL of blood was taken directly from the heart of each anesthetized animal with a syringe. The blood samples were placed in labeled test tubes containing the anticoagulant ethylenediaminetetraacetic acid (EDTA). In the laboratory, the aldosterone level was assessed with an enzyme-linked immunosorbent assay protocol. The collected data were analyzed with the Statistical Package for the Social Sciences (SPSS) version 18. Results: The results revealed no significant change in serum aldosterone after exposure to SPLs of 65, 85, and 95 dBA. However, serum aldosterone increased markedly after exposure to an SPL of 105 dBA (P < 0.001). Thus, the SPL had a significant impact on the serum aldosterone level (P < 0.001). In contrast, the exposure time and the potassium level of the drinking water had no measurable influence on serum aldosterone (P = 0.25 and 0.39). Conclusion: The findings of this study demonstrated that serum aldosterone can be used as a biomarker of noise exposure. PMID

  3. Effects of High Sound Exposure During Air-Conducted Vestibular Evoked Myogenic Potential Testing in Children and Young Adults.

    PubMed

    Rodriguez, Amanda I; Thomas, Megan L A; Fitzpatrick, Denis; Janky, Kristen L

    Vestibular evoked myogenic potential (VEMP) testing is increasingly utilized in pediatric vestibular evaluations because of its diagnostic capability to identify otolith dysfunction and its feasibility. However, there is evidence that the high-intensity stimulation level required to elicit a reliable VEMP response causes acoustic trauma in adults. Despite the utility of VEMP testing in children, it is unknown whether similar effects occur in children. It is hypothesized that children may receive greater sound exposure because of differences in ear-canal volume (ECV) compared with adults, and that stimulus parameters (e.g., signal duration and intensity) will alter the exposure levels delivered to a child's ear. The objectives of this study were to (1) measure peak-to-peak equivalent sound pressure levels (peSPL) in children with normal hearing (CNH) and young adults with normal hearing (ANH) using high-intensity VEMP stimuli, (2) determine the effect of ECV on peSPL and calculate a safe exposure level for VEMP, and (3) assess whether cochlear changes exist after VEMP exposure. This was a 2-phase approach. Fifteen CNH and 12 ANH participated in phase I. Equivalent ECV was measured. In 1 ear, peSPL was recorded for 5 seconds at 105 to 125 dB SPL, in 5-dB increments, for 500- and 750-Hz tone bursts. Recorded peSPL values (accounting for stimulus duration) were then used to calculate safe sound energy exposure values for VEMP testing using the 132-dB recommended energy allowance from the 2003 European Union Guidelines. Fifteen CNH and 10 ANH received cervical and ocular VEMP testing in 1 ear in phase II. Subjects completed tympanometry, pre- and post-testing audiometric thresholds, distortion product otoacoustic emissions, and a questionnaire addressing subjective otologic symptoms to study the effect of VEMP exposure on cochlear function. (1) In response to high-intensity stimulation levels (e.g., 125 dB SPL), CNH had significantly higher peSPL measurements and smaller ECVs compared
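
    One common equal-energy way of book-keeping such an allowance, offered only as an illustration and not necessarily the formula used in this study, is to treat the cumulative exposure of N tone bursts of duration d at a given peSPL as peSPL + 10*log10(N*d / 1 s) and require it to stay below the quoted 132-dB figure. The stimulus parameters below are assumptions.

```python
# Hedged sketch of an equal-energy exposure check for repeated tone bursts.
# The 132 dB allowance is the figure quoted in the abstract; the stimulus
# level and burst duration are illustrative assumptions.
import math

ALLOWANCE_DB = 132.0      # total sound energy allowance
pe_spl = 125.0            # dB peSPL of the VEMP stimulus (assumed)
burst_s = 0.008           # 8 ms tone burst (assumed)

# Largest number of bursts whose cumulative exposure stays within the allowance:
#   pe_spl + 10*log10(n * burst_s) <= ALLOWANCE_DB
n_max = 10 ** ((ALLOWANCE_DB - pe_spl) / 10.0) / burst_s
print(f"~{int(n_max)} bursts of {burst_s * 1000:.0f} ms at {pe_spl:.0f} dB stay under {ALLOWANCE_DB:.0f} dB")
```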

  4. Towards direct realisation of the SI unit of sound pressure in the audible hearing range based on optical free-field acoustic particle measurements

    SciT

    Koukoulas, Triantafillos, E-mail: triantafillos.koukoulas@npl.co.uk; Piper, Ben

    Since the introduction of the International System of Units (the SI system) in 1960, weights, measures, standardised approaches, procedures, and protocols have been introduced, adapted, and extensively used. A major international effort concentrates on the definition and traceability of the seven base SI units in terms of fundamental constants, and consequently of those units that are derived from the base units. In airborne acoustical metrology, and for the audible range of frequencies up to 20 kHz, the SI unit of sound pressure, the pascal, is realised indirectly and without any knowledge or measurement of the sound field. Though the principle of reciprocity was originally formulated by Lord Rayleigh nearly two centuries ago, the reciprocity calibration technique was devised in the 1940s and eventually became a calibration standard in the 1960s; however, it can only accommodate a limited number of acoustic sensors of specific types and dimensions. International standards determine the device sensitivity either through coupler or through free-field reciprocity but rely on the continuous availability of specific acoustical artefacts. Here, we show an optical method based on gated photon correlation spectroscopy that can measure sound pressures directly and absolutely in fully anechoic conditions, remotely, and without disturbing the propagating sound field. It neither relies on the availability or performance of any measurement artefact nor makes any assumptions about the device geometry and sound field characteristics. Most importantly, the required units of sound pressure and microphone sensitivity may now be experimentally realised, thus providing direct traceability to SI base units.

  5. On the efficacy of spatial sampling using manual scanning paths to determine the spatial average sound pressure level in rooms.

    PubMed

    Hopkins, Carl

    2011-05-01

    In architectural acoustics, noise control, and environmental noise, there are often steady-state signals for which it is necessary to measure the spatial-average sound pressure level inside rooms. This requires using fixed microphone positions, mechanical scanning devices, or manual scanning. In comparison with mechanical scanning devices, the human body allows manual scanning to trace out complex geometrical paths in three-dimensional space. To determine the efficacy of manual scanning paths in terms of an equivalent number of uncorrelated samples, an analytical approach is solved numerically. The benchmark used to assess these paths is a minimum of five uncorrelated fixed microphone positions at frequencies above 200 Hz. For paths involving an operator walking across the room, potential problems exist with walking noise and non-uniform scanning speeds. Hence, paths are considered based on a fixed standing position or rotation of the body about a fixed point. In empty rooms, it is shown that a circle, helix, or cylindrical-type path satisfies the benchmark requirement, with the latter two paths being highly efficient at generating a large number of uncorrelated samples. In furnished rooms where there is limited space for the operator to move, an efficient path comprises three semicircles with 45°-60° separations.
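
    A numerical sketch of the underlying idea is given below, under the textbook assumption that the spatial correlation of mean-square pressure in a pure-tone diffuse field falls off as sinc^2(kd). Discretising a scanning path densely and summing the pairwise correlations gives an equivalent number of uncorrelated samples; the circle radius, frequency, and discretisation are illustrative, not values from the paper.

```python
# Estimate the equivalent number of uncorrelated samples for a circular
# manual-scanning path in a diffuse field (illustrative parameters only).
import numpy as np

C = 343.0                  # speed of sound, m/s
f = 500.0                  # Hz
k = 2 * np.pi * f / C      # wavenumber

radius = 0.7               # m, assumed arm-length circle
n_pts = 400                # dense discretisation of the continuous path

theta = np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False)
pts = radius * np.column_stack([np.cos(theta), np.sin(theta), np.zeros(n_pts)])

# Pairwise distances and the assumed spatial correlation of mean-square pressure.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
rho = np.sinc(k * d / np.pi) ** 2          # np.sinc(x) = sin(pi*x)/(pi*x)

# Var(mean) = sigma^2 * sum(rho) / n^2  ->  equivalent samples = n^2 / sum(rho)
n_eq = n_pts ** 2 / rho.sum()
print(f"circle of radius {radius} m at {f:.0f} Hz ~ {n_eq:.1f} uncorrelated samples")
```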

  6. Offshore exposure experiments on cuttlefish indicate received sound pressure and particle motion levels associated with acoustic trauma

    PubMed Central

    Solé, Marta; Sigray, Peter; Lenoir, Marc; van der Schaar, Mike; Lalander, Emilia; André, Michel

    2017-01-01

    Recent findings on cephalopods under laboratory conditions showed that exposure to artificial noise had a direct effect on the statocysts, the sensory organs responsible for their equilibrium and movements in the water column. The question remained of how much the consequent near-field particle motion from the tank walls contributed to triggering the trauma. Offshore controlled noise exposure experiments (CEE) on common cuttlefish (Sepia officinalis) were conducted at three different depths and distances from the source, and particle motion and sound pressure measurements were performed at each location. Scanning electron microscopy (SEM) revealed injuries in the statocysts, whose severity was quantified and found to be proportional to the distance to the transducer. These findings are the first evidence of cephalopod sensitivity to anthropogenic noise sources in their natural habitat. From the measured received power spectrum of the sweep, it was possible to determine that the animals were exposed to levels ranging from 139 to 142 dB re 1 μPa2 and from 139 to 141 dB re 1 μPa2, in the 1/3-octave bands centred at 315 Hz and 400 Hz, respectively. These results could therefore be considered a coherent threshold estimation of the noise levels that can trigger acoustic trauma in cephalopods. PMID:28378762
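
    For readers unfamiliar with the dB re 1 μPa2 convention, the sketch below shows one way a received level in a 1/3-octave band can be computed from a calibrated underwater pressure recording. The band centre, sample rate, and synthetic signal are illustrative and are not data from the experiments.

```python
# Band level (dB re 1 uPa^2) of a calibrated underwater pressure signal in a
# 1/3-octave band, demonstrated on a synthetic tone.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000
P_REF_SQ = (1e-6) ** 2                       # 1 uPa^2 reference for mean-square pressure

def third_octave_level(p, fc, fs=FS):
    """Level (dB re 1 uPa^2) of pressure signal p (Pa) in the 1/3-octave band at fc (Hz)."""
    lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)       # band edges
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, p)
    return 10 * np.log10(np.mean(band ** 2) / P_REF_SQ)

# Synthetic example: a 350 Hz tone of 1 Pa rms plus weak noise (about 120 dB re 1 uPa^2).
t = np.arange(0, 2.0, 1 / FS)
p = np.sqrt(2) * np.sin(2 * np.pi * 350 * t) + 0.01 * np.random.randn(t.size)

print(f"315 Hz band: {third_octave_level(p, 315):.1f} dB re 1 uPa^2")
```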

  7. Intentional changes in sound pressure level and rate: their impact on measures of respiration, phonation, and articulation.

    PubMed

    Dromey, C; Ramig, L O

    1998-10-01

    The purpose of the study was to compare the effects of changing sound pressure level (SPL) and rate on respiratory, phonatory, and articulatory behavior during sentence production. Ten subjects, 5 men and 5 women, repeated the sentence, "I sell a sapapple again," under 5 SPL and 5 rate conditions. From a multi-channel recording, measures were made of lung volume (LV), SPL, fundamental frequency (F0), semitone standard deviation (STSD), and upper and lower lip displacements and peak velocities. Loud speech led to increases in LV initiation, LV termination, F0, STSD, and articulatory displacements and peak velocities for both lips. Token-to-token variability in these articulatory measures generally decreased as SPL increased, whereas rate increases were associated with increased lip movement variability. LV excursion decreased as rate increased. F0 for the men and STSD for both genders increased with rate. Lower lip displacements became smaller for faster speech. The interspeaker differences in velocity change as a function of rate contrasted with the more consistent velocity performance across speakers for changes in SPL. Because SPL and rate change are targeted in therapy for dysarthria, the present data suggest directions for future research with disordered speakers.

  8. Offshore exposure experiments on cuttlefish indicate received sound pressure and particle motion levels associated with acoustic trauma

    NASA Astrophysics Data System (ADS)

    Solé, Marta; Sigray, Peter; Lenoir, Marc; van der Schaar, Mike; Lalander, Emilia; André, Michel

    2017-04-01

    Recent findings on cephalopods under laboratory conditions showed that exposure to artificial noise had a direct effect on the statocysts, the sensory organs responsible for their equilibrium and movements in the water column. The question remained of how much the consequent near-field particle motion from the tank walls contributed to triggering the trauma. Offshore controlled noise exposure experiments (CEE) on common cuttlefish (Sepia officinalis) were conducted at three different depths and distances from the source, and particle motion and sound pressure measurements were performed at each location. Scanning electron microscopy (SEM) revealed injuries in the statocysts, whose severity was quantified and found to be proportional to the distance to the transducer. These findings are the first evidence of cephalopod sensitivity to anthropogenic noise sources in their natural habitat. From the measured received power spectrum of the sweep, it was possible to determine that the animals were exposed to levels ranging from 139 to 142 dB re 1 μPa2 and from 139 to 141 dB re 1 μPa2, in the 1/3-octave bands centred at 315 Hz and 400 Hz, respectively. These results could therefore be considered a coherent threshold estimation of the noise levels that can trigger acoustic trauma in cephalopods.

  9. How is sound conducted to the cochlea in toothed whales?

    NASA Astrophysics Data System (ADS)

    Zosuls, Aleks; Mountain, David C.; Ketten, Darlene R.

    2015-12-01

    Toothed whales (Odontocetes) typically have small occluded ear canals and sea water has a characteristic impedance that is much more similar to the impedance of soft tissues of the head than is the case for the air-tissue interface in terrestrial mammals. This makes it plausible that significant acoustic energy is being transmitted to their middle ear by tissue conduction. In addition, some authors have proposed that sound reaches the cochlea via bone conduction rather than via the tympanic membrane. To address these issues, we have developed a method to measure stapes velocity in response to vibrational stimulation at arbitrary locations on heads and ears harvested from stranded animals. Stapes velocity was measured with a Laser Doppler Velocimeter at the footplate with the cochlea drained. In all species tested, the transfer function of stapes velocity referenced to actuator velocity showed a high-pass characteristic. The corner frequency varied with species and experiment between 4 kHz and 60 kHz. This is similar to what is seen in odontocete audiograms but the cutoff slope is typically steeper than in the audiograms. There was no indication of high frequency cutoff within our measurement range. Disrupting the ossicles and fat bodies affected the transfer functions.

  10. Underwater Sound: Deep-Ocean Propagation: Variations of temperature and pressure have great influence on the propagation of sound in the ocean.

    PubMed

    Frosch, R A

    1964-11-13

    The absorption of sound in sea water varies markedly with frequency, being much greater at high than at low frequencies. It is sufficiently small at frequencies below several kilocycles per second, however, to permit propagation to thousands of miles. Oceanographic factors produce variations in sound velocity with depth, and these variations have a strong influence on long-range propagation. The deep ocean is characterized by a strong channel, generally at a depth of 500 to 1500 meters. In addition to guided propagation in this channel, the velocity structure gives rise to strongly peaked propagation from surface sources to surface receivers 48 to 56 kilometers away, with strong shadow zones of weak intensity in between. The near-surface shadow zone, in the latter case, may be filled in by bottom reflections or near-surface guided propagation due to a surface isothermal layer. The near-surface shadow zones can be avoided with certainty only through locating sources and receivers deep in the ocean.

  11. Equivalent threshold sound pressure levels (ETSPL) for Sennheiser HDA 280 supra-aural audiometric earphones in the frequency range 125 Hz to 8000 Hz.

    PubMed

    Poulsen, Torben; Oakley, Sebastian

    2009-05-01

    Hearing threshold sound pressure levels were measured for the Sennheiser HDA 280 audiometric earphone. Hearing thresholds were measured for 25 normal-hearing test subjects at the 11 audiometric test frequencies from 125 Hz to 8000 Hz. Sennheiser HDA 280 is a supra-aural earphone that may be seen as a substitute for the classical Telephonics TDH 39. The results are given as the equivalent threshold sound pressure level (ETSPL) measured in an acoustic coupler specified in IEC 60318-3. The results are in good agreement with an independent investigation from PTB, Braunschweig, Germany. From acoustic laboratory measurements ETSPL values are calculated for the ear simulator specified in IEC 60318-1. Fitting of earphone and coupler is discussed. The data may be used for a future update of the RETSPL standard for supra-aural audiometric earphones, ISO 389-1.

  12. The effects of experimentally induced conductive hearing loss on spectral and temporal aspects of sound transmission through the ear.

    PubMed

    Eric Lupo, J; Koka, Kanthaiah; Thornton, Jennifer L; Tollin, Daniel J

    2011-02-01

    Conductive hearing loss (CHL) is known to produce hearing deficits, including deficits in sound localization ability. The differences in sound intensities and timing experienced between the two tympanic membranes are important cues to sound localization (ILD and ITD, respectively). Although much is known about the effect of CHL on hearing levels, little investigation has been conducted into the actual impact of CHL on sound location cues. This study investigated effects of CHL induced by earplugs on cochlear microphonic (CM) amplitude and timing and their corresponding effect on the ILD and ITD location cues. Acoustic and CM measurements were made in 5 chinchillas before and after earplug insertion, and again after earplug removal using pure tones (500 Hz to 24 kHz). ILDs in the unoccluded condition demonstrated position and frequency dependence where peak far-lateral ILDs approached 30 dB for high frequencies. Unoccluded ear ITD cues demonstrated positional and frequency dependence with increased ITD cue for both decreasing frequency (±420 μs at 500 Hz, ±310 μs for 1-4 kHz) and increasingly lateral sound source locations. Occlusion of the ear canal with foam plugs resulted in a mild, frequency-dependent conductive hearing loss of 10-38 dB (mean 31 ± 3.9 dB) leading to a concomitant frequency dependent increase in ILDs at all source locations. The effective ITDs increased in a frequency dependent manner with ear occlusion as a direct result of the acoustic properties of the plugging material, the latter confirmed via acoustical measurements using a model ear canal with varying volumes of acoustic foam. Upon ear plugging with acoustic foam, a mild CHL is induced. Furthermore, the CHL induced by acoustic foam results in substantial changes in the magnitudes of both the ITD and ILD cues to sound location. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Early Development and Orientation of the Acoustic Funnel Provides Insight into the Evolution of Sound Reception Pathways in Cetaceans

    PubMed Central

    Yamato, Maya; Pyenson, Nicholas D.

    2015-01-01

    Whales receive underwater sounds through a fundamentally different mechanism than their close terrestrial relatives. Instead of hearing through the ear canal, cetaceans hear through specialized fatty tissues leading to an evolutionarily novel feature: an acoustic funnel located anterior to the tympanic aperture. We traced the ontogenetic development of this feature in 56 fetal specimens from 10 different families of toothed (odontocete) and baleen (mysticete) whales, using X-ray computed tomography. We also charted ear ossification patterns through ontogeny to understand the impact of heterochronic developmental processes. We determined that the acoustic funnel arises from a prominent V-shaped structure established early in ontogeny, formed by the malleus and the goniale. In odontocetes, this V-formation develops into a cone-shaped funnel facing anteriorly, directly into intramandibular acoustic fats, which is likely functionally linked to the anterior orientation of sound reception in echolocation. In contrast, the acoustic funnel in balaenopterids rotates laterally, later in fetal development, consistent with a lateral sound reception pathway. Balaenids and several fossil mysticetes retain a somewhat anteriorly oriented acoustic funnel in the mature condition, indicating that a lateral sound reception pathway in balaenopterids may be a recent evolutionary innovation linked to specialized feeding modes, such as lunge-feeding. PMID:25760328

  14. Auricular Split-Thickness Skin Graft for Ear Canal Coverage.

    PubMed

    Haidar, Yarah M; Walia, Sartaaj; Sahyouni, Ronald; Ghavami, Yaser; Lin, Harrison W; Djalilian, Hamid R

    2016-12-01

    Split-thickness skin graft (STSG) continues to be the preferred means of external auditory canal (EAC) reconstruction. We thus sought to describe our experience using skin from the posterior aspect of the auricle (SPAA) as a donor site in EAC reconstruction. Grafts were, on average, 5 × 10 mm in size and obtained with a No. 10 blade after tumescence injection. The cases of 39 patients who underwent 41 procedures were retrospectively reviewed. Of the 38 patients with both 3- and 6-month follow-ups, no postoperative stenosis or bony exposure occurred. STSG from the SPAA can be a good option in EAC reconstruction. Total EAC/tympanic membrane coverage can be obtained with STSG from the SPAA. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  15. Speed-of-Sound Measurements in (Argon + Carbon Dioxide) over the Temperature Range from (275 to 500) K at Pressures up to 8 MPa.

    PubMed

    Wegge, Robin; McLinden, Mark O; Perkins, Richard A; Richter, Markus; Span, Roland

    2016-08-01

    The speed of sound of two (argon + carbon dioxide) mixtures was measured over the temperature range from (275 to 500) K with pressures up to 8 MPa utilizing a spherical acoustic resonator. The compositions of the gravimetrically prepared mixtures were (0.50104 and 0.74981) mole fraction carbon dioxide. The vibrational relaxation of pure carbon dioxide led to high sound absorption, which significantly impeded the sound-speed measurements on carbon dioxide and its mixtures; pre-condensation may have also affected the results for some measurements near the dew line. Thus, in contrast to the standard operating procedure for speed-of-sound measurements with a spherical resonator, non-radial resonances at lower frequencies were taken into account. Still, the data show a comparatively large scatter, and the usual repeatability of this general type of instrument could not be realized with the present measurements. Nonetheless, the average relative combined expanded uncertainty ( k = 2) in speed of sound ranged from (0.042 to 0.056)% for both mixtures, with individual state-point uncertainties increasing to 0.1%. These uncertainties are adequate for our intended purpose of evaluating thermodynamic models. The results are compared to a Helmholtz energy equation of state for carbon capture and storage applications; relative deviations of (-0.64 to 0.08)% for the (0.49896 argon + 0.50104 carbon dioxide) mixture, and of (-1.52 to 0.77)% for the (0.25019 argon + 0.74981 carbon dioxide) mixture were observed.

  16. Speed-of-Sound Measurements in (Argon + Carbon Dioxide) over the Temperature Range from (275 to 500) K at Pressures up to 8 MPa

    PubMed Central

    Wegge, Robin; McLinden, Mark O.; Perkins, Richard A.; Richter, Markus; Span, Roland

    2016-01-01

    The speed of sound of two (argon + carbon dioxide) mixtures was measured over the temperature range from (275 to 500) K with pressures up to 8 MPa utilizing a spherical acoustic resonator. The compositions of the gravimetrically prepared mixtures were (0.50104 and 0.74981) mole fraction carbon dioxide. The vibrational relaxation of pure carbon dioxide led to high sound absorption, which significantly impeded the sound-speed measurements on carbon dioxide and its mixtures; pre-condensation may have also affected the results for some measurements near the dew line. Thus, in contrast to the standard operating procedure for speed-of-sound measurements with a spherical resonator, non-radial resonances at lower frequencies were taken into account. Still, the data show a comparatively large scatter, and the usual repeatability of this general type of instrument could not be realized with the present measurements. Nonetheless, the average relative combined expanded uncertainty (k = 2) in speed of sound ranged from (0.042 to 0.056)% for both mixtures, with individual state-point uncertainties increasing to 0.1%. These uncertainties are adequate for our intended purpose of evaluating thermodynamic models. The results are compared to a Helmholtz energy equation of state for carbon capture and storage applications; relative deviations of (−0.64 to 0.08)% for the (0.49896 argon + 0.50104 carbon dioxide) mixture, and of (−1.52 to 0.77)% for the (0.25019 argon + 0.74981 carbon dioxide) mixture were observed. PMID:27458321

  17. Assessment and evaluation of noise controls on roof bolting equipment and a method for predicting sound pressure levels in underground coal mining

    NASA Astrophysics Data System (ADS)

    Matetic, Rudy J.

    Over-exposure to noise remains a widespread and serious health hazard in the U.S. mining industries despite 25 years of regulation. Every day, 80% of the nation's miners go to work in an environment where the time weighted average (TWA) noise level exceeds 85 dBA, and more than 25% of miners are exposed to a TWA noise level that exceeds 90 dBA, the permissible exposure limit (PEL). Additionally, MSHA coal noise sample data collected from 2000 to 2002 show that 65% of the equipment whose operators exceeded 100% noise dosage comprises only seven types of machines: auger miners, bulldozers, continuous miners, front end loaders, roof bolters, shuttle cars (electric), and trucks. The MSHA data also indicate that the roof bolter ranks third among all equipment, and second among underground coal equipment, whose operators exceed 100% dosage. A research program was implemented to: (1) determine, characterize, and measure the sound power levels radiated by a roof bolting machine under differing drilling configurations (thrust, rotational speed, penetration rate, etc.) and differing drilling methods in high-compressive-strength rock media (>20,000 psi); this characterization of the sound power levels from laboratory testing provided the mining industry with empirical data on how differing noise control technologies (drilling configurations and drilling methods) reduce sound power level emissions from a roof bolting machine; (2) combine and correlate the empirical data into a single, statistically valid equation, giving the mining industry a tool to predict the overall sound power level of a roof bolting machine for any drilling configuration and drilling method used in industry; and (3) provide the mining industry with several approaches to predict or determine sound pressure levels in an underground coal mine utilizing laboratory test results from a roof bolting
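
    Where the laboratory sound power levels are carried over to an in-mine sound pressure prediction, one generic route is the classical diffuse-field relation between sound power level and sound pressure level. The sketch below is a textbook illustration of that idea, not the statistically derived prediction equation developed in this work; the directivity factor, room constant and levels are placeholder assumptions.

        # Minimal sketch: predicting a sound pressure level (Lp, dB) at distance r from a
        # source of known sound power level (Lw, dB) in a semi-reverberant space, using the
        # classical relation Lp = Lw + 10*log10( Q/(4*pi*r^2) + 4/R ).
        import math

        def sound_pressure_level(Lw_dB, r_m, Q=2.0, room_constant_m2=50.0):
            return Lw_dB + 10.0 * math.log10(Q / (4.0 * math.pi * r_m ** 2)
                                             + 4.0 / room_constant_m2)

        # Placeholder values: 115 dB sound power, operator position 1.5 m from the drill head.
        print(f"Lp ~ {sound_pressure_level(115.0, 1.5):.1f} dB")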

  18. Differences in chewing sounds of dry-crisp snacks by multivariate data analysis

    NASA Astrophysics Data System (ADS)

    De Belie, N.; Sivertsvik, M.; De Baerdemaeker, J.

    2003-09-01

    Chewing sounds of different types of dry-crisp snacks (two types of potato chips, prawn crackers, cornflakes and low-calorie snacks from extruded starch) were analysed to assess differences in sound emission patterns. The emitted sounds were recorded by a microphone placed over the ear canal. The first bite and the first subsequent chew were selected from the time signal, and a fast Fourier transformation provided the power spectra. Different multivariate analysis techniques were used for classification of the snack groups. These included principal component analysis (PCA) and unfold partial least-squares (PLS) algorithms, as well as multi-way techniques such as three-way PLS, three-way PCA (Tucker3), and parallel factor analysis (PARAFAC) on the first bite and subsequent chew. The models were evaluated by calculating the classification errors and the root mean square error of prediction (RMSEP) for independent validation sets. The logarithm of the power spectra obtained from the chewing sounds could be used successfully to distinguish the different snack groups. When different chewers were used, recalibration of the models was necessary. Multi-way models distinguished between chewing sounds of different snack groups better than PCA applied to the bite or chew separately, and better than unfold PLS. Of all the three-way models applied, N-PLS with three components showed the best classification capabilities, resulting in classification errors of 14-18%. Most of the incorrect classifications were due to one type of potato chips with a very irregular shape, resulting in a wide variation of the emitted sounds.
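
    The signal-processing front end described above (first-bite segment, FFT power spectrum, logarithm, then PCA/PLS) can be sketched in a few lines of Python; the sampling rate, segment length and synthetic signal below are placeholders rather than study parameters, and the PCA is computed directly from the SVD.

        # Minimal sketch: log power spectrum of a first-bite segment as the feature vector,
        # followed by PCA (via the SVD) on a small matrix of such feature vectors.
        import numpy as np

        fs = 44100                                   # assumed sampling rate (Hz)
        rng = np.random.default_rng(0)
        bite = rng.standard_normal(int(0.2 * fs))    # stand-in for a 200-ms first-bite segment

        spectrum = np.fft.rfft(bite * np.hanning(bite.size))
        log_power = np.log10(np.abs(spectrum) ** 2 + 1e-12)            # feature vector

        X = np.vstack([log_power, 0.9 * log_power, 1.1 * log_power])   # rows = recordings
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        scores = Xc @ Vt[:2].T                       # first two principal-component scores
        print(scores.shape)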

  19. Fluid Shifts: Otoacoustical Emission Changes in Response to Posture and Lower Body Negative Pressure

    NASA Technical Reports Server (NTRS)

    Melgoza, R.; Kemp, D.; Ebert, D.; Danielson, R.; Stenger, M.; Hargens, A.; Dulchavsky, S.

    2016-01-01

    INTRODUCTION: The purpose of the NASA Fluid Shifts Study is to characterize fluid distribution and compartmentalization associated with long-duration spaceflight and to correlate these findings with vision changes and other elements of the visual impairment and intracranial pressure (VIIP) syndrome. VIIP signs and symptoms, as well as postflight lumbar puncture data, suggest that elevated intracranial pressure (ICP) may be associated with spaceflight-induced cephalad fluid shifts, but this hypothesis has not been tested. Due to the invasive nature of direct measures of ICP, a noninvasive technique of monitoring ICP is desired for use during spaceflight. The phase angle and amplitude of otoacoustic emissions (OAEs) have been shown to be sensitive to posture change and ICP (1, 2), therefore use of OAEs is an attractive option. OAEs are low-level sounds produced by the sensory cells of the cochlea in response to auditory stimulation. These sounds travel peripherally from the cochlea, through the oval window, to the ear canal where they can be recorded. OAE transmission is sensitive to changes in the stiffness of the oval window, occurring as a result of changes in cochlear pressure. Increased stiffness of the oval window largely affects the transmission of sound from the cochlea at frequencies between 800 Hz and 1600 Hz. OAEs can be self-recorded in the laboratory or on the ISS using a handheld device. Our primary objectives regarding OAE measures in this experiment were to 1) validate this method during preflight testing of each crewmember (while sitting, supine and in head-down tilt position), and 2) determine if OAE measures (and presumably ICP) are responsive to lower body negative pressure and to spaceflight. METHODS: Distortion-product otoacoustic emissions (DPOAEs) and transient evoked otoacoustic emissions (TEOAEs) were recorded preflight using the Otoport Advance OAE system (Otodynamics Ltd., Hatfield, UK). Data were collected in four conditions (seated

  20. Differential effects of suppressors on hazardous sound pressure levels generated by AR-15 rifles: Considerations for recreational shooters, law enforcement, and the military.

    PubMed

    Lobarinas, Edward; Scott, Ryan; Spankovich, Christopher; Le Prell, Colleen G

    2016-01-01

    Firearm discharges produce hazardous levels of impulse noise that can lead to permanent hearing loss. In the present study, we evaluated the effects of suppression, ammunition, and barrel length on AR-15 rifles. Sound levels were measured left/right of a user's head, and 1-m left of the muzzle, per MIL-STD-1474-D, under both unsuppressed and suppressed conditions. Nine commercially available AR-15 rifles and 14 suppressors were used. Suppressors significantly decreased peak dB SPL at the 1-m location and the left ear location. However, under most rifle/ammunition conditions, levels remained above 140 dB peak SPL near a user's right ear. In a subset of conditions, subsonic ammunition produced values near or below 140 dB peak SPL. Overall suppression ranged from 7-32 dB across conditions. These data indicate that (1) suppressors reduce discharge levels to 140 dB peak SPL or below in only a subset of AR-15 conditions, (2) shorter barrel length and use of muzzle brake devices can substantially increase exposure level for the user, and (3) there are significant left/right ear sound pressure differences under suppressed conditions as a function of the AR-15 direct impingement design that must be considered during sound measurements to fully evaluate overall efficacy.
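
    As a reference for the peak levels quoted above, the sketch below converts a peak sound pressure to peak dB SPL (re 20 µPa) and expresses suppressor performance as the level difference between unsuppressed and suppressed conditions; the pressure values are illustrative placeholders, not measured data.

        # Minimal sketch: peak dB SPL from a peak pressure, and suppression in dB.
        import math

        P_REF = 20e-6  # reference pressure, Pa

        def peak_db_spl(p_peak_pa):
            return 20.0 * math.log10(p_peak_pa / P_REF)

        unsuppressed = peak_db_spl(400.0)   # hypothetical peak pressure, Pa
        suppressed = peak_db_spl(60.0)      # hypothetical peak pressure with suppressor, Pa
        print(f"unsuppressed {unsuppressed:.1f} dB, suppressed {suppressed:.1f} dB, "
              f"suppression {unsuppressed - suppressed:.1f} dB")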

  1. Breath sounds

    MedlinePlus

    The lung sounds are best heard with a stethoscope. This is called auscultation. Normal lung sounds occur ... the bottom of the rib cage. Using a stethoscope, the doctor may hear normal breathing sounds, decreased ...

  2. Speed of Sound in Aqueous Solutions at sub-GPa Pressures: a New Experiment to Unveil the Properties of Extra-Terrestrial Oceans

    NASA Astrophysics Data System (ADS)

    Bollengier, O.; Brown, J. M.; Vance, S.; Shaw, G. H.

    2015-12-01

    Geophysical data from the Galileo and Cassini-Huygens missions are consistent with the presence of aqueous subsurface oceans in Ganymede, Callisto and Titan, the largest icy satellites of the solar system. To understand the history and present state of these moons, the next generation of evolution models will require an accurate description of the properties of these liquid layers to predict the phase boundaries, heat transports and chemical exchanges within them. Sound speed measurements as a function of pressure and temperature allow for the reconstruction of the Gibbs free energy surface of a phase, which in turn gives access to the desired properties (chemical potential, density, heat capacity...). However, such data are still scarce for aqueous solutions bearing Na+, Mg2+, Cl- and SO42- ions (the major ions expected in extra-terrestrial oceans) at the high pressures and low temperatures expected for water inside these moons (up to 1.5 GPa for Ganymede, down to freezing temperatures). For pure water, IAPWS accuracy for sound speeds is given to 0.3% above 0.4 GPa. MgSO4 aqueous solutions have been explored to 0.7 GPa with a precision limited to about 0.5%. Most other aqueous solutions bearing any combination of these four ions have not been explored at all above a few hundred MPa. To acquire new high-precision sound speeds in aqueous solutions of various compositions, we set up a new experimental system working in the 0 - 0.7 GPa pressure range and 240 - 350 K temperature range. The device consists of an oil-pressurized steel vessel enclosing a titanium alloy rod supporting the sample and a sealing bellows. A transducer at the top end of the titanium rod generates ultrasonic waves and collects the series of subsequent reflections. Preliminary tests with pure water demonstrate a precision of 0.02% and an accuracy within 0.1% of IAPWS over our whole pressure range. Revision of the properties of pure water and H2O-MgSO4 solutions up to 0.7 GPa along with the first data in the H2O-MgCl2
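
    The route from measured sound speeds to an equation of state can be illustrated with the standard thermodynamic relation (d rho/dP)_T = 1/c^2 + T*alpha^2/cp, integrated along an isotherm. The sketch below is a minimal illustration of that integration; the sound-speed model, expansivity and heat capacity are crude placeholder values for pure water, not data from the new apparatus.

        # Minimal sketch: integrate (d rho / dP)_T = 1/c^2 + T*alpha^2/cp along an isotherm
        # to turn a sound-speed profile c(P) into a density profile.
        import numpy as np

        T = 298.15              # K
        alpha = 2.6e-4          # 1/K, assumed thermal expansivity (held constant)
        cp = 4180.0             # J/(kg K), assumed isobaric heat capacity (held constant)
        pressures = np.linspace(0.1e6, 700e6, 2000)       # Pa
        c = 1500.0 + 1.6e-6 * (pressures - 0.1e6)         # placeholder c(P), m/s

        rho = 997.0                                       # starting density at 0.1 MPa, kg/m^3
        for i in range(len(pressures) - 1):
            dP = pressures[i + 1] - pressures[i]
            rho += (1.0 / c[i] ** 2 + T * alpha ** 2 / cp) * dP
        print(f"density at 0.7 GPa ~ {rho:.0f} kg/m^3")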

  3. Prevalence of high frequency hearing loss consistent with noise exposure among people working with sound systems and general population in Brazil: A cross-sectional study

    PubMed Central

    El Dib, Regina P; Silva, Edina MK; Morais, José F; Trevisani, Virgínia FM

    2008-01-01

    Background Music is ever present in our daily lives, establishing a link between humans and the arts through the senses and pleasure. Sound technicians are the link between musicians and audiences or consumers. Recently, general concern has arisen regarding occurrences of hearing loss induced by noise from excessively amplified sound-producing activities within leisure and professional environments. Sound technicians' activities expose them to the risk of hearing loss, and consequently put at risk their quality of life, the quality of the musical product and consumers' hearing. The aim of this study was to measure the prevalence of high frequency hearing loss consistent with noise exposure among sound technicians in Brazil and compare this with a control group without occupational noise exposure. Methods This was a cross-sectional study comparing 177 participants in two groups: 82 sound technicians and 95 controls (non-sound technicians). A questionnaire on music listening habits and associated complaints was administered, and data were gathered regarding the professionals' number of working hours per day and both groups' hearing complaints and presence of tinnitus. The participants' ear canals were visually inspected using an otoscope. Hearing assessments were performed (tonal and speech audiometry) using a portable digital AD 229 E audiometer funded by FAPESP. Results There was no statistically significant difference between the sound technicians and controls regarding age and gender. Thus, the study sample was homogeneous and would be unlikely to lead to bias in the results. A statistically significant difference in hearing loss was observed between the groups: 50% among the sound technicians and 10.5% among the controls. The difference could be attributed to high sound levels. Conclusion The sound technicians presented a higher prevalence of high frequency hearing loss consistent with noise exposure than did the general population, although the possibility of residual

  4. A pressure plate study on fore and hindlimb loading and the association with hoof contact area in sound ponies at the walk and trot.

    PubMed

    Oosterlinck, M; Pille, F; Back, W; Dewulf, J; Gasthuys, F

    2011-10-01

    The aim of this study was to evaluate the association between fore- and hind-hoof contact area and limb loading. Data from a previous study on forelimb loading and symmetry were compared with data on hindlimb kinetics, and the fore- and hind-hoof contact area at the walk and trot was evaluated. Five sound ponies, selected for symmetrical feet, were walked and trotted over a pressure plate embedded in a custom-made runway. The hindlimb peak vertical force (PVF) and vertical impulse (VI) were found to be significantly lower than in the forelimb, whereas their high symmetry ratios (>95%) did not show a significant difference from forelimb data. Hindlimb PVF in ponies was found to be slightly higher when compared to data reported for horses even though the ponies moved at a similar or lower relative velocity. The contact area had low intra-individual variability and was significantly smaller in the hind- than in the fore-hooves. A larger contact area was significantly associated with lower peak vertical pressure (PVP) but higher PVF and VI. No significant differences between left and right sides were found for contact area or loading variables. Pressure plate measurements demonstrated a significant association between hoof contact area and limb loading, in addition to intrinsic differences between fore and hindlimb locomotor function. The pressure plate provides the clinician with a tool to quantify simultaneously contralateral differences in hoof contact area and limb loading. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. A Preliminary Investigation of the Air-Bone Gap: Changes in Intracochlear Sound Pressure With Air- and Bone-conducted Stimuli After Cochlear Implantation.

    PubMed

    Banakis Hartl, Renee M; Mattingly, Jameson K; Greene, Nathaniel T; Jenkins, Herman A; Cass, Stephen P; Tollin, Daniel J

    2016-10-01

    A cochlear implant electrode within the cochlea contributes to the air-bone gap (ABG) component of postoperative changes in residual hearing after electrode insertion. Preservation of residual hearing after cochlear implantation has gained importance as simultaneous electric-acoustic stimulation allows for improved speech outcomes. Postoperative loss of residual hearing has previously been attributed to sensorineural changes; however, presence of increased postoperative ABG remains unexplained and could result in part from altered cochlear mechanics. Here, we sought to investigate changes to these mechanics via intracochlear pressure measurements before and after electrode implantation to quantify the contribution to postoperative ABG. Human cadaveric heads were implanted with titanium fixtures for bone conduction transducers. Velocities of stapes capitulum and cochlear promontory between the two windows were measured using single-axis laser Doppler vibrometry and fiber-optic sensors measured intracochlear pressures in scala vestibuli and tympani for air- and bone-conducted stimuli before and after cochlear implant electrode insertion through the round window. Intracochlear pressures revealed only slightly reduced responses to air-conducted stimuli consistent with previous literature. No significant changes were noted to bone-conducted stimuli after implantation. Velocities of the stapes capitulum and the cochlear promontory to both stimuli were stable after electrode placement. Presence of a cochlear implant electrode causes alterations in intracochlear sound pressure levels to air, but not bone, conducted stimuli and helps to explain changes in residual hearing noted clinically. These results suggest the possibility of a cochlear conductive component to postoperative changes in hearing sensitivity.

  6. Pressure Sounding of the Middle Atmosphere from ATMOS Solar Occultation Measurements of Atmospheric CO(sub 2) Absorption Lines

    NASA Technical Reports Server (NTRS)

    Abrams, M.; Gunson, M.; Lowes, L.; Rinsland, C.; Zander, R.

    1994-01-01

    A method for retrieving the atmospheric pressure corresponding to the tangent point of an infrared spectrum recorded in the solar occultation mode is described and applied to measurements made by the Atmospheric Trace Molecule Spectroscopy (ATMOS) Fourier transform spectrometer. Tangent pressure values are inferred from measurements of isolated CO(sub 2) lines with temperature-insensitive intensities. Tangent pressures are determined with a spectroscopic precision of 1-3%, corresponding to a tangent point height precision, depending on the scale height, of 70-210 meters.

  7. Alloying effects of Ni, Si, and S on the phase diagram and sound velocities of Fe under high pressures and high temperatures

    NASA Astrophysics Data System (ADS)

    Lin, J.; Fei, Y.; Sturhahn, W.; Zhao, J.; Mao, H.; Hemley, R.

    2004-05-01

    Iron-nickel is the most abundant constituent of the Earth's core. The amount of Ni in the core is about 5.5 wt%. Geophysical and cosmochemical studies suggest that the Earth's outer core also contains approximately 10% of light element(s) and a certain amount of light element(s) may be present in the inner core. Si and S are believed to be alloying light elements in iron-rich planetary cores such as those of the Earth and Mars. Therefore, understanding the alloying effects of Ni, Si, and S on the phase diagram and physical properties of Fe under core conditions is crucial for geophysical and geochemical models of planetary interiors. The addition of Ni and Si does not appreciably change the compressibility of hcp-Fe under high pressures. Studies of the phase relations of Fe and Fe-Ni alloys indicate that Fe with up to 10 wt% Ni is likely to be in the hcp structure under inner core conditions. On the other hand, adding Si to Fe strongly stabilizes the bcc structure to much higher pressures and temperatures (Lin et al., 2002). We have also studied the sound velocities and magnetic properties of Fe0.92Ni0.08, Fe0.85Si0.15, and Fe3S alloys with nuclear resonant inelastic x-ray scattering and nuclear forward scattering up to 106 GPa, 70 GPa, and 57 GPa, respectively. The sound velocities of the alloys are obtained from the measured partial phonon density of states for 57Fe incorporated in the alloys. Addition of Ni slightly decreases the VP and VS of Fe under high pressures (Lin et al., 2003). Si or S alloyed with Fe increases the VP and VS under high pressures, which provides a better match to seismological data of the Earth's core. We note that the increase in the VP and VS of Fe0.85Si0.15 and Fe3S is mainly attributable to the density decrease from adding Si and S to iron. Time spectra of the nuclear forward scattering reveal that the most iron-rich sulfide, Fe3S, undergoes a magnetic to non-magnetic transition at approximately 18 GPa from a low-pressure magnetically

  8. Testing a Method for Quantifying the Output of Implantable Middle Ear Hearing Devices

    PubMed Central

    Rosowski, J.J.; Chien, W.; Ravicz, M.E.; Merchant, S.N.

    2008-01-01

    This report describes tests of a standard practice for quantifying the performance of implantable middle ear hearing devices (also known as implantable hearing aids). The standard and these tests were initiated by the Food and Drug Administration of the United States Government. The tests involved measurements on two hearing devices, one commercially available and the other home built, that were implanted into ears removed from human cadavers. The tests were conducted to investigate the utility of the practice and its outcome measures: the equivalent ear canal sound pressure transfer function that relates electrically driven middle ear velocities to the equivalent sound pressure needed to produce those velocities, and the maximum effective ear canal sound pressure. The practice calls for measurements in cadaveric ears in order to account for the varied anatomy and function of different human middle ears. PMID:17406105
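
    The outcome measure named in the practice can be sketched as follows: the equivalent ear-canal sound pressure is the electrically driven stapes velocity divided by the acoustically measured velocity-per-pressure transfer function of the same ear. The arrays below are placeholder values, not measurements from the tested devices.

        # Minimal sketch: equivalent ear-canal sound pressure for an implantable device.
        import numpy as np

        freqs = np.array([250.0, 1000.0, 4000.0])           # Hz
        H_acoustic = np.array([1e-4, 3e-4, 1e-4])            # (m/s)/Pa, sound-driven velocity/pressure
        v_electric = np.array([2e-4, 9e-4, 1.5e-4])          # m/s, electrically driven velocity

        p_equivalent = v_electric / H_acoustic               # Pa
        L_equivalent = 20 * np.log10(p_equivalent / 20e-6)   # equivalent dB SPL
        for f, L in zip(freqs, L_equivalent):
            print(f"{f:6.0f} Hz: equivalent ear-canal pressure {L:5.1f} dB SPL")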

  9. The Contribution of Head Movement to the Externalization and Internalization of Sounds

    PubMed Central

    Brimijoin, W. Owen; Boyd, Alan W.; Akeroyd, Michael A.

    2013-01-01

    Background When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free-field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation matches that of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization. Methodology/Principal Findings We performed two experiments: 1) Using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners’ head movements. 2) Using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses. Conclusions/Significance Head movements play a significant role in the externalization of sound sources. These findings imply tight integration between binaural cues and self motion cues and underscore the importance of self motion for spatial auditory

  10. Deflation opening pressure of the eustachian tube.

    PubMed

    Cohen, D

    1989-03-01

    Measurements derived from tests of the performance of the eustachian tube (ET) under a variety of normal and pathologic conditions are widely diffuse and overlap considerably. In this survey, the deflation opening pressure (DOP) of the ET was tested in 31 patients suffering either from recurrent otitis (these patients had ventilating tubes inserted) or from chronic otitis media. Oxygen was deflated from the external ear canal, through the middle ear to the pharyngeal end of the ET. The DOP was the pressure needed for the passing of the oxygen. This pressure was usually between 100 to 200 mm H2O. No difference was found in the DOP between infants and adults or between discharging ears and dry ones. A second measurement was obtained through measuring the deflation flow pressure (DFP) required for the continuous passage of oxygen through the ET. The DFP was less than the DOP by approximately 20 to 60 mm H2O, and again no difference was noted between age groups or between infected and noninfected ears. It was concluded that DOP and DFP measurements of the ET are similar in a variety of conditions and therefore cannot indicate whether the ET is normally or abnormally functioning. The existence of a linear connection between the health of the ET and its performance is not proven; hence the role of the ET in predicting the likely outcome of tympanoplasty should be assessed within a different context.

  11. Analysis of masking effects on speech intelligibility with respect to moving sound stimulus

    NASA Astrophysics Data System (ADS)

    Chen, Chiung Yao

    2004-05-01

    The purpose of this study is to compare the degree to which speech is disturbed by an immovable noise source and by an apparently moving one (AMN). In studies of sound localization, we found that source-directional sensitivity (SDS) is closely associated with the magnitude of the interaural cross correlation (IACC). Ando et al. [Y. Ando, S. H. Kang, and H. Nagamatsu, J. Acoust. Soc. Jpn. (E) 8, 183-190 (1987)] reported that the correlation of neural potentials between the left and right inferior colliculus along the auditory pathway is consistent with the cross-correlation function of the sound amplitudes arriving at the two ear-canal entrances. We assume that the degree of disturbance caused by an apparently moving noise source probably differs from that caused by a source fixed in front of the listener at a constant distance in a free field (no reflections). We then found that a moving source and a fixed source of 1/3-octave narrow-band noise centered at 2 kHz influence speech intelligibility differently. However, the relation between the moving speed and the masking effect on speech intelligibility remained uncertain.
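
    The IACC magnitude referred to above can be sketched as the maximum of the normalized cross-correlation between left- and right-ear signals over interaural delays of about +/-1 ms; the two-channel signal below is synthetic, not taken from the experiment.

        # Minimal sketch: interaural cross-correlation (IACC) magnitude.
        import numpy as np

        fs = 48000
        rng = np.random.default_rng(1)
        left = rng.standard_normal(fs // 2)
        right = np.roll(left, 24) + 0.3 * rng.standard_normal(fs // 2)   # delayed, noisy copy

        max_lag = int(1e-3 * fs)                                         # +/- 1 ms
        denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
        iacc = max(abs(np.sum(left * np.roll(right, lag)) / denom)
                   for lag in range(-max_lag, max_lag + 1))
        print(f"IACC = {iacc:.2f}")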

  12. Numerical calculation of listener-specific head-related transfer functions and sound localization: Microphone model and mesh discretization

    PubMed Central

    Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang

    2015-01-01

    Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method on the geometry of a listener’s head and pinnae. The calculation results are defined by geometrical, numerical, and acoustical parameters like the microphone used in acoustic measurements. The scope of this study was to estimate requirements on the size and position of the microphone model and on the discretization of the boundary geometry as triangular polygon mesh for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, the localization performance in sagittal planes, however, degraded for larger AELs with the geometrical error as dominant factor. A microphone position at an arbitrary position at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded a localization performance similar to or better than observed with acoustically measured HRTFs. PMID:26233020

  13. Pure-Tone Audiometry With Forward Pressure Level Calibration Leads to Clinically-Relevant Improvements in Test-Retest Reliability.

    PubMed

    Lapsley Miller, Judi A; Reed, Charlotte M; Robinson, Sarah R; Perez, Zachary D

    2018-02-21

    Clinical pure-tone audiometry is conducted using stimuli delivered through supra-aural headphones or insert earphones. The stimuli are calibrated in an acoustic (average ear) coupler. Deviations in individual-ear acoustics from the coupler acoustics affect test validity, and variations in probe insertion and headphone placement affect both test validity and test-retest reliability. Using an insert earphone designed for otoacoustic emission testing, which contains a microphone and loudspeaker, an individualized in-the-ear calibration can be calculated from the ear-canal sound pressure measured at the microphone. However, the total sound pressure level (SPL) measured at the microphone may be affected by standing-wave nulls at higher frequencies, producing errors in stimulus level of up to 20 dB. An alternative is to calibrate using the forward pressure level (FPL) component, which is derived from the total SPL using a wideband acoustic immittance measurement, and represents the pressure wave incident on the eardrum. The objective of this study is to establish test-retest reliability for FPL calibration of pure-tone audiometry stimuli, compared with in-the-ear and coupler sound pressure calibrations. The authors compared standard audiometry using a modern clinical audiometer with TDH-39P supra-aural headphones calibrated in a coupler to a prototype audiometer with an ER10C earphone calibrated three ways: (1) in-the-ear using the total SPL at the microphone, (2) in-the-ear using the FPL at the microphone, and (3) in a coupler (all three are derived from the same measurement). The test procedure was similar to that commonly used in hearing-conservation programs, using pulsed-tone test frequencies at 0.5, 1, 2, 3, 4, 6, and 8 kHz, and an automated modified Hughson-Westlake audiometric procedure. Fifteen adult human participants with normal to mildly-impaired hearing were selected, and one ear from each was tested. Participants completed 10 audiograms on each system, with
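
    The forward pressure level idea can be sketched as follows: the total pressure at the probe microphone is the sum of a forward-going wave and its reflection, so with the pressure reflectance R (obtained from a wideband acoustic immittance measurement) the forward component is P_total / (1 + R). The complex values below are placeholders, not calibration data.

        # Minimal sketch: forward pressure level (FPL) from total pressure and reflectance.
        import numpy as np

        P_REF = 20e-6                          # Pa
        p_total = 0.02 * np.exp(1j * 0.3)      # complex total pressure at the microphone, Pa (assumed)
        R = 0.6 * np.exp(-1j * 2.1)            # assumed pressure reflectance at this frequency

        p_forward = p_total / (1.0 + R)
        spl_total = 20 * np.log10(abs(p_total) / P_REF)
        fpl = 20 * np.log10(abs(p_forward) / P_REF)
        print(f"total SPL {spl_total:.1f} dB, FPL {fpl:.1f} dB")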

  14. Characterization of the Ignition Over-Pressure/Sound Suppression Water in the Space Launch System Mobile Launcher Using Volume of Fluid Modeling

    NASA Technical Reports Server (NTRS)

    West, Jeff

    2015-01-01

    The Space Launch System (SLS) Vehicle consists of a Core Stage with four RS-25 engines and two Solid Rocket Boosters (SRBs). This vehicle is launched from the Launchpad using a Mobile Launcher (ML), which supports the SLS vehicle until its liftoff from the ML under its own power. The combination of the four RS-25 engines and two SRBs generates a significant Ignition Over-Pressure (IOP) and acoustic sound environment. One of the mitigations of these environments is the Ignition Over-Pressure/Sound Suppression (IOP/SS) subsystem installed on the ML. This system consists of six water nozzles located parallel to and 24 inches downstream of each SRB nozzle exit plane, as well as 16 water nozzles located parallel to and 53 inches downstream of the RS-25 nozzle exit plane. During launch of the SLS vehicle, water is ejected through each water nozzle to reduce the intensity of the transient pressure environment imposed upon the SLS vehicle. While required for the mitigation of the transient pressure environment on the SLS vehicle, the IOP/SS subsystem interacts (possibly adversely) with other systems located on the Launch Pad. One of the other systems that the IOP/SS water is anticipated to interact with is the Hydrogen Burn-Off Igniter System (HBOI). The HBOI system's purpose is to ignite the unburned hydrogen/air mixture that develops in and around the nozzles of the RS-25 engines during engine start. Due to the close proximity of the water system to the HBOI system, the presence of the IOP/SS may degrade the effectiveness of the HBOI system. Other components that the IOP/SS water may interact with adversely are the RS-25 engine nozzles and the SRB nozzles. The anticipated adverse interactions are significant wetting of the RS-25 nozzles, resulting in a substantial weight of ice forming, and a significant amount of water upstream of the SRB nozzle exit plane inside the nozzle itself, posing significant additional blockage of the effluent that exits the nozzle

  15. A preliminary investigation of the air-bone gap: Changes in intracochlear sound pressure with air- and bone-conducted stimuli after cochlear implantation

    PubMed Central

    Banakis Hartl, Renee M.; Mattingly, Jameson K.; Greene, Nathaniel T.; Jenkins, Herman A.; Cass, Stephen P.; Tollin, Daniel J.

    2016-01-01

    Hypothesis A cochlear implant electrode within the cochlea contributes to the air-bone gap (ABG) component of postoperative changes in residual hearing after electrode insertion. Background Preservation of residual hearing after cochlear implantation has gained importance as simultaneous electric-acoustic stimulation allows for improved speech outcomes. Postoperative loss of residual hearing has previously been attributed to sensorineural changes; however, presence of increased postoperative air-bone gap remains unexplained and could result in part from altered cochlear mechanics. Here, we sought to investigate changes to these mechanics via intracochlear pressure measurements before and after electrode implantation to quantify the contribution to postoperative air-bone gap. Methods Human cadaveric heads were implanted with titanium fixtures for bone conduction transducers. Velocities of stapes capitulum and cochlear promontory between the two windows were measured using single-axis laser Doppler vibrometry and fiber-optic sensors measured intracochlear pressures in scala vestibuli and tympani for air- and bone-conducted stimuli before and after cochlear implant electrode insertion through the round window. Results Intracochlear pressures revealed only slightly reduced responses to air-conducted stimuli consistent with prior literature. No significant changes were noted to bone-conducted stimuli after implantation. Velocities of the stapes capitulum and the cochlear promontory to both stimuli were stable following electrode placement. Conclusion Presence of a cochlear implant electrode causes alterations in intracochlear sound pressure levels to air, but not bone, conducted stimuli and helps to explain changes in residual hearing noted clinically. These results suggest the possibility of a cochlear conductive component to postoperative changes in hearing sensitivity. PMID:27579835

  16. Nonlinear aspects of infrasonic pressure transfer into the perilymph.

    PubMed

    Krukowski, B; Carlborg, B; Densert, O

    1980-06-01

    The perilymphatic pressure was studied in response to various low frequency pressure changes in the ear canal. The pressure transfer was analysed and found to be nonlinear in many aspects. The pressure response was found to contain two time constants representing the inner ear pressure regulating mechanisms. The time constants showed an asymmetry in response to positive and negative going inputs--the effects to some extent proportional to input levels. Further nonlinearities were found when infrasonic sine waves were applied to the ear. Harmonic distortion and modulation appeared. When short bursts of infrasound were introduced a clear d.c. shift was observed as a consequence of an asymmetry in the response to positive and negative going pressure inputs. A temporary change in mean perilymphatic pressure was thus achieved and continued throughout the duration of the signal. At very low frequencies a distinct phase shift was detected in the sine waves. This appeared as a phase lead, breaking the continuity of the output sine wave.

  17. Continuous 24-hour measurement of middle ear pressure.

    PubMed

    Tideholm, B; Jönsson, S; Carlborg, B; Welinder, R; Grenner, J

    1996-07-01

    A new method was developed for continuous measurement of the middle ear pressure during a 24-h period. The equipment consisted of a piezo-electric pressure device and a digital memory. To allow continuous pressure recordings during normal every-day activities, the equipment was made light and portable. The measurement accuracy of the equipment as well as the base-line and temperature stability were tested and found to meet our requirements satisfactorily. In 4 volunteers with different middle ear conditions, a small perforation was made through the tympanic membrane. A rubber stopper containing a small polyethylene tube was fitted into the external ear canal. Tubal function tests were made to establish the equipment's ability to monitor fast pressure changes. The tests agreed well with other methods of direct pressure measurement. The equipment was carried by the volunteers for 24 h to monitor any slow or rapid dynamic pressure changes in the middle ear. Four continuous 24-h measurements are presented. The method was found to be suitable for valid measurements of dynamic pressure changes in the middle ear during normal every-day activities. It may become a useful instrument in the search for a better understanding of the development of chronic middle ear disease.

  18. Thermoelastic properties of liquid Fe-C revealed by sound velocity and density measurements at high pressure

    NASA Astrophysics Data System (ADS)

    Shimoyama, Yuta; Terasaki, Hidenori; Urakawa, Satoru; Takubo, Yusaku; Kuwabara, Soma; Kishimoto, Shunpachi; Watanuki, Tetsu; Machida, Akihiko; Katayama, Yoshinori; Kondo, Tadashi

    2016-11-01

    Carbon is one of the possible light elements in the cores of the terrestrial planets. The P wave velocity (VP) and density (ρ) are important factors for estimating the chemical composition and physical properties of the core. We simultaneously measured the VP and ρ of Fe-3.5 wt % C up to 3.4 GPa and 1850 K by using the ultrasonic pulse-echo and X-ray absorption methods. The VP of liquid Fe-3.5 wt % C decreased linearly with increasing temperature at constant pressure. The addition of carbon decreased the VP of liquid Fe by about 2% at 3 GPa and 1700 K and decreased the Fe density by about 2% at 2 GPa and 1700 K. The bulk modulus of liquid Fe-C and its pressure (P) and temperature (T) effects were precisely determined from directly measured ρ and VP data to be K0,1700 K = 83.9 GPa, dKT/dP = 5.9(2), and dKT/dT = -0.063 GPa/K. The addition of carbon did not affect the isothermal bulk modulus (KT) of liquid Fe, but it decreased the dK/dT of liquid Fe. In the ρ-VP relationship, VP increases linearly with ρ and can be approximated as VP (m/s) = -6786(506) + 1537(71) × ρ (g/cm3), suggesting that Birch's law is valid for liquid Fe-C at the present P-T conditions. Our results imply that at the conditions of the lunar core, the elastic properties of an Fe-C core are more affected by temperature than those of an Fe-S core.
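
    The linear ρ-VP relation reported above can be evaluated directly; the density value in the sketch below is an illustrative placeholder within the measured range, and the fit uncertainties are ignored.

        # Minimal sketch: Birch's-law-type linear relation for liquid Fe-C from the abstract,
        # VP (m/s) = -6786 + 1537 * rho (g/cm^3).
        def vp_fe_c(rho_g_cm3):
            return -6786.0 + 1537.0 * rho_g_cm3

        rho = 7.0  # g/cm^3, illustrative liquid Fe-C density
        print(f"VP ~ {vp_fe_c(rho):.0f} m/s at rho = {rho} g/cm^3")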

  19. Effects of middle ear quasi-static stiffness on sound transmission quantified by a novel 3-axis optical force sensor.

    PubMed

    Dobrev, Ivo; Sim, Jae Hoon; Aqtashi, Baktash; Huber, Alexander M; Linder, Thomas; Röösli, Christof

    2018-01-01

    Intra-operative quantification of ossicle mobility could provide valuable feedback on the current status of the patient's conductive hearing. However, current methods for evaluation of middle ear mobility are mostly limited to the surgeon's subjective impression through manual palpation of the ossicles. This study investigates how the middle ear transfer function is affected by the quasi-static stiffness of the ossicular chain. The middle ear stiffness was probed by a) using a novel fiber-optic 3-axis force sensor to quantify the quasi-static stiffness of the middle ear, and b) artificially reducing stapes mobility by drying of the middle ear. The middle ear transfer function, defined as the ratio of the stapes footplate velocity to the ear canal sound pressure, was measured with a single-point LDV in two conditions. First, a controlled palpation force was applied at the stapes head in two in-plane (superior-inferior or posterior-anterior) directions, and at the incus lenticular process near the incudostapedial joint in the piston (lateral-medial) direction with a novel 3-axis PalpEar force sensor (Sensoptic, Losone, Switzerland), while the corresponding quasi-static displacement of the contact point was measured via a 3-axis micrometer stage. The palpation force was applied sequentially, step-wise in the range of 0.1-20 gF (1-200 mN). Second, measurements were repeated with various stages of stapes fixation, simulated by pre-load on the stapes head or drying of the temporal bone, and with severe ossicle immobilization, simulated by gluing of the stapes footplate. Simulated stapes fixation (forced drying of 5-15 min) severely decreases (20-30 dB) the low frequency (<1 kHz) response of the middle ear, while increasing (5-10 dB) the high frequency (>4 kHz) response. Stapes immobilization (gluing of the footplate) severely reduces (20-40 dB) the low and mid frequency response (<4 kHz) but has a lesser effect (<10 dB) at higher frequencies

  20. Photoacoustic sounds from meteors

    DOE PAGES

    Spalding, Richard; Tencer, John; Sweatt, William; ...

    2017-02-01

    Concurrent sound associated with very bright meteors manifests as popping, hissing, and faint rustling sounds occurring simultaneously with the arrival of light from meteors. Numerous instances have been documented for meteors of –11 to –13 brightness. These sounds cannot be attributed to direct acoustic propagation from the upper atmosphere, for which the travel time would be several minutes. Concurrent sounds must be associated with some form of electromagnetic energy generated by the meteor, propagated to the vicinity of the observer, and transduced into acoustic waves. Previously, energy propagated from meteors was assumed to be RF emissions; this has not been well validated experimentally. Herein we describe experimental results and numerical models in support of photoacoustic coupling as the mechanism. Recent photometric measurements of fireballs reveal strong millisecond flares and significant brightness oscillations at frequencies ≥40 Hz. Strongly modulated light at these frequencies with sufficient intensity can create concurrent sounds through radiative heating of common dielectric materials like hair, clothing, and leaves. This heating produces small pressure oscillations in the air contacting the absorbers. Calculations show that meteors of –12 brightness can generate audible sound at ~25 dB SPL. As a result, the photoacoustic hypothesis provides an alternative explanation for this longstanding mystery about the generation of concurrent sounds by fireballs.

  1. Finite element modelling of human auditory periphery including a feed-forward amplification of the cochlea.

    PubMed

    Wang, Xuelin; Wang, Liling; Zhou, Jianjun; Hu, Yujin

    2014-08-01

    A three-dimensional finite element model is developed for the simulation of the sound transmission through the human auditory periphery consisting of the external ear canal, middle ear and cochlea. The cochlea is modelled as a straight duct divided into two fluid-filled scalae by the basilar membrane (BM) having an orthotropic material property with dimensional variation along its length. In particular, an active feed-forward mechanism is added into the passive cochlear model to represent the activity of the outer hair cells (OHCs). An iterative procedure is proposed for calculating the nonlinear response resulting from the active cochlea in the frequency domain. Results on the middle-ear transfer function, BM steady-state frequency response and intracochlear pressure are derived. A good match of the model predictions with experimental data from the literatures demonstrates the validity of the ear model for simulating sound pressure gain of middle ear, frequency to place map, cochlear sensitivity and compressive output for large intensity input. The current model featuring an active cochlea is able to correlate directly the sound stimulus in the ear canal with the vibration of BM and provides a tool to explore the mechanisms by which sound pressure in the ear canal is converted to a stimulus for the OHCs.

  2. Sound Absorbers

    NASA Astrophysics Data System (ADS)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is employed, for instance, to design the acoustics of rooms. The noise emitted by machinery and plants must be reduced before it arrives at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components on the walls with well-defined absorption characteristics, which are adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise-intensive environments into the neighbourhood.

  3. Intra- and Intersexual swim bladder dimorphisms in the plainfin midshipman fish (Porichthys notatus): Implications of swim bladder proximity to the inner ear for sound pressure detection.

    PubMed

    Mohr, Robert A; Whitchurch, Elizabeth A; Anderson, Ryan D; Forlano, Paul M; Fay, Richard R; Ketten, Darlene R; Cox, Timothy C; Sisneros, Joseph A

    2017-11-01

    The plainfin midshipman fish, Porichthys notatus, is a nocturnal marine teleost that uses social acoustic signals for communication during the breeding season. Nesting type I males produce multiharmonic advertisement calls by contracting their swim bladder sonic muscles to attract females for courtship and spawning, while also attracting cuckolding type II males. Here, we report intra- and intersexual dimorphisms of the swim bladder in a vocal teleost fish and detail the swim bladder dimorphisms in the three sexual phenotypes (females, type I and II males) of plainfin midshipman fish. Micro-computerized tomography revealed that females and type II males have prominent, horn-like rostral swim bladder extensions that project toward the inner ear end organs (saccule, lagena, and utricle). The rostral swim bladder extensions were longer, and the distance between these swim bladder extensions and each inner-ear end organ type was significantly shorter, in both females and type II males compared to type I males. Our results revealed that the normalized swim bladder length of females and type II males was longer than that in type I males, while there was no difference in normalized swim bladder width among the three sexual phenotypes. We predict that these intrasexual and intersexual differences in swim bladder morphology among midshipman sexual phenotypes will afford greater sound pressure sensitivity and higher frequency detection in females and type II males and facilitate the detection and localization of conspecifics in shallow water environments, like those in which midshipman breed and nest. © 2017 Wiley Periodicals, Inc.

  4. Average ambulatory measures of sound pressure level, fundamental frequency, and vocal dose do not differ between adult females with phonotraumatic lesions and matched control subjects

    PubMed Central

    Van Stan, Jarrad H.; Mehta, Daryush D.; Zeitels, Steven M.; Burns, James A.; Barbu, Anca M.; Hillman, Robert E.

    2015-01-01

    Objectives Clinical management of phonotraumatic vocal fold lesions (nodules, polyps) is based largely on assumptions that abnormalities in habitual levels of sound pressure level (SPL), fundamental frequency (f0), and/or amount of voice use play a major role in lesion development and chronic persistence. This study used ambulatory voice monitoring to evaluate if significant differences in voice use exist between patients with phonotraumatic lesions and normal matched controls. Methods Subjects were 70 adult females: 35 with vocal fold nodules or polyps and 35 age-, sex-, and occupation-matched normal individuals. Weeklong summary statistics of voice use were computed from anterior neck surface acceleration recorded using a smartphone-based ambulatory voice monitor. Results Paired t-tests and Kolmogorov-Smirnov tests resulted in no statistically significant differences between patients and matched controls regarding average measures of SPL, f0, vocal dose measures, and voicing/voice rest periods. Paired t-tests comparing f0 variability between the groups resulted in statistically significant differences with moderate effect sizes. Conclusions Individuals with phonotraumatic lesions did not exhibit differences in average ambulatory measures of vocal behavior when compared with matched controls. More refined characterizations of underlying phonatory mechanisms and other potentially contributing causes are warranted to better understand risk factors associated with phonotraumatic lesions. PMID:26024911

  5. a Middle-Ear Reverse Transfer Function Computed from Vibration Measurements of Otoacoustic Emissions on the Ear Drum of the Guinea PIG

    NASA Astrophysics Data System (ADS)

    Dalhoff, Ernst; Turcanu, Diana; Gummer, Anthony W.

    2009-02-01

    Using distortion products measured as vibration of the umbo and as sound pressure in the ear canal of guinea pigs, we calculated the corresponding reverse transfer function. We compare the measurements with a middle-ear model taken from the literature and adapted to the guinea pig. A reasonable fit could be achieved. We conclude that the reverse transfer function will be useful to aid fitting a middle-ear model to measured transfer functions of human subjects.
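
    The reverse-transfer-function estimate described above amounts to dividing, at the distortion-product frequency, the ear-canal sound pressure component by the umbo velocity component; the complex amplitudes in the sketch below are placeholders, not guinea-pig data.

        # Minimal sketch: middle-ear reverse transfer function at a distortion-product frequency.
        import numpy as np

        f1, f2 = 1000.0, 1200.0
        f_dp = 2 * f1 - f2                          # 2*f1 - f2 distortion product, Hz
        p_ec = 2.0e-3 * np.exp(1j * 0.8)            # Pa, ear-canal pressure at f_dp (assumed)
        v_umbo = 4.0e-5 * np.exp(1j * 1.5)          # m/s, umbo velocity at f_dp (assumed)

        H_reverse = p_ec / v_umbo                   # Pa per (m/s)
        print(f"|H_rev| = {abs(H_reverse):.1f} Pa/(m/s), "
              f"phase = {np.angle(H_reverse, deg=True):.1f} deg at {f_dp:.0f} Hz")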

  6. Acoustoelasticity. [sound-structure interaction

    NASA Technical Reports Server (NTRS)

    Dowell, E. H.

    1977-01-01

    Sound or pressure variations inside bounded enclosures are investigated. Mathematical models are given for determining: (1) the interaction between the sound pressure field and the flexible wall of a Helmholtz resonator; (2) coupled fluid-structural motion of an acoustic cavity with a flexible and/or absorbing wall; (3) acoustic natural modes in multiple connected cavities; and (4) the forced response of a cavity with a flexible and/or absorbing wall. Numerical results are discussed.

  7. Evaluating The Relation of Trace Fracture Inclination and Sound Pressure Level and Time-of-flight QUS Parameters Using Computational Simulation

    NASA Astrophysics Data System (ADS)

    Rosa, P. T.; Fontes-Pereira, A. J.; Matusin, D. P.; von Krüger, M. A.; Pereira, W. C. A.

    Bone healing is a complex process that starts after the occurrence of a fracture to restore the bone to its optimal condition. The gold standards for bone status evaluation are dual-energy X-ray absorptiometry and computed tomography. Ultrasound-based technologies have some advantages compared to X-ray technologies: nonionizing radiation, portability and lower cost, among others. Quantitative ultrasound (QUS) has been proposed in the literature as a new tool to follow up the fracture healing process. QUS relates ultrasound propagation to the bone tissue condition (normal or pathological), so a change in wave propagation may indicate a variation in tissue properties. The most used QUS parameters are the time-of-flight (TOF) and sound pressure level (SPL) of the first arriving signal (FAS). In this work, the FAS is the well-known lateral wave. The aim of this work is to evaluate the relation between the TOF and SPL of the FAS and the fracture trace inclination in two stages of bone healing using computational simulations. Four fracture geometries were used: normal and oblique with 30, 45 and 60 degrees. The average TOF values were 63.23 μs, 63.14 μs, 63.03 μs and 62.94 μs for normal, 30, 45 and 60 degrees, respectively, and the average SPL values were -3.83 dB, -4.32 dB, -4.78 dB and -6.19 dB for normal, 30, 45 and 60 degrees, respectively. The results show an inverse pattern between the amplitude and time-of-flight. These values appear to be sensitive to the fracture trace inclination and, in the future, could be used to characterize it.
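
    How the two QUS parameters are read from a received signal can be sketched as follows: the TOF is taken as the first threshold crossing of the first-arriving signal, and its SPL as the level of its peak amplitude relative to the emitted amplitude. The waveform, sampling rate and threshold below are placeholders, not values from the simulations.

        # Minimal sketch: TOF and SPL of the first-arriving signal (FAS) from a synthetic A-scan.
        import numpy as np

        fs = 10e6                                          # sampling rate, Hz (assumed)
        t = np.arange(0, 100e-6, 1 / fs)
        signal = np.zeros_like(t)
        arrival = 63.0e-6                                  # assumed arrival time of the FAS
        mask = t >= arrival
        signal[mask] = (0.6 * np.sin(2 * np.pi * 1e6 * (t[mask] - arrival))
                        * np.exp(-(t[mask] - arrival) / 5e-6))

        threshold = 0.1 * np.max(np.abs(signal))
        tof = t[np.argmax(np.abs(signal) > threshold)]     # first threshold crossing
        spl = 20 * np.log10(np.max(np.abs(signal)) / 1.0)  # dB re unit emitted amplitude
        print(f"TOF = {tof * 1e6:.2f} us, SPL = {spl:.2f} dB")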

  8. Abdominal sounds

    MedlinePlus

    ... intestines, or strangulation of the bowel and death ( necrosis ) of the bowel tissue. Very high-pitched bowel ... missing bowel sounds may be caused by: Blocked blood vessels prevent the intestines from getting proper blood flow. ...

  9. Constraints on the Properties of the Moon's Outer Core from High-Pressure Sound Velocity Measurements on Fe-S Liquids

    NASA Astrophysics Data System (ADS)

    Jing, Z.; Wang, Y.; Kono, Y.; Yu, T.; Sakamaki, T.; Park, C.; Rivers, M. L.; Sutton, S. R.; Shen, G.

    2013-12-01

    Geophysical observations based on lunar seismology and laser ranging strongly suggest that the Moon's iron core is partially molten. Similar to Earth and other terrestrial planets, light elements, such as sulfur, silicon, carbon, and oxygen, are likely present in the lunar core. Determining the light element concentration in the outer core is of vital importance to the understanding of the structure, dynamics, and chemical evolution of the Moon, as well as the enigmatic history of the lunar dynamo. Among the candidate elements, sulfur is the preferred major light element in the lunar outer core due to its high abundance in the parent bodies of iron meteorites, its high solubility in liquid Fe at the lunar core pressure (~5 GPa), and its strong effects on reducing the density, velocity, and freezing temperature of the core. In this study, we conducted in-situ sound velocity measurements on liquid samples of four different compositions, including pure Fe, Fe-10wt%S, Fe-20wt%S, and Fe-27wt%S, at pressure and temperature conditions up to 8 GPa and 1973 K (encompassing the entire lunar depth range), using the Kawai-type multi-anvil device at the GSECARS beamline 13-ID-D and the Paris-Edinburgh cell at HPCAT beamline 16-BM-B. Our results show that the velocity of Fe-rich liquids increases upon compression, decreases with increasing sulfur content, and is nearly independent of temperature. Compared to the seismic velocity of the outer core, our velocity data constrain the sulfur content to 4 ± 2 wt%, indicating a significantly denser (6.4 ± 0.4 g/cm3) and hotter (1860 ± 60 K) outer core than previously estimated. A new lunar structure model incorporating available geophysical observations points to a smaller core radius. Our model also suggests a top-down solidification scenario for the evolution of the lunar core. Such an 'iron snow' process may have been an important mechanism for the growth of the inner core.

  10. Atmospheric sound propagation

    NASA Technical Reports Server (NTRS)

    Cook, R. K.

    1969-01-01

    The propagation of sound waves at infrasonic frequencies (oscillation periods 1.0 - 1000 seconds) in the atmosphere is being studied by a network of seven stations separated geographically by distances of the order of thousands of kilometers. The stations measure the following characteristics of infrasonic waves: (1) the amplitude and waveform of the incident sound pressure, (2) the direction of propagation of the wave, (3) the horizontal phase velocity, and (4) the distribution of sound wave energy at various frequencies of oscillation. Some infrasonic sources which were identified and studied include the aurora borealis, tornadoes, volcanos, gravity waves on the oceans, earthquakes, and atmospheric instability waves caused by winds at the tropopause. Waves of unknown origin seem to radiate from several geographical locations, including one in the Argentine.

  11. Sound Guard

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Lubrication technology originally developed for a series of NASA satellites has produced a commercial product for protecting the sound fidelity of phonograph records. Called Sound Guard, the preservative is a spray-on fluid that deposits a microscopically thin protective coating which reduces friction and prevents the hard diamond stylus from wearing away the softer vinyl material of the disc. It is marketed by the Consumer Products Division of Ball Corporation, Muncie, Indiana. The lubricant technology on which Sound Guard is based originated with NASA's Orbiting Solar Observatory (OSO), an Earth-orbiting satellite designed and built by Ball Brothers Research Corporation, Boulder, Colorado, also a division of Ball Corporation. Ball Brothers engineers found a problem early in the OSO program: known lubricants were unsuitable for use on satellite moving parts that would be exposed to the vacuum of space for several months. So the company conducted research on the properties of materials needed for long life in space and developed new lubricants. They worked successfully on seven OSO flights and attracted considerable attention among other aerospace contractors. Ball Brothers now supplies its "Vac Kote" lubricants and coatings to both aerospace and non-aerospace industries and the company has produced several hundred variations of the original technology. Ball Corporation expanded its product line to include consumer products, of which Sound Guard is one of the most recent. In addition to protecting record grooves, Sound Guard's anti-static quality also retards particle accumulation on the stylus. During comparison study by a leading U.S. electronic laboratory, a record not treated by Sound Guard had to be cleaned after 50 plays and the stylus had collected a considerable number of small vinyl particles. The Sound Guard-treated disc was still clean after 100 plays, as was its stylus.

  12. Simultaneous Measurements of Ossicular Velocity and Intracochlear Pressure Leading to the Cochlear Input Impedance in Gerbil

    PubMed Central

    Decraemer, W. F.; Khanna, S. M.; Olson, E. S.

    2008-01-01

    Recent measurements of three-dimensional stapes motion in gerbil indicated that the piston component of stapes motion was the primary contributor to intracochlear pressure. In order to make a detailed correlation between stapes piston motion and intracochlear pressure behind the stapes, simultaneous pressure and motion measurements were undertaken. We found that the scala vestibuli pressure followed the piston component of the stapes velocity with high fidelity, reinforcing our previous finding that the piston motion of the stapes was the main stimulus to the cochlea. The present data allowed us to calculate cochlear input impedance and power flow into the cochlea. Both the amplitude and phase of the impedance were quite flat with frequency from 3 kHz to at least 30 kHz, with a phase that was primarily resistive. With constant stimulus pressure in the ear canal the intracochlear pressure at the stapes has been previously shown to be approximately flat with frequency through a wide range, and coupling that result with the present findings indicates that the power that flows into the cochlea is quite flat from about 3 to 30 kHz. The observed wide-band intracochlear pressure and power flow are consistent with the wide-band audiogram of the gerbil. PMID:18459001
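
    A minimal sketch of the two derived quantities named above: the cochlear input impedance as the ratio of scala-vestibuli pressure to stapes volume velocity, and the time-averaged power flowing into the cochlea. The complex amplitudes and the footplate area below are invented placeholders, not values from the measurements.

    ```python
    import numpy as np

    # Invented single-frequency example values (peak complex amplitudes).
    p_sv = 1.0 * np.exp(1j * 0.3)         # scala vestibuli pressure, Pa
    v_piston = 2.0e-5 * np.exp(1j * 0.1)  # stapes piston velocity, m/s
    area_fp = 0.6e-6                      # assumed stapes footplate area, m^2

    u_stapes = v_piston * area_fp         # stapes volume velocity, m^3/s

    # Cochlear input impedance: pressure behind the stapes / stapes volume velocity.
    z_c = p_sv / u_stapes                 # Pa*s/m^3 (acoustic ohms)

    # Time-averaged power delivered to the cochlea (peak amplitudes assumed).
    power = 0.5 * np.real(p_sv * np.conj(u_stapes))

    print(f"|Zc| = {abs(z_c):.3e} Pa*s/m^3, phase = {np.angle(z_c):.2f} rad")
    print(f"Power into cochlea = {power:.3e} W")
    ```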

  13. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    PubMed

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from

  14. Sound Solutions

    ERIC Educational Resources Information Center

    Starkman, Neal

    2007-01-01

    Poor classroom acoustics are impairing students' hearing and their ability to learn. However, technology has come up with a solution: tools that focus voices in a way that minimizes intrusive ambient noise and gets to the intended receiver--not merely amplifying the sound, but also clarifying and directing it. One provider of classroom audio…

  15. 49 CFR 227.5 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... doubling of the allowable exposure time to maintain the same noise dose. For purposes of this part, the... device or material, which is capable of being worn on the head, covering the ear canal or inserted in the ear canal; is designed wholly or in part to reduce the level of sound entering the ear; and has a...

  16. 49 CFR 227.5 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... doubling of the allowable exposure time to maintain the same noise dose. For purposes of this part, the... device or material, which is capable of being worn on the head, covering the ear canal or inserted in the ear canal; is designed wholly or in part to reduce the level of sound entering the ear; and has a...

  17. 49 CFR 227.5 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... doubling of the allowable exposure time to maintain the same noise dose. For purposes of this part, the... device or material, which is capable of being worn on the head, covering the ear canal or inserted in the ear canal; is designed wholly or in part to reduce the level of sound entering the ear; and has a...

  18. Chinchilla middle ear transmission matrix model and middle-ear flexibility

    PubMed Central

    Ravicz, Michael E.; Rosowski, John J.

    2017-01-01

    The function of the middle ear (ME) in transforming ME acoustic inputs and outputs (sound pressures and volume velocities) can be described with an acoustic two-port transmission matrix. This description is independent of the load on the ME (cochlea or ear canal) and holds in either direction: forward (from ear canal to cochlea) or reverse (from cochlea to ear canal). A transmission matrix describing ME function in chinchilla, an animal commonly used in auditory research, is presented, computed from measurements of forward ME function: input admittance Y_TM, ME pressure gain G_MEP, ME velocity transfer function H_V, and cochlear input admittance Y_C, in the same set of ears [Ravicz and Rosowski (2012b). J. Acoust. Soc. Am. 132, 2437–2454; (2013a). J. Acoust. Soc. Am. 133, 2208–2223; (2013b). J. Acoust. Soc. Am. 134, 2852–2865]. Unlike previous estimates, these computations require no assumptions about the state of the inner ear, effectiveness of ME manipulations, or measurements of sound transmission in the reverse direction. These element values are generally consistent with physical constraints and the anatomical ME “transformer ratio.” Differences from a previous estimate in chinchilla [Songer and Rosowski (2007). J. Acoust. Soc. Am. 122, 932–942] may be due to a difference in ME flexibility between the two subject groups. PMID:28599566
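
    The forward use of such a two-port description can be shown in a few lines: given matrix elements and a cochlear load admittance, the middle-ear pressure gain and the admittance at the tympanic membrane follow directly. The element values and load below are arbitrary placeholders chosen only to be roughly reciprocal (A*D - B*C near 1), not the published chinchilla estimates.

    ```python
    import numpy as np

    # Placeholder two-port elements at a single frequency (not the published values).
    A = 0.033 + 0.0j          # dimensionless
    B = 1.0e6 + 0.0j          # Pa*s/m^3
    C = 1.0e-13 + 0.0j        # m^3/(s*Pa)
    D = 30.3 + 0.0j           # dimensionless; A*D - B*C is close to 1 (reciprocity)

    Y_C = 5.0e-12 + 2.0e-12j  # assumed cochlear input admittance, m^3/(s*Pa)

    # Forward relations implied by [P_TM, U_TM]^T = [[A, B], [C, D]] [P_C, U_S]^T
    # with the cochlear load U_S = Y_C * P_C:
    G_MEP = 1.0 / (A + B * Y_C)       # middle-ear pressure gain P_C / P_TM
    Y_TM = (C + D * Y_C) * G_MEP      # input admittance at the tympanic membrane

    print(f"|G_MEP| = {abs(G_MEP):.1f} ({20 * np.log10(abs(G_MEP)):.1f} dB)")
    print(f"Y_TM = {Y_TM.real:.3e} + {Y_TM.imag:.3e}j m^3/(s*Pa)")
    ```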

  19. Chinchilla middle ear transmission matrix model and middle-ear flexibility.

    PubMed

    Ravicz, Michael E; Rosowski, John J

    2017-05-01

    The function of the middle ear (ME) in transforming ME acoustic inputs and outputs (sound pressures and volume velocities) can be described with an acoustic two-port transmission matrix. This description is independent of the load on the ME (cochlea or ear canal) and holds in either direction: forward (from ear canal to cochlea) or reverse (from cochlea to ear canal). A transmission matrix describing ME function in chinchilla, an animal commonly used in auditory research, is presented, computed from measurements of forward ME function: input admittance Y_TM, ME pressure gain G_MEP, ME velocity transfer function H_V, and cochlear input admittance Y_C, in the same set of ears [Ravicz and Rosowski (2012b). J. Acoust. Soc. Am. 132, 2437-2454; (2013a). J. Acoust. Soc. Am. 133, 2208-2223; (2013b). J. Acoust. Soc. Am. 134, 2852-2865]. Unlike previous estimates, these computations require no assumptions about the state of the inner ear, effectiveness of ME manipulations, or measurements of sound transmission in the reverse direction. These element values are generally consistent with physical constraints and the anatomical ME "transformer ratio." Differences from a previous estimate in chinchilla [Songer and Rosowski (2007). J. Acoust. Soc. Am. 122, 932-942] may be due to a difference in ME flexibility between the two subject groups.

  20. Transfer function for vital infrasound pressures between the carotid artery and the tympanic membrane.

    PubMed

    Furihata, Kenji; Yamashita, Masato

    2013-02-01

    While occupational injury is associated with numerous individual and work-related risk factors, including long working hours and short sleep duration, the complex mechanisms causing such injuries are not yet fully understood. The relationship between the infrasound pressures of the tympanic membrane [ear canal pressure (ECP)], detected using an earplug embedded with a low-frequency microphone, and the carotid artery [carotid artery pressure (CAP)], detected using a stethoscope fitted with the same microphone, can be quantitatively characterized using systems analysis. The transfer functions of 40 normal workers (19 to 57 years old) were characterized, involving the analysis of 446 data points. The ECP waveform exhibits a pulsatile character with a slow respiratory component, which is superimposed on a biphasic recording that is synchronous with the cardiac cycle. The respiratory ECP waveform correlates with the instantaneous heart rate. The results also revealed that various fatigue-related risk factors may affect the mean magnitudes of the measured pressures and the delay transfer functions between CAP and ECP in the study population; these factors include systolic blood pressure, salivary amylase activity, age, sleep duration, postural changes, chronic fatigue, and pulse rate.
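
    A standard way to characterize such an input-output relationship from simultaneous recordings is the cross-spectral transfer-function estimate H(f) = S_xy(f) / S_xx(f). The sketch below applies it to synthetic stand-in signals; the sampling rate, the sinusoid-plus-noise signal model, and the segment length are assumptions of the sketch, not details taken from the study.

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    fs = 100.0                      # Hz, assumed sampling rate for infrasound
    t = np.arange(0, 300, 1 / fs)   # 5 minutes of synthetic data

    # Synthetic stand-ins: a ~1.2 Hz "cardiac" component plus noise.
    cap = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.random.randn(t.size)
    ecp = 0.6 * np.sin(2 * np.pi * 1.2 * t - 0.8) + 0.2 * np.random.randn(t.size)

    nseg = 4096
    f, s_xx = welch(cap, fs=fs, nperseg=nseg)
    _, s_xy = csd(cap, ecp, fs=fs, nperseg=nseg)

    h = s_xy / s_xx                  # transfer-function estimate, CAP -> ECP
    k = np.argmin(np.abs(f - 1.2))   # frequency bin nearest the cardiac component
    print(f"|H| at {f[k]:.2f} Hz: {abs(h[k]):.2f}, phase: {np.angle(h[k]):.2f} rad")
    ```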

  1. Active localization of virtual sounds

    NASA Technical Reports Server (NTRS)

    Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.

    1991-01-01

    We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
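
    Two of the static cues described above are easy to compute. The interaural time difference below uses the common spherical-head (Woodworth) approximation, an assumption of this sketch rather than the display's documented method, and the distance cue follows the inverse first-power law for sound pressure.

    ```python
    import numpy as np

    C = 343.0             # speed of sound, m/s
    HEAD_RADIUS = 0.0875  # m, assumed spherical-head radius

    def itd_woodworth(azimuth_deg):
        """Interaural time difference for a distant source (Woodworth approximation)."""
        az = np.deg2rad(azimuth_deg)
        return HEAD_RADIUS / C * (az + np.sin(az))

    def level_drop_db(distance_m, ref_m=1.0):
        """Level change from the inverse first-power law for sound pressure."""
        return -20.0 * np.log10(distance_m / ref_m)

    for az in (0, 30, 60, 90):
        print(f"azimuth {az:2d} deg -> ITD {itd_woodworth(az) * 1e6:5.0f} us")
    print(f"level at 4 m relative to 1 m: {level_drop_db(4.0):.1f} dB")
    ```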

  2. Continuous long-term measurements of the middle ear pressure in subjects without a history of ear disease.

    PubMed

    Tideholm, B; Carlborg, B; Jönsson, S; Bylander-Groth, A

    1998-06-01

    A new method was used for continuous measurement of the middle ear (ME) pressure during a 24-h period. In 10 subjects without a history of ear disease a small perforation was made through the tympanic membrane. A tight rubber stopper containing a small polyethylene tube was fitted into the external ear canal. Conventional tubal function tests were performed. The equipment was then carried by the subjects for 24 h of normal activity to monitor any slow or rapid dynamic pressure change in the ME. Body position was found to be the most important factor affecting ME pressure variation, during the 24-h continuous pressure measurements. A significant pressure rise occurred in the recumbent position in all but one subject. Few rapid pressure equilibrations were seen during the recordings, indicating few tubal openings. This implies that the pressure changes in the ME seen in this study were mainly the result of gas exchange over the mucosa. The investigation might be a base for reference when investigating different kinds of pathologic conditions in the ear.

  3. Influence of vortex core on wake vortex sound emission

    DOT National Transportation Integrated Search

    2006-05-08

    A consistent and persistent mechanism of sound emission from aircraft wake vortices has been identified. Both measurement data and theoretical results show that a dominant frequency of sound pressure matches the rotation frequency of a Kirchhoff vortex...

  4. Study of Noise-Certification Standards for Aircraft Engines. Volume 2. Procedures for Measuring Far Field Sound Pressure Levels around an Outdoor Jet-Engine Test Stand.

    DTIC Science & Technology

    1983-06-01

    ... separate exhaust nozzles for discharge of fan and turbine exhaust flows (e.g., JT15D, TFE731, ALF-502, CF34, JT3D, CFM56, RB.211, CF6, JT9D, and PW2037) ... minimum radial distance from the effective source of sound at 40 Hz should then be approximately 69 m. At 60 Hz, the minimum radial distance should be

  5. Bubble dynamics in a standing sound field: the bubble habitat.

    PubMed

    Koch, P; Kurz, T; Parlitz, U; Lauterborn, W

    2011-11-01

    Bubble dynamics is investigated numerically with special emphasis on the static pressure and the positional stability of the bubble in a standing sound field. The bubble habitat, made up of not dissolving, positionally and spherically stable bubbles, is calculated in the parameter space of the bubble radius at rest and sound pressure amplitude for different sound field frequencies, static pressures, and gas concentrations of the liquid. The bubble habitat grows with static pressure and shrinks with sound field frequency. The range of diffusionally stable bubble oscillations, found at positive slopes of the habitat-diffusion border, can be increased substantially with static pressure.

  6. Method of sound synthesis

    DOEpatents

    Miner, Nadine E.; Caudell, Thomas P.

    2004-06-08

    A sound synthesis method for modeling and synthesizing dynamic, parameterized sounds. The sound synthesis method yields perceptually convincing sounds and provides flexibility through model parameterization. By manipulating model parameters, a variety of related, but perceptually different sounds can be generated. The result is subtle changes in sounds, in addition to synthesis of a variety of sounds, all from a small set of models. The sound models can change dynamically according to changes in the simulation environment. The method is applicable to both stochastic (impulse-based) and non-stochastic (pitched) sounds.

  7. 33 CFR 67.10-40 - Sound signals authorized for use prior to January 1, 1973.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., and 67.10-10, if the sound signal has a minimum sound pressure level as specified in Table A of... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals authorized for use... STRUCTURES General Requirements for Sound signals § 67.10-40 Sound signals authorized for use prior to...

  8. [Respiratory sounds].

    PubMed

    Marquet, P

    1995-01-01

    After having invented the stethoscope, Laennec published his treatise on auscultation in 1819, describing the acoustic events generated by ventilation and linking them with anatomopathological findings. The weak points of his semiology lay in its subjective and interpretative character, expressed by an imprecise and picturesque nomenclature. Technical studies of breath sounds began in the middle of the twentieth century, and this enabled the American Thoracic Society to elaborate a new classification of adventitious noises based on a few physical characteristics. This terminology replaced that of Laennec or his translators (except in France). The waveforms of the different normal and adventitious noises have been well described. However, only the study of the time evolution of their tone (frequency-amplitude-time relationship) will enable a complete analysis of these phenomena. This approach has been undertaken by a few teams but much remains to be done, in particular in relation to discontinuous noises (crackles). Technology development raises hope for the design, in the near future, of automatic processes for respiratory noise detection and classification. Systematic research into the production mechanisms and sites of these noises has progressed equally. It should, in time, reinforce their semiological value and give auscultation, whether performed with the stethoscope or instrumentally, an increased diagnostic power and the status of a respiratory function test.

  9. Wearable Sensing of In-Ear Pressure for Heart Rate Monitoring with a Piezoelectric Sensor

    PubMed Central

    Park, Jang-Ho; Jang, Dae-Geun; Park, Jung Wook; Youm, Se-Kyoung

    2015-01-01

    In this study, we developed a novel heart rate (HR) monitoring approach in which we measure the pressure variance of the surface of the ear canal. A scissor-shaped apparatus equipped with a piezoelectric film sensor and a hardware circuit module was designed for high wearability and to obtain stable measurement. In the proposed device, the film sensor converts in-ear pulse waves (EPW) into electrical current, and the circuit module enhances the EPW and suppresses noise. A real-time algorithm embedded in the circuit module performs morphological conversions to make the EPW more distinct and knowledge-based rules are used to detect EPW peaks. In a clinical experiment conducted using a reference electrocardiogram (ECG) device, EPW and ECG were concurrently recorded from 58 healthy subjects. The EPW intervals between successive peaks and their corresponding ECG intervals were then compared to each other. Promising results were obtained from the samples, specifically, a sensitivity of 97.25%, positive predictive value of 97.17%, and mean absolute difference of 0.62. Thus, highly accurate HR was obtained from in-ear pressure variance. Consequently, we believe that our proposed approach could be used to monitor vital signs and also utilized in diverse applications in the near future. PMID:26389912
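
    The peak-picking step can be illustrated with a generic detector: find pulse peaks in a synthetic in-ear pressure waveform and convert the inter-beat intervals to heart rate. The waveform model, refractory distance, and prominence threshold below are illustrative assumptions, not the embedded algorithm's actual morphological rules.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    fs = 250.0                          # Hz, assumed sampling rate
    t = np.arange(0, 30, 1 / fs)        # 30 s of synthetic in-ear pulse wave
    hr_true = 72.0                      # beats per minute
    epw = np.sin(2 * np.pi * hr_true / 60 * t) ** 3 + 0.05 * np.random.randn(t.size)

    # Enforce a physiological refractory period (~240 bpm maximum) and a minimum
    # prominence so that small noise bumps are not counted as beats.
    peaks, _ = find_peaks(epw, distance=int(0.25 * fs), prominence=0.3)

    intervals = np.diff(peaks) / fs     # inter-beat intervals, s
    hr_est = 60.0 / intervals.mean()
    print(f"Detected {peaks.size} beats, estimated HR = {hr_est:.1f} bpm")
    ```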

  10. Corrigendum to "A semi-empirical airfoil stall noise model based on surface pressure measurements" [J. Sound Vib. 387 (2017) 127-162]

    NASA Astrophysics Data System (ADS)

    Bertagnolio, Franck; Madsen, Helge Aa.; Fischer, Andreas; Bak, Christian

    2018-06-01

    In the above-mentioned paper, two model formulae were tuned to fit experimental data of surface pressure spectra measured in various wind tunnels. They correspond to high and low Reynolds number flow scalings, respectively. It turns out that there exist typographical errors in both formulae numbered (9) and (10) in the original paper. There, these formulae read:

  11. Influence of different boundary conditions at the tympanic annulus on finite element models of the human middle ear

    NASA Astrophysics Data System (ADS)

    Lobato, Lucas; Paul, Stephan; Cordioli, Júlio

    2018-05-01

    The tympanic annulus is a fibrocartilage ligament that supports the tympanic membrane in a sulcus at the end of the outer ear canal. Among the many FE models of the middle ear found in the literature, no study of the effect of different boundary conditions at the tympanic annulus on middle-ear mechanics was found. To investigate the influence of different representations of this detail in FE models, three different ways of connecting the tympanic annulus to the outer ear canal were modelled in a reduced middle-ear system. This reduced system includes the tympanic membrane, tympanic annulus, manubrium, malleus and anterior ligament of the malleus. The numerical frequency response function H_umbo (umbo velocity vs sound pressure at the tympanic membrane) was analyzed for the different boundary conditions and compared to numerical and experimental data from the literature. A numerical modal analysis was also performed to complement the analysis. It was found that the boundary conditions used to represent the connection between the tympanic annulus and the outer ear canal can change the global stiffness of the system and its natural frequencies, as well as the modal shapes of higher-order modes.
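
    The qualitative finding that a stiffer annulus connection raises the natural frequencies can be illustrated with a single-degree-of-freedom analogy, f_n = sqrt(k/m) / (2*pi). The lumped mass and the three boundary stiffnesses below are arbitrary stand-ins and are not taken from the FE model.

    ```python
    import numpy as np

    m = 2.0e-6           # kg, arbitrary lumped mass standing in for the TM/umbo
    k_membrane = 200.0   # N/m, arbitrary membrane stiffness

    # Three boundary conditions of increasing stiffness at the annulus
    # (illustrative analogues of the three modelled connections).
    for name, k_boundary in [("soft support", 50.0),
                             ("intermediate", 500.0),
                             ("rigid clamp", 5000.0)]:
        k_total = k_membrane + k_boundary
        f_n = np.sqrt(k_total / m) / (2 * np.pi)
        print(f"{name:13s}: k = {k_total:6.0f} N/m -> f_n = {f_n / 1000:5.2f} kHz")
    ```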

  12. Inferring the high-pressure strength of copper by measurement of longitudinal sound speed in a symmetric impact and release experiment

    NASA Astrophysics Data System (ADS)

    Rothman, Stephen; Edwards, Rhys; Vogler, Tracy J.; Furnish, M. D.

    2012-03-01

    Velocity-time histories of free- or windowed-surfaces have been used to calculate wave speeds and hence deduce the shear moduli for materials at high pressure. This is important to high velocity impact phenomena, e.g. shaped-charge jets, long rod penetrators, and other projectile/armour interactions. Historically the shock overtake method has required several experiments with different depths of material to account for the effect of the surface on the arrival time of the release. A characteristics method, previously used for analysis of isentropic compression experiments, has been modified to account for the effect of the surface interactions, thus only one depth of material is required. This analysis has been applied to symmetric copper impacts performed at Sandia National Laboratory's Star Facility. A shear modulus of 200GPa, at a pressure of ~180GPa, has been estimated. These results are in broad agreement with previous work by Hayes et al.

  13. Inferring the High-Pressure Strength of Copper by Measurement of Longitudinal Sound Speed in a Symmetric Impact and Release Experiment

    NASA Astrophysics Data System (ADS)

    Rothman, Stephen; Edwards, Rhys; Vogler, Tracy; Furnish, Mike

    2011-06-01

    Velocity-time histories of free- or windowed-surfaces have been used to calculate wave speeds and hence deduce the shear modulus for materials at high pressure. This is important to high velocity impact phenomena, e.g. shaped-charge jets, long rod penetrators, and other projectile/armour interactions. Historically the shock overtake method has required several experiments with different depths of material to account for the effect of the surface on the arrival time of the release. A characteristics method, previously used for analysis of isentropic compression experiments, has been modified to account for the effect of the surface interactions, thus only one depth of material is required. This analysis has been applied to symmetric copper impacts performed at Sandia National Laboratory's Star Facility. A shear modulus of 200 GPa, at a pressure of ~180 GPa, has been estimated. These results are in broad agreement with previous work by Hayes et al.

  14. Sound localization in the alligator.

    PubMed

    Bierman, Hilary S; Carr, Catherine E

    2015-11-01

    In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. The Sound of Science

    ERIC Educational Resources Information Center

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  16. Sounds Exaggerate Visual Shape

    ERIC Educational Resources Information Center

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  17. Infra-sound cancellation and mitigation in wind turbines

    NASA Astrophysics Data System (ADS)

    Boretti, Albert; Ordys, Andrew; Al Zubaidy, Sarim

    2018-03-01

    The infra-sound spectra recorded inside homes located even several kilometres away from wind turbine installations are characterized by large pressure fluctuations in the low frequency range. There is a significant body of literature suggesting that inaudible sounds at low frequency are sensed by humans and affect wellbeing through different mechanisms. These mechanisms include amplitude modulation of heard sounds, stimulation of subconscious pathways, endolymphatic hydrops, and possibly potentiation of noise-induced hearing loss. We suggest the study of active infra-sound cancellation and mitigation to address the low frequency noise issues. Loudspeakers generate pressure wave components of the same amplitude and frequency as, but opposite phase to, the recorded infra-sound. They also produce pressure wave components within the audible range that reduce the perception of the infra-sound, minimizing the sensing of the residual infra-sound.
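
    At its core, the proposed cancellation amounts to adding a component with the same amplitude and frequency but opposite phase. A toy sketch with an assumed single-frequency infrasound tone and a perfectly matched loudspeaker signal:

    ```python
    import numpy as np

    fs = 200.0                          # Hz, assumed sampling rate
    t = np.arange(0, 10, 1 / fs)

    f_inf, amp, phase = 0.8, 0.1, 0.3   # assumed infrasound tone: Hz, Pa, rad
    p_inf = amp * np.sin(2 * np.pi * f_inf * t + phase)

    # Loudspeaker output: same amplitude and frequency, opposite phase.
    p_anti = amp * np.sin(2 * np.pi * f_inf * t + phase + np.pi)

    residual = p_inf + p_anti
    print(f"RMS before: {np.sqrt(np.mean(p_inf ** 2)):.4f} Pa, "
          f"after: {np.sqrt(np.mean(residual ** 2)):.2e} Pa")
    ```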

  18. Meteorological effects on long-range outdoor sound propagation

    NASA Technical Reports Server (NTRS)

    Klug, Helmut

    1990-01-01

    Measurements of sound propagation over distances up to 1000 m were carried out with an impulse sound source offering reproducible, short time signals. Temperature and wind speed at several heights were monitored simultaneously; the meteorological data are used to determine the sound speed gradients according to the Monin-Obukhov similarity theory. The sound speed profile is compared to a corresponding prediction, gained through the measured travel time difference between direct and ground reflected pulse (which depends on the sound speed gradient). Positive sound speed gradients cause bending of the sound rays towards the ground yielding enhanced sound pressure levels. The measured meteorological effects on sound propagation are discussed and illustrated by ray tracing methods.
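
    A rough sketch of the effective-sound-speed idea: combine a temperature-dependent sound speed with a logarithmic wind profile (a simplified neutral-atmosphere stand-in for the Monin-Obukhov profiles used in the study). A positive effective gradient in the downwind direction implies downward refraction and enhanced levels; the surface parameters below are assumptions.

    ```python
    import numpy as np

    KAPPA = 0.4    # von Karman constant
    Z0 = 0.05      # m, assumed roughness length
    U_STAR = 0.3   # m/s, assumed friction velocity
    T0 = 288.0     # K, assumed surface air temperature

    def sound_speed(T_kelvin):
        return 20.05 * np.sqrt(T_kelvin)        # m/s, ideal-gas approximation

    def effective_speed(z, downwind=True):
        """Effective sound speed: thermodynamic speed plus the wind component."""
        u = U_STAR / KAPPA * np.log(z / Z0)     # neutral logarithmic wind profile
        T = T0 - 0.0065 * z                     # assumed mild temperature lapse
        return sound_speed(T) + (u if downwind else -u)

    heights = np.array([1.0, 2.0, 5.0, 10.0])
    c_eff = effective_speed(heights, downwind=True)
    grad = np.gradient(c_eff, heights)
    for z, c, g in zip(heights, c_eff, grad):
        print(f"z = {z:4.1f} m: c_eff = {c:6.2f} m/s, dc/dz = {g:+.3f} 1/s")
    ```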

  19. Selective attention reduces physiological noise in the external ear canals of humans. II: Visual attention

    PubMed Central

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070

  20. Measurements of acoustic impedance at the input to the occluded ear canal.

    PubMed

    Larson, V D; Nelson, J A; Cooper, W A; Egolf, D P

    1993-01-01

    Multi-frequency (multi-component) acoustic impedance measurements may evolve into a sensitive technique for the remote detection of aural pathologies. Such data are also relevant to models used in hearing aid design and could be an asset to the hearing aid prescription and fitting process. This report describes the development and use of a broad-band procedure which acquires impedance data in 20 Hz intervals and describes a comparison of data collected at two sites by different investigators. Mean data were in excellent agreement, and an explanation for a single case of extreme normal variability is presented.

  1. Preventing Continuous Positive Airway Pressure Failure: Evidence-Based and Physiologically Sound Practices from Delivery Room to the Neonatal Intensive Care Unit.

    PubMed

    Wright, Clyde J; Sherlock, Laurie G; Sahni, Rakesh; Polin, Richard A

    2018-06-01

    Routine use of continuous positive airway pressure (CPAP) to support preterm infants with respiratory distress is an evidenced-based strategy to decrease incidence of bronchopulmonary dysplasia. However, rates of CPAP failure remain unacceptably high in very premature neonates, who are at high risk for developing bronchopulmonary dysplasia. Using the GRADE framework to assess the quality of available evidence, this article reviews strategies aimed at decreasing CPAP failure, starting with delivery room interventions and followed through to system-based efforts in the neonatal intensive care unit. Despite best efforts, some very premature neonates fail CPAP. Also reviewed are predictors of CPAP failure in this vulnerable population. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Types of Hearing Aids

    MedlinePlus

    ... hearing impairment. Most hearing aids share several similar electronic components, including a microphone that picks up sound; ... the ear canal; and batteries that power the electronic parts. Hearing aids differ by: design technology used ...

  3. Genetics Home Reference: Treacher Collins syndrome

    MedlinePlus

    ... defects of the three small bones in the middle ear, which transmit sound, or by underdevelopment of the ear canal. People with Treacher Collins syndrome usually have normal intelligence. Related Information What does ...

  4. Early sound symbolism for vowel sounds.

    PubMed

    Spector, Ferrinne; Maurer, Daphne

    2013-01-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound-shape mapping. In this study, we investigated the influence of vowels on sound-shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded-jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  5. Sound wave transmission (image)

    MedlinePlus

    When sounds waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain where they are interpreted by the brain as sound. The hearing mechanisms within the inner ear, can ...

  6. Possibilities of psychoacoustics to determine sound quality

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

    For some years, acoustic engineers have become increasingly aware of the importance of analyzing and minimizing noise problems not only with regard to the A-weighted sound pressure level, but also with regard to designing sound quality. It is relatively easy to determine the A-weighted SPL according to international standards. However, the objective evaluation to describe subjectively perceived sound quality - taking into account psychoacoustic parameters such as loudness, roughness, fluctuation strength, sharpness and so forth - is more difficult. On the one hand, the psychoacoustic measurement procedures known so far have not yet been standardized. On the other hand, they have only been tested in laboratories, by means of listening tests in the free field with a single sound source and simple signals. Therefore, the results achieved cannot be transferred to complex sound situations with several spatially distributed sound sources without difficulty. Owing to the directionality and selectivity of human hearing, individual sound events can be selected from among many. Already in the late seventies a new binaural Artificial Head Measurement System was developed which met the requirements of the automobile industry in terms of measurement technology. The first industrial application of the Artificial Head Measurement System was in 1981. Since that time the system has been further developed, particularly through the cooperation between HEAD acoustics and Mercedes-Benz. In addition to a calibratable Artificial Head Measurement System which is compatible with standard measurement technologies and has transfer characteristics comparable to human hearing, a Binaural Analysis System is now also available. This system permits the analysis of binaural signals regarding physical and psychoacoustic procedures. Moreover, the signals to be analyzed can be simultaneously monitored through headphones and manipulated in the time and frequency domain so that those signal components being responsible for noise

  7. Priming Gestures with Sounds

    PubMed Central

    Lemaitre, Guillaume; Heller, Laurie M.; Navolio, Nicole; Zúñiga-Peñaranda, Nicolas

    2015-01-01

    We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary associations between gestures and sounds. Second, the priming effect is greatly reduced for ecologically existing associations and is eliminated for arbitrary associations when the response gesture stops producing the associated sounds. These results provide evidence that auditory-motor priming is mainly created by rapid learning of the association between sounds and the gestures that produce them. Auditory-motor priming is therefore mediated by short-term associations between gestures and sounds that can be readily reconfigured regardless of prior knowledge. PMID:26544884

  8. Suppression of sound radiation to far field of near-field acoustic communication system using evanescent sound field

    NASA Astrophysics Data System (ADS)

    Fujii, Ayaka; Wakatsuki, Naoto; Mizutani, Koichi

    2016-01-01

    A method of suppressing sound radiation to the far field of a near-field acoustic communication system using an evanescent sound field is proposed. The amplitude of the evanescent sound field generated from an infinite vibrating plate attenuates exponentially with increasing distance from the surface of the vibrating plate. However, a discontinuity of the sound field exists at the edge of the finite vibrating plate in practice, which broadens the wavenumber spectrum. A sound wave radiates over the evanescent sound field because of broadening of the wavenumber spectrum. Therefore, we calculated the optimum distribution of the particle velocity on the vibrating plate to reduce the broadening of the wavenumber spectrum. We focused on a window function that is utilized in the field of signal analysis for reducing the broadening of the frequency spectrum. The optimization calculation is necessary for the design of a window function suitable for suppressing sound radiation and securing a spatial area for data communication. In addition, a wide frequency bandwidth is required to increase the data transmission speed. Therefore, we investigated a suitable method for calculating the sound pressure level at the far field to confirm the variation of the distribution of sound pressure level determined on the basis of the window shape and frequency. The distribution of the sound pressure level at a finite distance was in good agreement with that obtained at an infinite far field under the condition generating the evanescent sound field. Consequently, the window function was optimized by the method used to calculate the distribution of the sound pressure level at an infinite far field using the wavenumber spectrum on the vibrating plate. According to the result of comparing the distributions of the sound pressure level in the cases with and without the window function, it was confirmed that the area whose sound pressure level was reduced from the maximum level to -50 dB was
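
    The effect of a window on the wavenumber spectrum can be illustrated by comparing an abruptly truncated (uniform) velocity distribution on a finite plate with a windowed one. The Hann window below is only a convenient stand-in for the optimized window discussed above, and the sampling is arbitrary.

    ```python
    import numpy as np

    N = 256                       # samples across the finite vibrating plate

    v_uniform = np.ones(N)        # abrupt truncation at the plate edges
    v_windowed = np.hanning(N)    # stand-in for the optimized window

    def wavenumber_spectrum_db(v):
        spec = np.abs(np.fft.rfft(v, n=8 * N))    # zero-padded for a smooth spectrum
        return 20 * np.log10(spec / spec.max())   # dB relative to the peak

    s_uni = wavenumber_spectrum_db(v_uniform)
    s_win = wavenumber_spectrum_db(v_windowed)

    # Highest sidelobe beyond both main lobes: a rough measure of the spectral
    # broadening that lets sound radiate past the evanescent region.
    print(f"uniform window:  max sidelobe ~ {s_uni[20:].max():.1f} dB")
    print(f"Hann window:     max sidelobe ~ {s_win[20:].max():.1f} dB")
    ```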

  9. Sound field inside acoustically levitated spherical drop

    NASA Astrophysics Data System (ADS)

    Xie, W. J.; Wei, B.

    2007-05-01

    The sound field inside an acoustically levitated small spherical water drop (radius of 1 mm) is studied under different incident sound pressures (amplitude p_0 = 2735-5643 Pa). The transmitted pressure p_tr in the drop shows a plane standing wave, which varies mainly in the vertical direction, and distributes almost uniformly in the horizontal direction. The maximum of p_tr is always located at the lowermost point of the levitated drop, whereas the secondary maximum appears at the uppermost point if the incident pressure amplitude p_0 is higher than an intermediate value (3044 Pa), in which case there exists a pressure nodal surface in the drop interior. The value of the maximum p_tr lies in a narrow range of 2489-3173 Pa, which has a lower limit of 2489 Pa when p_0 = 3044 Pa. The secondary maximum of p_tr is rather small and only remarkable at high incident pressures.

  10. Blood pressure reprogramming adapter assists signal recording

    NASA Technical Reports Server (NTRS)

    Vick, H. A.

    1967-01-01

    Blood pressure reprogramming adapter separates the two components of a blood pressure signal, a dc pressure signal and an ac Korotkoff sounds signal, so that the Korotkoff sounds are recorded on one channel as received while the dc pressure signal is converted to FM and recorded on a second channel.

  11. Some sound transmission loss characteristics of typical general aviation structural materials

    NASA Technical Reports Server (NTRS)

    Roskam, J.; Van Dam, C.; Grosveld, F.; Durenberger, D.

    1978-01-01

    Experimentally measured sound transmission loss characteristics of flat aluminum panels with and without damping and stiffness treatment are presented and discussed. The effect of pressurization on sound transmission loss of flat aluminum panels is shown to be significant.

  12. Loudness of steady sounds - A new theory

    NASA Technical Reports Server (NTRS)

    Howes, W. L.

    1979-01-01

    A new mathematical theory for calculating the loudness of steady sounds from power summation and frequency interaction, based on psychoacoustic and physiological information, assumes that loudness is a subjective measure of the electrical energy transmitted along the auditory nerve to the central nervous system. The auditory system consists of the mechanical part modeled by a bandpass filter with a transfer function dependent on the sound pressure, and the electrical part where the signal is transformed into a half-wave reproduction represented by the electrical power in impulsive discharges transmitted along neurons comprising the auditory nerve. In the electrical part the neurons are distributed among artificial parallel channels with frequency bandwidths equal to 'critical bandwidths for loudness', within which loudness is constant for constant sound pressure. The total energy transmitted to the central nervous system is the sum of the energy transmitted in all channels, and the loudness is proportional to the square root of the total filtered sound energy distributed over all channels. The theory explains many psychoacoustic phenomena, such as audible beats resulting from closely spaced tones, the effect on loudness of interacting sound stimuli that excite the same neurons, and individually subliminal sounds becoming audible when they lie within the same critical band.
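
    The central quantitative claim, that loudness is proportional to the square root of the total filtered energy summed over the critical-band channels, reduces to a one-line calculation. The band energies below are arbitrary and the proportionality constant is simply taken as 1.

    ```python
    import numpy as np

    # Arbitrary filtered sound energies in the parallel critical-band channels.
    band_energy = np.array([1.0e-9, 4.0e-9, 2.5e-9, 0.5e-9])  # J, placeholders

    total_energy = band_energy.sum()   # power summation across channels
    loudness = np.sqrt(total_energy)   # proportionality constant taken as 1
    print(f"total energy = {total_energy:.2e} J, loudness ~ {loudness:.2e} (arb. units)")
    ```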

  13. Graphene-on-paper sound source devices.

    PubMed

    Tian, He; Ren, Tian-Ling; Xie, Dan; Wang, Yu-Feng; Zhou, Chang-Jian; Feng, Ting-Ting; Fu, Di; Yang, Yi; Peng, Ping-Gang; Wang, Li-Gang; Liu, Li-Tian

    2011-06-28

    We demonstrate the interesting phenomenon that graphene can emit sound, expanding the applications of graphene into the acoustic field. Graphene-on-paper sound source devices are made by patterning graphene on paper substrates. Three graphene sheet samples with thicknesses of 100, 60, and 20 nm were fabricated. Sound emission from graphene is measured as a function of power, distance, angle, and frequency in the far field. A theoretical model of the air/graphene/paper/PCB board multilayer structure is established to analyze the sound directivity, frequency response, and efficiency. Measured sound pressure level (SPL) and efficiency are in good agreement with theoretical results. It is found that graphene has a notably flat frequency response over the wide ultrasound range of 20-50 kHz. In addition, thinner graphene sheets can produce higher SPL owing to their lower heat capacity per unit area (HCPUA). The infrared thermal images reveal that a thermoacoustic effect is the working principle. We find that the sound performance depends mainly on the HCPUA of the conductor and the thermal properties of the substrate. The paper-based graphene sound source devices are highly reliable and flexible, involve no mechanical vibration, and offer a simple structure and high performance. They could find wide application in multimedia, consumer electronics, biological, medical, and many other areas.

  14. Brief report: sound output of infant humidifiers.

    PubMed

    Royer, Allison K; Wilson, Paul F; Royer, Mark C; Miyamoto, Richard T

    2015-06-01

    The sound pressure levels (SPLs) of common infant humidifiers were determined to identify the likely sound exposure to infants and young children. This primary investigative research study was completed at a tertiary-level academic medical center otolaryngology and audiology laboratory. Five commercially available humidifiers were obtained from brick-and-mortar infant supply stores. Sound levels were measured at 20-, 100-, and 150-cm distances at all available humidifier settings. Two of 5 (40%) humidifiers tested had SPL readings greater than the recommended hospital infant nursery levels (50 dB) at distances up to 100 cm. In this preliminary study, it was demonstrated that humidifiers marketed for infant nurseries may produce appreciably high decibel levels. Further characterization of the effect of humidifier design on SPLs and further elucidation of ambient sound levels associated with hearing risk are necessary before definitive conclusions and recommendations can be made. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  15. Physics of thermo-acoustic sound generation

    NASA Astrophysics Data System (ADS)

    Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.

    2013-09-01

    We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.

  16. Early sound symbolism for vowel sounds

    PubMed Central

    Spector, Ferrinne; Maurer, Daphne

    2013-01-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape. PMID:24349684

  17. Automatic blood pressure measuring system (M092)

    NASA Technical Reports Server (NTRS)

    Nolte, R. W.

    1977-01-01

    The Blood Pressure Measuring System is described. It measures blood pressure by the noninvasive Korotkoff sound technique on a continual basis as physical stress is imposed during experiment M092, Lower Body Negative Pressure, and experiment M171, Metabolic Activity.

  18. A mechanism study of sound wave-trapping barriers.

    PubMed

    Yang, Cheng; Pan, Jie; Cheng, Li

    2013-09-01

    The performance of a sound barrier is usually degraded if a large reflecting surface is placed on the source side. A wave-trapping barrier (WTB), with its inner surface covered by wedge-shaped structures, has been proposed to confine waves within the area between the barrier and the reflecting surface, and thus improve the performance. In this paper, the deterioration in performance of a conventional sound barrier due to the reflecting surface is first explained in terms of the resonance effect of the trapped modes. At each resonance frequency, a strong and mode-controlled sound field is generated by the noise source both within and in the vicinity outside the region bounded by the sound barrier and the reflecting surface. It is found that the peak sound pressures in the barrier's shadow zone, which correspond to the minimum values in the barrier's insertion loss, are largely determined by the resonance frequencies and by the shapes and losses of the trapped modes. These peak pressures usually result in high sound intensity component impinging normal to the barrier surface near the top. The WTB can alter the sound wave diffraction at the top of the barrier if the wavelengths of the sound wave are comparable or smaller than the dimensions of the wedge. In this case, the modified barrier profile is capable of re-organizing the pressure distribution within the bounded domain and altering the acoustic properties near the top of the sound barrier.

  19. Standing Sound Waves in Air with DataStudio

    ERIC Educational Resources Information Center

    Kraftmakher, Yaakov

    2010-01-01

    Two experiments related to standing sound waves in air are adapted for using the ScienceWorkshop data-acquisition system with the DataStudio software from PASCO scientific. First, the standing waves are created by reflection from a plane reflector. The distribution of the sound pressure along the standing wave is measured. Second, the resonance…

  20. Distress sounds of thorny catfishes emitted underwater and in air: characteristics and potential significance.

    PubMed

    Knight, Lisa; Ladich, Friedrich

    2014-11-15

    Thorny catfishes produce stridulation (SR) sounds using their pectoral fins and drumming (DR) sounds via a swimbladder mechanism in distress situations when hand held in water and in air. It has been argued that SR and DR sounds are aimed at different receivers (predators) in different media. The aim of this study was to analyse and compare sounds emitted in both air and water in order to test different hypotheses on the functional significance of distress sounds. Five representatives of the family Doradidae were investigated. Fish were hand held and sounds emitted in air and underwater were recorded (number of sounds, sound duration, dominant and fundamental frequency, sound pressure level and peak-to-peak amplitudes). All species produced SR sounds in both media, but DR sounds could not be recorded in air for two species. Differences in sound characteristics between media were small and mainly limited to spectral differences in SR. The number of sounds emitted decreased over time, whereas the duration of SR sounds increased. The dominant frequency of SR and the fundamental frequency of DR decreased and sound pressure level of SR increased with body size across species. The hypothesis that catfish produce more SR sounds in air and more DR sounds in water as a result of different predation pressure (birds versus fish) could not be confirmed. It is assumed that SR sounds serve as distress sounds in both media, whereas DR sounds might primarily be used as intraspecific communication signals in water in species possessing both mechanisms. © 2014. Published by The Company of Biologists Ltd.

  1. School Sound Level Study.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    California has conducted on-site sound surveys of 36 different schools to determine the degree of noise, and thus disturbance, within the learning environment. This report provides the methodology and results of the survey, including descriptive charts and graphs illustrating typical desirable and undesirable sound levels. Results are presented…

  2. The Bosstown Sound.

    ERIC Educational Resources Information Center

    Burns, Gary

    Based on the argument that (contrary to critical opinion) the musicians in the various bands associated with Bosstown Sound were indeed talented, cohesive individuals and that the bands' lack of renown was partially a result of ill-treatment by record companies and the press, this paper traces the development of the Bosstown Sound from its…

  3. The sounds of nanotechnology

    NASA Astrophysics Data System (ADS)

    Campbell, Norah; Deane, Cormac; Murphy, Padraig

    2017-07-01

    Public perceptions of nanotechnology are shaped by sound in surprising ways. Our analysis of the audiovisual techniques employed by nanotechnology stakeholders shows that well-chosen sounds can help to win public trust, create value and convey the weird reality of objects on the nanoscale.

  4. Breaking the Sound Barrier

    ERIC Educational Resources Information Center

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  5. Exploring Noise: Sound Pollution.

    ERIC Educational Resources Information Center

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  6. Neonatal incubators: a toxic sound environment for the preterm infant?*.

    PubMed

    Marik, Paul E; Fuller, Christopher; Levitov, Alexander; Moll, Elizabeth

    2012-11-01

    High sound pressure levels may be harmful to the maturing newborn. Current guidelines suggest that the sound pressure levels within a neonatal intensive care unit should not exceed 45 dB(A). It is likely that environmental noise as well as the noise generated by the incubator fan and respiratory equipment may contribute to the total sound pressure levels. Knowledge of the contribution of each component and source is important to develop effective strategies to reduce noise within the incubator. The objectives of this study were to determine the sound levels, sound spectra, and major sources of sound within a modern neonatal incubator (Giraffe Omnibed; GE Healthcare, Helsinki, Finland) using a sound simulation study to replicate the conditions of a preterm infant undergoing high-frequency jet ventilation (Life Pulse, Bunnell, UT). Using advanced sound data acquisition and signal processing equipment, we measured and analyzed the sound level at a dummy infant's ear and at the head level outside the enclosure. The sound data time histories were digitally acquired and processed using a digital Fast Fourier Transform algorithm to provide spectra of the sound and cumulative sound pressure levels (dBA). The simulation was done with the incubator cooling fan and ventilator switched on or off. In addition, tests were carried out with the enclosure sides closed and hood down and then with the enclosure sides open and the hood up to determine the importance of interior incubator reverberance on the interior sound levels. With all the equipment off and the hood down, the sound pressure levels were 53 dB(A) inside the incubator. The sound pressure levels increased to 68 dB(A) with all equipment switched on (approximately 10 times louder than recommended). The sound intensity was 6.0 × 10^-8 W/m^2; this sound level is roughly comparable with that generated by a kitchen exhaust fan on high. Turning the ventilator off reduced the overall sound pressure levels to 64 dB(A) and

  7. The sound manifesto

    NASA Astrophysics Data System (ADS)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  8. Non-ossicular signal transmission in human middle ears: Experimental assessment of the “acoustic route” with perforated tympanic membranes

    PubMed Central

    Voss, Susan E.; Rosowski, John J.; Merchant, Saumil N.; Peake, William T.

    2008-01-01

    Direct acoustic stimulation of the cochlea by the sound-pressure difference between the oval and round windows (called the “acoustic route”) has been thought to contribute to hearing in some pathological conditions, along with the normally dominant “ossicular route.” To determine the efficacy of this acoustic route and its constituent mechanisms in human ears, sound pressures were measured at three locations in cadaveric temporal bones [with intact and perforated tympanic membranes (TMs)]: (1) in the external ear canal lateral to the TM, P_TM; (2) in the tympanic cavity lateral to the oval window, P_OW; and (3) near the round window, P_RW. Sound transmission via the acoustic route is described by two concatenated processes: (1) coupling of sound pressure from ear canal to middle-ear cavity, H_PCAV ≡ P_CAV/P_TM, where P_CAV represents the middle-ear cavity pressure, and (2) sound-pressure difference between the windows, H_WPD ≡ (P_OW − P_RW)/P_CAV. Results show that: H_PCAV depends on perforation size but not perforation location; H_WPD depends on neither perforation size nor location. The results (1) provide a description of the window pressures based on measurements, (2) refute the common otological view that TM perforation location affects the “relative phase of the pressures at the oval and round windows,” and (3) show with an intact ossicular chain that acoustic-route transmission is substantially below ossicular-route transmission except for low frequencies with large perforations. Thus, hearing loss from TM perforations results primarily from reduction in sound coupling via the ossicular route. Some features of the frequency dependence of H_PCAV and H_WPD can be interpreted in terms of a structure-based lumped-element acoustic model of the perforation and middle-ear cavities. PMID:17902851
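
    The two concatenated processes can be illustrated numerically. The sketch below assumes made-up complex pressures at a single frequency (the paper reports frequency-dependent measurements); it shows how H_PCAV and H_WPD multiply to give the acoustic-route window-pressure difference relative to the ear-canal pressure.

```python
# Minimal sketch of the two concatenated transfer functions described above,
# using hypothetical single-frequency complex pressures (illustrative only).
import cmath

p_tm = 1.0 + 0.0j      # ear-canal pressure lateral to the TM (reference)
p_cav = 0.6 - 0.2j     # middle-ear cavity pressure (hypothetical)
p_ow = 0.55 - 0.18j    # pressure near the oval window (hypothetical)
p_rw = 0.50 - 0.22j    # pressure near the round window (hypothetical)

h_pcav = p_cav / p_tm            # ear canal -> middle-ear cavity coupling
h_wpd = (p_ow - p_rw) / p_cav    # window pressure difference per unit cavity pressure

# The overall acoustic-route drive relative to ear-canal pressure is the
# product of the two stages, i.e. (p_ow - p_rw) / p_tm.
h_acoustic_route = h_pcav * h_wpd

mag = abs(h_acoustic_route)
phase_deg = cmath.phase(h_acoustic_route) * 180.0 / cmath.pi
print(f"|H| = {mag:.3f}, phase = {phase_deg:.1f} deg")
```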

  9. Sound as artifact

    NASA Astrophysics Data System (ADS)

    Benjamin, Jeffrey L.

    A distinguishing feature of the discipline of archaeology is its reliance upon sensory dependent investigation. As perceived by all of the senses, the felt environment is a unique area of archaeological knowledge. It is generally accepted that the emergence of industrial processes in the recent past has been accompanied by unprecedented sonic extremes. The work of environmental historians has provided ample evidence that the introduction of much of this unwanted sound, or "noise," was an area of contestation. More recent research in the history of sound has called for more nuanced distinctions than the noisy/quiet dichotomy. Acoustic archaeology tends to focus upon a reconstruction of sound-producing instruments and spaces with a primary goal of ascertaining intentionality. Most archaeoacoustic research is focused on learning more about the sonic world of people within prehistoric timeframes while some research has been done on historic sites. In this thesis, by way of a meditation on industrial sound and the physical remains of the Quincy Mining Company blacksmith shop (Hancock, MI) in particular, I argue for an acceptance and inclusion of sound as artifact in and of itself. I am introducing the concept of an individual sound-form, or sonifact, as a reproducible, repeatable, representable physical entity, created by tangible, perhaps even visible, host-artifacts. A sonifact is a sound that endures through time, with negligible variability. Through the piecing together of historical and archaeological evidence, in this thesis I present a plausible sonifactual assemblage at the blacksmith shop in April 1916 as it may have been experienced by an individual traversing the vicinity on foot: an 'historic soundwalk.' The sensory apprehension of abandoned industrial sites is multi-faceted. In this thesis I hope to make the case for an acceptance of sound as a primary heritage value when thinking about the industrial past, and also for an increased awareness and acceptance

  10. GPS Sounding Rocket Developments

    NASA Technical Reports Server (NTRS)

    Bull, Barton

    1999-01-01

    Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data including: chemical makeup and physical processes taking place in the atmosphere, natural radiation surrounding the Earth, data on the Sun, stars, galaxies and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities. This paper addresses the NASA Wallops Island history of GPS Sounding Rocket experience since 1994 and the development of a highly accurate and useful system.

  11. Heart sounds analysis via esophageal stethoscope system in beagles.

    PubMed

    Park, Sang Hi; Shin, Young Duck; Bae, Jin Ho; Kwon, Eun Jung; Lee, Tae-Soo; Shin, Ji-Yun; Kim, Yeong-Cheol; Min, Gyeong-Deuk; Kim, Myoung hwan

    2013-10-01

    The esophageal stethoscope is less invasive, easy to handle, and provides a great deal of information. The purpose of this study was to investigate the correlation between blood pressure and heart sounds as measured by an esophageal stethoscope. Four male beagles weighing 10 to 12 kg were selected as experimental subjects. After general anesthesia, the esophageal stethoscope was inserted. After connecting the microphone, the heart sounds were visualized and recorded through self-developed equipment and software. The amplitudes of S1 and S2 were monitored in real time to examine changes as the blood pressure increased and decreased. The relationship between the ratio of S1 to S2 (S1/S2) and changes in blood pressure due to ephedrine was evaluated. The same experiment was performed with different concentrations of isoflurane. In the inotropic-drug experiment, S1 showed a high correlation with changes in blood pressure. The relationship between S1/S2 and change in blood pressure showed a positive correlation in each experimental subject. In the volatile-anesthetic experiment, the heart sounds decreased as MAC increased. Heart sounds were analyzed successfully with the esophageal stethoscope through the self-developed program and equipment. A proportional change in heart sounds was confirmed when blood pressure was changed using inotropic drugs or volatile anesthetics. The esophageal stethoscope can achieve the closest proximity to the heart for hearing sounds in a non-invasive manner.
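
    A hedged illustration of the S1/S2 amplitude-ratio measurement described above: the sketch below is not the authors' program; it uses a synthetic envelope and hand-picked windows standing in for the first and second heart sounds.

```python
# Hedged sketch: compute the S1/S2 amplitude ratio from a heart-sound
# envelope, given assumed time windows around S1 and S2. Signal and window
# times are synthetic placeholders, not recorded data.
import numpy as np

fs = 1000  # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic envelope: two bumps standing in for S1 and S2
envelope = (0.8 * np.exp(-((t - 0.10) / 0.02) ** 2)
            + 0.5 * np.exp(-((t - 0.45) / 0.02) ** 2))

def peak_amplitude(env, fs, t_start, t_end):
    """Peak of the envelope inside a time window (seconds)."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    return env[i0:i1].max()

s1 = peak_amplitude(envelope, fs, 0.05, 0.20)   # window assumed to contain S1
s2 = peak_amplitude(envelope, fs, 0.40, 0.55)   # window assumed to contain S2
print(f"S1 = {s1:.2f}, S2 = {s2:.2f}, S1/S2 = {s1 / s2:.2f}")
```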

  12. Multichannel sound reinforcement systems at work in a learning environment

    NASA Astrophysics Data System (ADS)

    Malek, John; Campbell, Colin

    2003-04-01

    Many people have experienced the entertaining benefits of a surround sound system, either in their own home or in a movie theater, but another application exists for multichannel sound that has for the most part gone unused. This is the application of multichannel sound systems to the learning environment. By incorporating a 7.1 surround processor and a touch panel interface programmable control system, the main lecture hall at the University of Michigan Taubman College of Architecture and Urban Planning has been converted from an ordinary lecture hall to a working audiovisual laboratory. The multichannel sound system is used in a wide variety of experiments, including exposure to sounds to test listeners' aural perception of the tonal characteristics of varying pitch, reverberation, speech transmission index, and sound-pressure level. The touch panel's custom interface allows a variety of user groups to control different parts of the AV system and provides preset capability that allows for numerous system configurations.

  13. Light aircraft sound transmission studies - Noise reduction model

    NASA Technical Reports Server (NTRS)

    Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.

    1987-01-01

    Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also presented. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
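
    A minimal sketch of a diffuse-field room-equation estimate in the spirit of the model described above (the paper's model uses measured transmitted sound intensity; the formula, surface area, absorption coefficient, and transmitted power below are textbook assumptions, not the study's values):

```python
# Hedged sketch: estimate the reverberant-field SPL in a cabin from the
# transmitted sound power, assuming a diffuse field:
#   Lp ~ Lw + 10*log10(4 / Rc), with room constant Rc = S*a / (1 - a).
import math

def room_constant(surface_area_m2, avg_absorption_coeff):
    a = avg_absorption_coeff
    return surface_area_m2 * a / (1.0 - a)

def reverberant_spl(transmitted_power_w, surface_area_m2, avg_absorption_coeff):
    lw = 10.0 * math.log10(transmitted_power_w / 1e-12)   # sound power level, dB re 1 pW
    rc = room_constant(surface_area_m2, avg_absorption_coeff)
    return lw + 10.0 * math.log10(4.0 / rc)

# Hypothetical cabin: 12 m^2 of interior surface, average absorption 0.25,
# 1 mW of sound power transmitted through the fuselage.
print(f"Estimated cabin SPL: {reverberant_spl(1e-3, 12.0, 0.25):.1f} dB")
```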

  14. AVE/VAS 3: 25-mb sounding data

    NASA Technical Reports Server (NTRS)

    Sienkiewicz, M. E.

    1982-01-01

    The rawinsonde sounding program for the AVE/VAS 3 experiment is described. Tabulated data are presented at 25-mb intervals for the 24 National Weather Service stations and 14 special stations participating in the experiment. Soundings were taken at 3-hr intervals, beginning at 1200 GMT on March 27, 1982, and ending at 0600 GMT on March 28, 1982 (7 sounding times). An additional sounding was taken at the National Weather Service stations at 1200 GMT on March 28, 1982, at the normal synoptic observation time. The method of processing soundings is briefly discussed, estimates of the RMS errors in the data are presented, and an example of contact data is given. Termination pressures of soundings taken in the meso-beta-scale network are tabulated, as are observations of ground temperature at a depth of 2 cm.

  15. Sound Visualization and Holography

    ERIC Educational Resources Information Center

    Kock, Winston E.

    1975-01-01

    Describes liquid surface holograms including their application to medicine. Discusses interference and diffraction phenomena using sound wave scanning techniques. Compares focussing by zone plate to holographic image development. (GH)

  16. Sound source measurement by using a passive sound insulation and a statistical approach

    NASA Astrophysics Data System (ADS)

    Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.

    2015-10-01

    This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background-noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. At low frequency, the statistical approach improves on the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method, and on the measurement error related to its application, are reported as well.

  17. 33 CFR 86.05 - Sound signal intensity and range of audibility.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    33 CFR 86.05 (Navigation and Navigable Waters; Department of Homeland Security; Inland Navigation Rules; Annex III: Technical Details of Sound Signal Appliances, Whistles; 2011-07-01): Sound signal intensity and range of audibility. "... in the direction of the forward axis of the whistle and at a distance of 1 meter from it, a sound pressure level in ..."

  18. Statistical Analysis for Subjective and Objective Evaluations of Dental Drill Sounds

    PubMed Central

    Yamada, Tomomi; Kuwano, Sonoko; Ebisu, Shigeyuki; Hayashi, Mikako

    2016-01-01

    The sound produced by a dental air turbine handpiece (dental drill) can markedly influence the sound environment in a dental clinic. Indeed, many patients report that the sound of a dental drill elicits an unpleasant feeling. Although several manufacturers have attempted to reduce the sound pressure levels produced by dental drills during idling based on ISO 14457, the sound emitted by such drills under active drilling conditions may negatively influence the dental clinic sound environment. The physical metrics related to the unpleasant impressions associated with dental drill sounds have not been determined. In the present study, psychological measurements of dental drill sounds were conducted with the aim of facilitating improvement of the sound environment at dental clinics. Specifically, we examined the impressions elicited by the sounds of 12 types of dental drills in idling and drilling conditions using a semantic differential. The analysis revealed that the impressions of dental drill sounds varied considerably between idling and drilling conditions and among the examined drills. This finding suggests that measuring the sound of a dental drill in idling conditions alone may be insufficient for evaluating the effects of the sound. We related the results of the psychological evaluations to those of measurements of the physical metrics of equivalent continuous A-weighted sound pressure levels (LAeq) and sharpness. Factor analysis indicated that impressions of the dental drill sounds consisted of two factors: “metallic and unpleasant” and “powerful”. LAeq had a strong relationship with “powerful impression”, calculated sharpness was positively related to “metallic impression”, and “unpleasant impression” was predicted by the combination of both LAeq and calculated sharpness. The present analyses indicate that, in addition to a reduction in sound pressure level, refining the frequency components of dental drill sounds is important for creating

  19. Statistical Analysis for Subjective and Objective Evaluations of Dental Drill Sounds.

    PubMed

    Yamada, Tomomi; Kuwano, Sonoko; Ebisu, Shigeyuki; Hayashi, Mikako

    2016-01-01

    The sound produced by a dental air turbine handpiece (dental drill) can markedly influence the sound environment in a dental clinic. Indeed, many patients report that the sound of a dental drill elicits an unpleasant feeling. Although several manufacturers have attempted to reduce the sound pressure levels produced by dental drills during idling based on ISO 14457, the sound emitted by such drills under active drilling conditions may negatively influence the dental clinic sound environment. The physical metrics related to the unpleasant impressions associated with dental drill sounds have not been determined. In the present study, psychological measurements of dental drill sounds were conducted with the aim of facilitating improvement of the sound environment at dental clinics. Specifically, we examined the impressions elicited by the sounds of 12 types of dental drills in idling and drilling conditions using a semantic differential. The analysis revealed that the impressions of dental drill sounds varied considerably between idling and drilling conditions and among the examined drills. This finding suggests that measuring the sound of a dental drill in idling conditions alone may be insufficient for evaluating the effects of the sound. We related the results of the psychological evaluations to those of measurements of the physical metrics of equivalent continuous A-weighted sound pressure levels (LAeq) and sharpness. Factor analysis indicated that impressions of the dental drill sounds consisted of two factors: "metallic and unpleasant" and "powerful". LAeq had a strong relationship with "powerful impression", calculated sharpness was positively related to "metallic impression", and "unpleasant impression" was predicted by the combination of both LAeq and calculated sharpness. The present analyses indicate that, in addition to a reduction in sound pressure level, refining the frequency components of dental drill sounds is important for creating a comfortable sound

  20. Technology, Sound and Popular Music.

    ERIC Educational Resources Information Center

    Jones, Steve

    The ability to record sound is power over sound. Musicians, producers, recording engineers, and the popular music audience often refer to the sound of a recording as something distinct from the music it contains. Popular music is primarily mediated via electronics, via sound, and not by means of written notes. The ability to preserve or modify…

  1. Nearshore Birds in Puget Sound

    DTIC Science & Technology

    2006-05-01

    Published by Seattle District, U.S. Army Corps of Engineers, Seattle, Washington. Nearshore Birds in Puget Sound, prepared by Joseph B. Buchanan, Washington Department of Fish and Wildlife, in support of the Puget Sound Nearshore Partnership; Technical Report 2006-05. Cites: Kriete, B. 2007. Orcas in Puget Sound. Puget Sound Nearshore Partnership Technical Report.

  2. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

    Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.

  3. Meteor fireball sounds identified

    NASA Technical Reports Server (NTRS)

    Keay, Colin

    1992-01-01

    Sounds heard simultaneously with the flight of large meteor fireballs are electrical in origin. Confirmation that Extremely/Very Low Frequency (ELF/VLF) electromagnetic radiation is produced by the fireball was obtained by Japanese researchers. Although the generation mechanism is not fully understood, studies of the Meteorite Observation and Recovery Project (MORP) and other fireball data indicate that interaction with the atmosphere is definitely responsible and the cut-off magnitude of -9 found for sustained electrophonic sounds is supported by theory. Brief bursts of ELF/VLF radiation may accompany flares or explosions of smaller fireballs, producing transient sounds near favorably placed observers. Laboratory studies show that mundane physical objects can respond to electrical excitation and produce audible sounds. Reports of electrophonic sounds should no longer be discarded. A catalog of over 300 reports relating to electrophonic phenomena associated with meteor fireballs, aurorae, and lightning was assembled. Many other reports have been cataloged in Russian. These may assist the full solution of the similar long-standing and contentious mystery of audible auroral displays.

  4. GPS Sounding Rocket Developments

    NASA Technical Reports Server (NTRS)

    Bull, Barton

    1999-01-01

    Sounding rockets are suborbital launch vehicles capable of carrying scientific payloads several hundred miles in altitude. These missions return a variety of scientific data including: chemical makeup and physical processes taking place in the atmosphere, natural radiation surrounding the Earth, data on the Sun, stars, galaxies and many other phenomena. In addition, sounding rockets provide a reasonably economical means of conducting engineering tests for instruments and devices used on satellites and other spacecraft prior to their use in more expensive activities. The NASA Sounding Rocket Program is managed by personnel from Goddard Space Flight Center Wallops Flight Facility (GSFC/WFF) in Virginia. Typically around thirty of these rockets are launched each year, either from established ranges at Wallops Island, Virginia; Poker Flat Research Range, Alaska; White Sands Missile Range, New Mexico; or from Canada, Norway and Sweden. Many times launches are conducted from temporary launch ranges in remote parts of the world, requiring considerable expense to transport and operate tracking radars. An inverse differential GPS system has been developed for sounding rockets. This paper addresses the NASA Wallops Island history of GPS Sounding Rocket experience since 1994 and the development of a highly accurate and useful system.

  5. Theoretical Modelling of Sound Radiation from Plate

    NASA Astrophysics Data System (ADS)

    Zaman, I.; Rozlan, S. A. M.; Yusoff, A.; Madlan, M. A.; Chan, S. W.

    2017-01-01

    The recent development of the aerospace, automotive and building industries demands the use of lightweight materials such as thin plates. However, such plates can contribute significant vibration and sound radiation, which eventually leads to increased noise in the community. In this study, the fundamental behavior of sound pressure radiated from a simply supported thin plate (SSP) was analyzed through the derivation of mathematical equations and numerical simulation in ANSYS®. The solution of the mathematical equations for sound radiated from an SSP was visualized using MATLAB®. Sound pressure level responses were evaluated in the far field as well as the near field over the frequency range 0-200 Hz. Results show that four resonance frequencies (12 Hz, 60 Hz, 106 Hz and 158 Hz) were identified, corresponding to the peaks in the frequency response function. The mathematical derivation also correlated well with the ANSYS® simulation model, with an error of less than 10%. It can be concluded that the obtained model is reliable and can be applied to further analysis, such as reducing noise emitted from a vibrating thin plate.
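
    For comparison with such simulation results, the classical formula for the natural frequencies of a simply supported rectangular thin plate can be evaluated directly. The sketch below uses assumed plate dimensions and material properties, not those of the paper.

```python
# Minimal sketch (not the paper's ANSYS model): classical natural frequencies
# of a simply supported rectangular thin plate,
#   f_mn = (pi/2) * [(m/a)^2 + (n/b)^2] * sqrt(D / (rho*h)),
# with bending stiffness D = E*h^3 / (12*(1 - nu^2)).
import math

def plate_mode_frequency(m, n, a, b, h, E, rho, nu):
    D = E * h**3 / (12.0 * (1.0 - nu**2))          # bending stiffness
    return (math.pi / 2.0) * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))

# Hypothetical steel plate: 0.5 m x 0.4 m, 2 mm thick (assumed values)
a, b, h = 0.5, 0.4, 0.002
E, rho, nu = 2.1e11, 7850.0, 0.3
for m, n in [(1, 1), (2, 1), (1, 2), (2, 2)]:
    print(f"mode ({m},{n}): {plate_mode_frequency(m, n, a, b, h, E, rho, nu):.1f} Hz")
```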

  6. Spectral Characteristics of Wake Vortex Sound During Roll-Up

    NASA Technical Reports Server (NTRS)

    Booth, Earl R., Jr. (Technical Monitor); Zhang, Yan; Wang, Frank Y.; Hardin, Jay C.

    2003-01-01

    This report presents an analysis of the sound spectra generated by a trailing aircraft vortex during its rolling-up process. The study demonstrates that a rolling-up vortex could produce low frequency (less than 100 Hz) sound with very high intensity (60 dB above threshold of human hearing) at a distance of 200 ft from the vortex core. The spectrum then drops off rapidly thereafter. A rigorous analytical approach has been adopted in this report to derive the spectrum of vortex sound. First, the sound pressure was solved from an alternative treatment of Lighthill's acoustic analogy approach [1]. After the application of Green's function for free space, a tensor analysis was applied to permit the removal of the source term singularity of the wave equation in the far field. Consequently, the sound pressure is expressed in terms of the retarded time that indicates the time history and spatial distribution of the sound source. The Fourier transformation is then applied to the sound pressure to compute its spectrum. As a result, the Fourier transformation greatly simplifies the expression of the vortex sound pressure involving the retarded time, so that the numerical computation is applicable with ease for axisymmetric line vortices during the rolling-up process. The vortex model assumes that the vortex circulation is proportional to time and that the core radius is constant. In addition, the velocity profile is assumed to be self-similar along the aircraft flight path, so that a benchmark vortex velocity profile can be devised to obtain a closed form solution, which is then used to validate the numerical calculations for other more realistic vortex profiles for which no closed form solutions are available. The study suggests that acoustic sensors operating in the low-frequency band could be profitably deployed for detecting the vortex sound during the rolling-up process.

  7. Monaural sound localization revisited.

    PubMed

    Wightman, F L; Kistler, D J

    1997-02-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  8. Monaural Sound Localization Revisited

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1997-01-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  9. Sounding Equipment Studies,

    DTIC Science & Technology

    1967-11-06

    Considered: 1. Single sounding head per craft; 2. Multiple sounding heads per craft (paravanes or bar); 3. Mother craft with manned daughter boats; 4. Mother craft with unmanned daughter boats; 5. Craft refueling at mother ship; 6. Craft refueling (and crew change) by logistics boat; 7. Various ... sensor costs, then, are simply C = K_M C_s L / L_s (Eq. 27), where L = useful life of sensor and K_M = 1.0 plus the fraction of cost allocated to repair.

  10. The heart sound preprocessor

    NASA Technical Reports Server (NTRS)

    Chen, W. T.

    1972-01-01

    Technology developed for signal and data processing was applied to diagnostic techniques in the area of phonocardiography (PCG), the graphic recording of the sounds of the heart generated by the functioning of the aortic and ventricular valves. The relatively broad bandwidth of the PCG signal (20 to 2000 Hz) was reduced to less than 100 Hz by the use of a heart sound envelope. The process involves full-wave rectification of the PCG signal, envelope detection of the rectified wave, and low-pass filtering of the resultant envelope.
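
    A hedged sketch of that envelope pipeline (full-wave rectification, then low-pass filtering), applied to a synthetic stand-in for a PCG signal rather than real data:

```python
# Hedged sketch of the envelope-based preprocessing described above:
# rectify a (synthetic) wideband phonocardiogram, then low-pass filter the
# rectified wave to obtain a low-frequency envelope.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 4000                             # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
# Synthetic PCG stand-in: bursts of tonal energy at the heart-sound times
pcg = np.sin(2 * np.pi * 120 * t) * (np.exp(-((t % 1.0) - 0.1)**2 / 0.001)
                                     + 0.6 * np.exp(-((t % 1.0) - 0.45)**2 / 0.001))

rectified = np.abs(pcg)               # full-wave rectification
b, a = butter(4, 50 / (fs / 2))       # low-pass filter, ~50 Hz cutoff (<100 Hz envelope)
envelope = filtfilt(b, a, rectified)  # zero-phase filtering of the rectified wave

print("Envelope peak:", envelope.max())
```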

  11. The Imagery of Sound

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Automated Analysis Corporation's COMET is a suite of acoustic analysis software for advanced noise prediction. It analyzes the origin, radiation, and scattering of noise, and supplies information on how to achieve noise reduction and improve sound characteristics. COMET's Structural Acoustic Foam Engineering (SAFE) module extends the sound field analysis capability of foam and other materials. SAFE shows how noise travels while airborne, how it travels within a structure, and how these media interact to affect other aspects of the transmission of noise. The COMET software reduces design time and expense while optimizing a final product's acoustical performance. COMET was developed through SBIR funding and Langley Research Center for Automated Analysis Corporation.

  12. Sounding rockets in Antarctica

    NASA Technical Reports Server (NTRS)

    Alford, G. C.; Cooper, G. W.; Peterson, N. E.

    1982-01-01

    Sounding rockets are versatile tools for scientists studying the atmospheric region which is located above balloon altitudes but below orbital satellite altitudes. Three NASA Nike-Tomahawk sounding rockets were launched from Siple Station in Antarctica in an upper atmosphere physics experiment in the austral summer of 1980-81. The 110 kg payloads were carried to 200 km apogee altitudes in a coordinated project with Arcas rocket payloads and instrumented balloons. This Siple Station Expedition demonstrated the feasibility of launching large, near 1,000 kg, rocket systems from research stations in Antarctica. The remoteness of research stations in Antarctica and the severe environment are major considerations in planning rocket launching expeditions.

  13. Detection and generation of first sound in 4He by vibrating superleak transducers

    NASA Astrophysics Data System (ADS)

    Giordano, N.; Edison, N.

    1986-07-01

    Measurement is made of the first-sound generation and detection efficiencies of vibrating superleak transducers (VSTs) operated in superfluid 4He. This is accomplished by using an ordinary pressure transducer to generate first sound with a VST as the detector, and by using a pressure transducer to detect the sound generated by a VST. The results are in reasonably good agreement with the current theory of VST operation.

  14. Detection and generation of first sound in /sup 4/He by vibrating superleak transducers

    SciT

    Giordano, N.; Edison, N.

    Measurement is made of the first-sound generation and detection efficiencies of vibrating superleak transducers (VSTs) operated in superfluid /sup 4/He. This is accomplished by using an ordinary pressure transducer to generate first sound with a VST as the detector, and by using a pressure transducer to detect the sound generated by a VST. The results are in reasonably good agreement with the current theory of VST operation.

  15. Investigation of genesis of gallop sounds in dogs by quantitative phonocardiography and digital frequency analysis.

    PubMed

    Aubert, A E; Denys, B G; Meno, F; Reddy, P S

    1985-05-01

    Several investigators have noted external gallop sounds to be of higher amplitude than their corresponding internal sounds (S3 and S4). In this study we hoped to determine if S3 and S4 are transmitted in the same manner as S1. In 11 closed-chest dogs, external (apical) and left ventricular pressures and sounds were recorded simultaneously with transducers with identical sensitivity and frequency responses. Volume and pressure overload and positive and negative inotropic drugs were used to generate gallop sounds. Recordings were made in the control state and after the various interventions. S3 and S4 were recorded in 17 experiments each. The amplitude of the external S1 was uniformly higher than that of internal S1 and internal gallop sounds were inconspicuous. With use of Fourier transforms, the gain function was determined by comparing internal to external S1. By inverse transform, the amplitude of the internal gallop sounds was predicted from external sounds. The internal sounds of significant amplitude were predicted in many instances, but the actual recordings showed no conspicuous sounds. The absence of internal gallop sounds of expected amplitude as calculated from the external gallop sounds and the gain function derived from the comparison of internal and external S1 make it very unlikely that external gallop sounds are derived from internal sounds.
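
    The gain-function idea can be sketched as follows. The signals below are synthetic placeholders, and the processing is a simplified stand-in for the authors' Fourier-transform procedure: estimate the internal-to-external gain from S1, then invert it to predict the internal gallop sound from the external recording.

```python
# Hedged sketch: estimate a gain function from internal to external S1 via
# FFT, then apply its inverse to an external gallop sound to predict the
# internal one. All signals are synthetic placeholders.
import numpy as np

fs = 2000
n = 1024
t = np.arange(n) / fs
internal_s1 = np.sin(2 * np.pi * 40 * t) * np.exp(-t * 30)
external_s1 = 0.5 * np.sin(2 * np.pi * 40 * t + 0.3) * np.exp(-t * 35)
external_gallop = 0.2 * np.sin(2 * np.pi * 25 * t) * np.exp(-t * 25)

eps = 1e-12  # small regularizer to avoid division by zero
gain = np.fft.rfft(external_s1) / (np.fft.rfft(internal_s1) + eps)   # internal -> external
predicted_internal_gallop = np.fft.irfft(np.fft.rfft(external_gallop) / (gain + eps), n=n)

print("Predicted internal gallop peak amplitude:", np.abs(predicted_internal_gallop).max())
```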

  16. A Patient-Centered, Provider-Facilitated Approach to the Refinement of Nonlinear Frequency Compression Parameters Based on Subjective Preference Ratings of Amplified Sound Quality.

    PubMed

    Johnson, Earl E; Light, Keri C

    2015-09-01

    equal preference between EC1 and EC2 perhaps, in part, because EC2 showed no objective improvement in audibility for six of the 14 participants (42%). Thirteen of the 14 participants showed no preference between NAL-NL2 and EC3, but all participants had an objective improvement in audibility. With more NFC than EC3, more and more participants preferred the other EC with less NFC in the paired comparison. By referencing the recommended sensation levels of amplitude compression (e.g., NAL-NL2) in the ear canal of hearing aid wearers, the targeting of NFC parameters can likely be optimized with respect to improvements in effective audibility that may contribute to speech recognition without adversely impacting sound quality. After targeting of NFC parameters, providers can facilitate decisions about the use of NFC parameters (strengths of processing) via sound quality preference judgments using paired comparisons. American Academy of Audiology.

  17. Effect of the spectrum of a high-intensity sound source on the sound-absorbing properties of a resonance-type acoustic lining

    NASA Astrophysics Data System (ADS)

    Ipatov, M. S.; Ostroumov, M. N.; Sobolev, A. F.

    2012-07-01

    Experimental results are presented on the effect of both the sound pressure level and the type of spectrum of a sound source on the impedance of an acoustic lining. The spectra under study include those of white noise, a narrow-band signal, and a signal with a preset waveform. It is found that, to obtain reliable data on the impedance of an acoustic lining from the results of interferometric measurements, the total sound pressure level of white noise or the maximal sound pressure level of a pure tone (at every oscillation frequency) needs to be identical to the total sound pressure level of the actual source at the site of acoustic lining on the channel wall.

  18. Using the Real-Ear-to-Coupler Difference within the American Academy of Audiology Pediatric Amplification Guideline: Protocols for Applying and Predicting Earmold RECDs.

    PubMed

    Moodie, Sheila; Pietrobon, Jonathan; Rall, Eileen; Lindley, George; Eiten, Leisha; Gordey, Dave; Davidson, Lisa; Moodie, K Shane; Bagatto, Marlene; Haluschak, Meredith Magathan; Folkeard, Paula; Scollie, Susan

    2016-03-01

    Real-ear-to-coupler difference (RECD) measurements are used for the purposes of estimating degree and configuration of hearing loss (in dB SPL ear canal) and predicting hearing aid output from coupler-based measures. Accurate measurements of hearing threshold, derivation of hearing aid fitting targets, and predictions of hearing aid output in the ear canal assume consistent matching of RECD coupling procedure (i.e., foam tip or earmold) with that used during assessment and in verification of the hearing aid fitting. When there is a mismatch between these coupling procedures, errors are introduced. The goal of this study was to quantify the systematic difference in measured RECD values obtained when using a foam tip versus an earmold with various tube lengths. Assuming that systematic errors exist, the second goal was to investigate the use of a foam tip to earmold correction for the purposes of improving fitting accuracy when mismatched RECD coupling conditions occur (e.g., foam tip at assessment, earmold at verification). Eighteen adults and 17 children (age range: 3-127 mo) participated in this study. Data were obtained using simulated ears of various volumes and earmold tubing lengths and from patients using their own earmolds. Derived RECD values based on simulated ear measurements were compared with RECD values obtained for adult and pediatric ears for foam tip and earmold coupling. Results indicate that differences between foam tip and earmold RECDs are consistent across test ears for adults and children, which supports the development of a correction between foam tip and earmold couplings for RECDs that can be applied across individuals. The foam tip to earmold correction values developed in this study can be used to provide improved estimations of earmold RECDs. This may support better accuracy in acoustic transforms related to transforming thresholds and/or hearing aid coupler responses to ear canal sound pressure level for the purposes of fitting behind
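
    A minimal sketch of how an RECD (with a foam-tip-to-earmold correction) might be applied when transforming coupler output to ear-canal SPL. All values below are invented for illustration; they are not the corrections derived in the study.

```python
# Hedged sketch: ear-canal SPL is estimated as coupler SPL plus the RECD
# appropriate to the coupling used at verification. Values are hypothetical.
freqs_hz = [250, 500, 1000, 2000, 4000]
coupler_output_spl = [95, 98, 102, 105, 100]       # hypothetical 2-cc coupler levels
foam_tip_recd_db = [4, 5, 7, 9, 12]                # RECD measured with a foam tip (assumed)
foam_to_earmold_corr = [-1, -1, 0, 1, 2]           # hypothetical correction values

# If the patient wears an earmold but the RECD was measured with a foam tip,
# apply the correction before transforming to ear-canal SPL.
earmold_recd_db = [r + c for r, c in zip(foam_tip_recd_db, foam_to_earmold_corr)]
ear_canal_spl = [o + r for o, r in zip(coupler_output_spl, earmold_recd_db)]

for f, spl in zip(freqs_hz, ear_canal_spl):
    print(f"{f} Hz: estimated ear-canal output {spl} dB SPL")
```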

  19. Exploring Sound with Insects

    ERIC Educational Resources Information Center

    Robertson, Laura; Meyer, John R.

    2010-01-01

    Differences in insect morphology and movement during singing provide a fascinating opportunity for students to investigate insects while learning about the characteristics of sound. In the activities described here, students use a free online computer software program to explore the songs of the major singing insects and experiment with making…

  20. Creative Sound Dramatics

    ERIC Educational Resources Information Center

    Hendrix, Rebecca; Eick, Charles

    2014-01-01

    Sound propagation is not easy for children to understand because of its abstract nature, often best represented by models such as wave drawings and particle dots. Teachers Rebecca Hendrix and Charles Eick wondered how science inquiry, when combined with an unlikely discipline like drama, could produce a better understanding among their…

  1. Creating A Choral Sound.

    ERIC Educational Resources Information Center

    Leenman, Tracy E.

    1996-01-01

    Covers a variety of strategies for creating a unique and identifiable choral sound. Provides specific instructions for developing singing in unison and recommends a standing arrangement of soprano, alto, tenor, and bass quartets. Provides other tips for instrumentation, sight reading, and quality rehearsal time. (MJP)

  2. Photoacoustic Sounds from Meteors.

    SciT

    Spalding, Richard E.; Tencer, John; Sweatt, William C.

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer's ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  3. Making Sense of Sound

    ERIC Educational Resources Information Center

    Menon, Deepika; Lankford, Deanna

    2016-01-01

    From the earliest days of their lives, children are exposed to all kinds of sound, from soft, comforting voices to the frightening rumble of thunder. Consequently, children develop their own naïve explanations largely based upon their experiences with phenomena encountered every day. When new information does not support existing conceptions,…

  4. Second sound tracking system

    NASA Astrophysics Data System (ADS)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

    It is common for a physical system to resonate at a particular frequency that depends on physical parameters, which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to utilize standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
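
    The lock-in demodulation step mentioned above can be sketched simply: multiply the received signal by in-phase and quadrature references at the drive frequency and average. The drive frequency, amplitude, and noise level below are assumed values, not parameters of the actual apparatus.

```python
# Hedged sketch of lock-in demodulation: recover amplitude and phase of a
# (synthetic) received second-sound signal at the drive frequency.
import numpy as np

fs = 50_000                       # sampling rate, Hz (assumed)
f_drive = 1_200.0                 # drive / reference frequency, Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
received = 0.02 * np.sin(2 * np.pi * f_drive * t + 0.7) + 0.01 * np.random.randn(t.size)

ref_i = np.sin(2 * np.pi * f_drive * t)
ref_q = np.cos(2 * np.pi * f_drive * t)
x = np.mean(received * ref_i) * 2.0      # in-phase component
y = np.mean(received * ref_q) * 2.0      # quadrature component

amplitude = np.hypot(x, y)
phase = np.arctan2(y, x)
print(f"Recovered amplitude ~ {amplitude:.4f}, phase ~ {phase:.2f} rad")
```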

  5. Frequency Dynamics of the First Heart Sound

    NASA Astrophysics Data System (ADS)

    Wood, John Charles

    Cardiac auscultation is a fundamental clinical tool but first heart sound origins and significance remain controversial. Previous clinical studies have implicated resonant vibrations of both the myocardium and the valves. Accordingly, the goals of this thesis were threefold: (1) to characterize the frequency dynamics of the first heart sound, (2) to determine the relative contribution of the myocardium and the valves in determining first heart sound frequency, and (3) to develop new tools for non-stationary signal analysis. A resonant origin for first heart sound generation was tested through two studies in an open-chest canine preparation. Heart sounds were recorded using ultralight acceleration transducers cemented directly to the epicardium. The first heart sound was observed to be non-stationary and multicomponent. The most dominant feature was a powerful, rapidly-rising frequency component that preceded mitral valve closure. Two broadband components were observed; the first coincided with mitral valve closure while the second significantly preceded aortic valve opening. The spatial frequency of left ventricular vibrations was both high and non-stationary which indicated that the left ventricle was not vibrating passively in response to intracardiac pressure fluctuations but suggested instead that the first heart sound is a propagating transient. In the second study, regional myocardial ischemia was induced by left coronary circumflex arterial occlusion. Acceleration transducers were placed on the ischemic and non-ischemic myocardium to determine whether ischemia produced local or global changes in first heart sound amplitude and frequency. The two zones exhibited disparate amplitude and frequency behavior indicating that the first heart sound is not a resonant phenomenon. To objectively quantify the presence and orientation of signal components, Radon transformation of the time-frequency plane was performed and found to have considerable potential for pattern

  6. Radiometric sounding system

    SciT

    Whiteman, C.D.; Anderson, G.A.; Alzheimer, J.M.

    1995-04-01

    Vertical profiles of solar and terrestrial radiative fluxes are key research needs for global climate change research. These fluxes are expected to change as radiatively active trace gases are emitted to the earth's atmosphere as a consequence of energy production and industrial and other human activities. Models suggest that changes in the concentration of such gases will lead to radiative flux divergences that will produce global warming of the earth's atmosphere. Direct measurements of the vertical variation of solar and terrestrial radiative fluxes that lead to these flux divergences have been largely unavailable because of the expense of making such measurements from airplanes. These measurements are needed to improve existing atmospheric radiative transfer models, especially under the cloudy conditions where the models have not been adequately tested. A tethered-balloon-borne Radiometric Sounding System has been developed at Pacific Northwest Laboratory to provide an inexpensive means of making routine vertical soundings of radiative fluxes in the earth's atmospheric boundary layer to altitudes up to 1500 m above ground level. Such vertical soundings would supplement measurements being made from aircraft and towers. The key technical challenge in the design of the Radiometric Sounding System is to develop a means of keeping the radiometers horizontal while the balloon ascends and descends in a turbulent atmospheric environment. This problem has been addressed by stabilizing a triangular radiometer-carrying platform that is carried on the tetherline of a balloon sounding system. The platform, carried 30 m or more below the balloon to reduce the balloon's effect on the radiometric measurements, is leveled by two automatic control loops that activate motors, gears and pulleys when the platform is off-level. The sensitivity of the automatic control loops to oscillatory motions of various frequencies and amplitudes can be adjusted using filters.

  7. About sound mufflers sound-absorbing panels aircraft engine

    NASA Astrophysics Data System (ADS)

    Dudarev, A. S.; Bulbovich, R. V.; Svirshchev, V. I.

    2016-10-01

    The article provides a formula for calculating the resonance frequency of a sound-absorbing panel with a perforated wall. Although the sound-absorbing structure is a set of Helmholtz resonators, acoustic calculations should consider the entire perforated wall panel rather than the individual resonators. An analysis shows how the size and parameters of the sound-absorbing structures affect the absorption rate.
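
    The article's specific formula is not reproduced here, but the standard resonance-frequency estimate for a perforated-panel (Helmholtz-type) absorber gives the flavor of such a calculation. Dimensions below are assumed for illustration.

```python
# Hedged sketch (the article's actual formula may differ): perforated-panel
# absorber resonance, f0 = (c / 2*pi) * sqrt(sigma / (D * t_eff)), where
# sigma is the open-area ratio, D the air-cavity depth, and
# t_eff = t + 1.6*r the hole length with end corrections.
import math

def perforated_panel_f0(c, open_area_ratio, cavity_depth, panel_thickness, hole_radius):
    t_eff = panel_thickness + 1.6 * hole_radius      # end-corrected neck length
    return (c / (2.0 * math.pi)) * math.sqrt(open_area_ratio / (cavity_depth * t_eff))

# Hypothetical panel: 5% open area, 30 mm cavity, 1 mm facing, 1 mm hole radius
print(f"f0 ~ {perforated_panel_f0(343.0, 0.05, 0.030, 0.001, 0.001):.0f} Hz")
```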

  8. Sounds of the Ancient Universe

    2013-03-21

    Tones represent sound waves that traveled through the early universe and were later detected by ESA's Planck space telescope. The primordial sound waves have been translated into frequencies we can hear.

  9. On Sound Reflection in Superfluid

    NASA Astrophysics Data System (ADS)

    Melnikovsky, L. A.

    2008-02-01

    We consider reflection of first and second sound waves by a rigid flat wall in superfluid. A nontrivial dependence of the reflection coefficients on the angle of incidence is obtained. Sound conversion is predicted at slanted incidence.

  10. Explanatory model for sound amplification in a stethoscope

    NASA Astrophysics Data System (ADS)

    Eshach, H.; Volfson, A.

    2015-01-01

    In the present paper we suggest an original physical explanatory model that explains the mechanism of the sound amplification process in a stethoscope. We discuss the amplification of a single pulse, a continuous wave of certain frequency, and finally we address the resonant frequencies. It is our belief that this model may provide students with opportunities to not only better understand the amplification mechanism of a stethoscope, but also to strengthen their understanding of sound, pressure, waves, resonance modes, etc.

  11. Optimization of Sound Absorbers Number and Placement in an Enclosed Room by Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Lau, S. F.; Zainulabidin, M. H.; Yahya, M. N.; Zaman, I.; Azmir, N. A.; Madlan, M. A.; Ismon, M.; Kasron, M. Z.; Ismail, A. E.

    2017-10-01

    Giving a room proper acoustic treatment is both art and science. Acoustic design brings comfort to the built environment and reduces noise levels by using sound absorbers. A room needs acoustic treatment with installed absorbers in order to decrease the reverberant sound. However, absorbers are usually expensive to purchase and install, and there is no system to determine the optimum number and placement of sound absorbers. Treating a room with too many absorbers is wasteful, while treating it with too few gives improper treatment. This study aims to determine the number of sound absorbers needed and the optimum locations for their placement in order to reduce the overall sound pressure level in a specified room, using ANSYS APDL software. The area of sound absorbers needed is found to be 11 m² by using the Sabine equation, and different sets of absorbers are applied to the walls, each with the same total area, to investigate the best configurations. All three sets (a single absorber, 11 absorbers, and 44 absorbers) successfully treated the room by reducing the overall sound pressure level. The greatest reduction in overall sound pressure level was achieved by 44 absorbers evenly distributed around the walls, which reduced the level by as much as 24.2 dB; the least effective configuration was the single absorber, which reduced the overall sound pressure level by 18.4 dB.
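
    The Sabine-equation sizing step mentioned above can be sketched as follows; the room volume, target reverberation time, and absorption coefficient are assumed values, not those of the paper.

```python
# Hedged sketch: total absorption needed for a target reverberation time is
# A = 0.161 * V / RT60 (Sabine), and the absorber area follows from the
# material's absorption coefficient.
def required_absorber_area(volume_m3, target_rt60_s, alpha_material, existing_absorption_m2=0.0):
    total_absorption = 0.161 * volume_m3 / target_rt60_s   # metric sabins (m^2)
    return (total_absorption - existing_absorption_m2) / alpha_material

# Hypothetical 5 m x 4 m x 3 m room, target RT60 of 0.8 s, absorber alpha = 0.8
print(f"Absorber area needed: {required_absorber_area(60.0, 0.8, 0.8):.1f} m^2")
```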

  12. Just How Does Sound Wave?

    ERIC Educational Resources Information Center

    Shipman, Bob

    2006-01-01

    When children first hear the term "sound wave" perhaps they might associate it with the way a hand waves or perhaps the squiggly line image on a television monitor when sound recordings are being made. Research suggests that children tend to think sound somehow travels as a discrete package, a fast-moving invisible thing, and not something that…

  13. Sounds Alive: A Noise Workbook.

    ERIC Educational Resources Information Center

    Dickman, Donna McCord

    Sarah Screech, Danny Decibel, Sweetie Sound and Neil Noisy describe their experiences in the world of sound and noise to elementary students. Presented are their reports, games and charts which address sound measurement, the effects of noise on people, methods of noise control, and related areas. The workbook is intended to stimulate students'…

  14. THE SOUND PATTERN OF ENGLISH.

    ERIC Educational Resources Information Center

    CHOMSKY, NOAM; HALLE, MORRIS

    "THE SOUND PATTERN OF ENGLISH" PRESENTS A THEORY OF SOUND STRUCTURE AND A DETAILED ANALYSIS OF THE SOUND STRUCTURE OF ENGLISH WITHIN THE FRAMEWORK OF GENERATIVE GRAMMAR. IN THE PREFACE TO THIS BOOK THE AUTHORS STATE THAT THEIR "WORK IN THIS AREA HAS REACHED A POINT WHERE THE GENERAL OUTLINES AND MAJOR THEORETICAL PRINCIPLES ARE FAIRLY CLEAR" AND…

  15. Data sonification and sound visualization.

    SciT

    Kaper, H. G.; Tipei, S.; Wiebel, E.

    1999-07-01

    Sound can help us explore and analyze complex data sets in scientific computing. The authors describe a digital instrument for additive sound synthesis (Diass) and a program to visualize sounds in a virtual reality environment (M4Cave). Both are part of a comprehensive music composition environment that includes additional software for computer-assisted composition and automatic music notation.

  16. Wood for sound.

    PubMed

    Wegst, Ulrike G K

    2006-10-01

    The unique mechanical and acoustical properties of wood and its aesthetic appeal still make it the material of choice for musical instruments and the interior of concert halls. Worldwide, several hundred wood species are available for making wind, string, or percussion instruments. Over generations, first by trial and error and more recently by a scientific approach, the most appropriate species were found for each instrument and application. Using material property charts on which acoustic properties such as the speed of sound, the characteristic impedance, the sound radiation coefficient, and the loss coefficient are plotted against one another for woods, we analyze and explain why spruce is the preferred choice for soundboards, why tropical species are favored for xylophone bars and woodwind instruments, why violinists still prefer pernambuco over other species as a bow material, and why hornbeam and birch are used in piano actions.
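
    The acoustic properties named above follow from Young's modulus E and density rho. The sketch below evaluates them for rough, assumed spruce-like values; the numbers are illustrative only.

```python
# Hedged sketch of the acoustic material properties named above:
#   speed of sound c = sqrt(E/rho), characteristic impedance z = sqrt(E*rho),
#   sound radiation coefficient R = sqrt(E/rho^3).
import math

def acoustic_properties(E_pa, rho_kg_m3):
    c = math.sqrt(E_pa / rho_kg_m3)            # speed of sound along the grain, m/s
    z = math.sqrt(E_pa * rho_kg_m3)            # characteristic impedance, Pa*s/m
    R = math.sqrt(E_pa / rho_kg_m3**3)         # sound radiation coefficient, m^4/(kg*s)
    return c, z, R

c, z, R = acoustic_properties(E_pa=10e9, rho_kg_m3=420.0)   # rough spruce-like values (assumed)
print(f"c ~ {c:.0f} m/s, z ~ {z/1e6:.2f} MPa*s/m, R ~ {R:.1f} m^4/(kg*s)")
```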

  17. Environmentally sound manufacturing

    NASA Technical Reports Server (NTRS)

    Caddy, Larry A.; Bowman, Ross; Richards, Rex A.

    1994-01-01

    The NASA/Thiokol/industry team has developed and started implementation of an environmentally sound manufacturing plan for the continued production of solid rocket motors. They have worked with other industry representatives and the U.S. Environmental Protection Agency to prepare a comprehensive plan to eliminate all ozone depleting chemicals from manufacturing processes and to reduce the use of other hazardous materials used to produce the space shuttle reusable solid rocket motors. The team used a classical approach for problem solving combined with a creative synthesis of new approaches to attack this problem. As our ability to gather data on the state of the Earth's environmental health increases, environmentally sound manufacturing must become an integral part of the business decision making process.

  18. Sound exposure during outdoor music festivals.

    PubMed

    Tronstad, Tron V; Gelderblom, Femke B

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concert and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure.

  19. Sound Exposure During Outdoor Music Festivals

    PubMed Central

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, only one of which was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410

  20. The Sounds of Space

    NASA Astrophysics Data System (ADS)

    Gurnett, Donald

    2009-11-01

    The popular concept of space is that it is a vacuum, with nothing of interest between the stars, planets, moons and other astronomical objects. In fact most of space is permeated by plasma, sometimes quite dense, as in the solar corona and planetary ionospheres, and sometimes quite tenuous, as in planetary radiation belts. Even less well known is that these space plasmas support and produce an astonishingly large variety of waves, the "sounds of space." In this talk I will give you a tour of these space sounds, starting with the very early discovery of "whistlers" nearly a century ago, and proceeding through my nearly fifty years of research on space plasma waves using spacecraft-borne instrumentation. In addition to being of scientific interest, some of these sounds can even be described as "musical," and have served as the basis for various musical compositions, including a production called "Sun Rings," written by the well-known composer Terry Riley, that has been performed by the Kronos Quartet to audiences all around the world.

  1. Effect of Diving and Diving Hoods on the Bacterial Flora of the External Ear Canal and Skin

    DTIC Science & Technology

    1982-05-01

    Numbers in parentheses indicate the number of sites tested. One strain was isolated from a skin laceration exposed to water. Diver developed external otitis media 5... otitis media (11), skin infections (6), and diarrheal diseases (10). One aspect of... skin of wearing diving hoods in and out of the water. We...

  2. A "Goldilocks" Approach to Hearing Aid Self-Fitting: Ear-Canal Output and Speech Intelligibility Index.

    PubMed

    Mackersie, Carol; Boothroyd, Arthur; Lithgow, Alexandra

    2018-06-11

    The objective was to determine self-adjusted output response and speech intelligibility index (SII) in individuals with mild to moderate hearing loss and to measure the effects of prior hearing aid experience. Thirteen hearing aid users and 13 nonusers, with similar group-mean pure-tone thresholds, listened to prerecorded and preprocessed sentences spoken by a man. Starting with a generic level and spectrum, participants adjusted (1) overall level, (2) high-frequency boost, and (3) low-frequency cut. Participants took a speech perception test after an initial adjustment before making a final adjustment. The three self-selected parameters, along with individual thresholds and real-ear-to-coupler differences, were used to compute output levels and SIIs for the starting and two self-adjusted conditions. The values were compared with an NAL second nonlinear threshold-based prescription (NAL-NL2) and, for the hearing aid users, performance of their existing hearing aids. All participants were able to complete the self-adjustment process. The generic starting condition provided outputs (between 2 and 8 kHz) and SIIs that were significantly below those prescribed by NAL-NL2. Both groups increased SII to values that were not significantly different from prescription. The hearing aid users, but not the nonusers, increased high-frequency output and SII significantly after taking the speech perception test. Seventeen of the 26 participants (65%) met an SII criterion of 60% under the generic starting condition. The proportion increased to 23 out of 26 (88%) after the final self-adjustment. Of the 13 hearing aid users, 8 (62%) met the 60% criterion with their existing hearing aids. With the final self-adjustment, 12 out of 13 (92%) met this criterion. The findings support the conclusion that user self-adjustment of basic amplification characteristics can be both feasible and effective with or without prior hearing aid experience.

  3. Effects of a sensory branch to the posterior external ear canal: coughing, pain, Ramsay Hunt's syndrome and Hitselberger's sign.

    PubMed

    Mulazimoglu, S; Flury, R; Kapila, S; Linder, T

    2017-04-01

    A distinct nerve innervating the external auditory canal can often be identified in close relation to the facial nerve when gradually thinning the posterior canal wall. This nerve has been attributed to coughing during cerumen removal, neuralgic pain, Hitselberger's sign and vesicular eruptions described in Ramsay Hunt's syndrome. This study aimed to demonstrate the origin and clinical impact of this nerve. In patients with intractable otalgia or severe coughing whilst inserting a hearing aid, who responded temporarily to local anaesthesia, the symptoms could be resolved by sectioning a sensory branch to the posterior canal. In a temporal bone specimen, it was revealed that this nerve is predominantly a continuation of Arnold's nerve, also receiving fibres from the glossopharyngeal nerve and facial nerve. Histologically, the communicating branch from the facial nerve was confirmed. Surgeons should be aware of the posterior auricular sensory branch and its clinical implications.

  4. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based just on acoustics. It is a multisystem process.

  5. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 1 2013-10-01 2013-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  6. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 1 2011-10-01 2011-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  7. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 1 2014-10-01 2014-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  8. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 1 2012-10-01 2012-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  9. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  10. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  11. Characteristic sounds facilitate visual search

    PubMed Central

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2009-01-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  12. Blood Pressure Checker

    NASA Technical Reports Server (NTRS)

    1979-01-01

    An estimated 30 million people in the United States have high blood pressure, or hypertension. But a great many of them are unaware of it because hypertension, in its initial stages, displays no symptoms. Thus, the simply-operated blood pressure checking devices now widely located in public places are useful health aids. The one pictured above, called Medimax 30, is a direct spinoff from NASA technology developed to monitor astronauts in space. For manned space flights, NASA wanted a compact, highly-reliable, extremely accurate method of checking astronauts' blood pressure without the need for a physician's interpretive skill. NASA's Johnson Space Center and Technology, Inc., a contractor, developed an electronic sound processor that automatically analyzes blood flow sounds to get both systolic (contracting arteries) and diastolic (expanding arteries) blood pressure measurements. NASA granted a patent license for this technology to Advanced Life Sciences, Inc., New York City, manufacturers of Medimax 30.

  13. Sound production on a "coaxial saxophone".

    PubMed

    Doc, J-B; Vergez, C; Guillemain, P; Kergomard, J

    2016-11-01

    Sound production on a "coaxial saxophone" is investigated experimentally. The coaxial saxophone is a variant of the cylindrical saxophone made up of two tubes mounted in parallel, which can be seen as a low-frequency analogy of a truncated conical resonator with a mouthpiece. Initially developed for the purposes of theoretical analysis, an experimental verification of the analogy between conical and cylindrical saxophones has never been reported. The present paper explains why the volume of the cylindrical saxophone mouthpiece limits the achievement of a good playability. To limit the mouthpiece volume, a coaxial alignment of pipes is proposed and a prototype of coaxial saxophone is built. An impedance model of coaxial resonator is proposed and validated by comparison with experimental data. Sound production is also studied through experiments with a blowing machine. The playability of the prototype is then assessed and proven for several values of the blowing pressure, of the embouchure parameter, and of the instrument's geometrical parameters.

  14. Method of fan sound mode structure determination

    NASA Technical Reports Server (NTRS)

    Pickett, G. F.; Sofrin, T. G.; Wells, R. W.

    1977-01-01

    A method for the determination of fan sound mode structure in the inlet of turbofan engines using in-duct acoustic pressure measurements is presented. The method is based on the simultaneous solution of a set of equations whose unknowns are modal amplitude and phase. A computer program for the solution of the equation set was developed. An additional computer program was developed to calculate microphone locations whose use yields an equation set that does not give rise to numerical instabilities. In addition to the development of a method for determination of coherent modal structure, experimental and analytical approaches are developed for the determination of the amplitude frequency spectrum of randomly generated sound modes for use in narrow annulus ducts. Two approaches are defined: one based on the use of cross-spectral techniques and the other based on the use of an array of microphones.
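
    The core of the method above is a linear inverse problem: the complex pressures measured at several microphone positions are written as a sum of duct modes with unknown complex amplitudes, and the resulting set of equations is solved simultaneously. The following Python sketch illustrates that step for a purely circumferential mode set; the mode orders, microphone angles, and "measured" pressures are hypothetical and not taken from the report.

      import numpy as np

      # Hypothetical example: recover complex amplitudes of circumferential duct modes
      # m = -2..+2 from complex pressures at six wall microphones.
      modes = np.arange(-2, 3)                         # assumed circumferential orders
      theta = np.deg2rad([0, 30, 75, 140, 200, 290])   # assumed microphone angles

      # p(theta_k) = sum_m A_m * exp(i * m * theta_k)
      E = np.exp(1j * np.outer(theta, modes))

      # Synthesize "measured" pressures from known amplitudes, then solve for them.
      A_true = np.array([0.10, 0.00, 1.00, 0.30 + 0.20j, 0.05])
      p_meas = E @ A_true

      # Least-squares solution; a good microphone layout keeps E well conditioned,
      # which is the role of the microphone-placement program described above.
      A_est, *_ = np.linalg.lstsq(E, p_meas, rcond=None)
      print(np.round(A_est, 3))
      print("condition number:", np.linalg.cond(E))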

  15. Popcorn: critical temperature, jump and sound.

    PubMed

    Virot, Emmanuel; Ponomarenko, Alexandre

    2015-03-06

    Popcorn bursts open, jumps and emits a 'pop' sound in some hundredths of a second. The physical origin of these three observations remains unclear in the literature. We show that the critical temperature 180°C at which almost all of popcorn pops is consistent with an elementary pressure vessel scenario. We observe that popcorn jumps with a 'leg' of starch which is compressed on the ground. As a result, popcorn is midway between two categories of moving systems: explosive plants using fracture mechanisms and jumping animals using muscles. By synchronizing video recordings with acoustic recordings, we propose that the familiar 'pop' sound of the popcorn is caused by the release of water vapour. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
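
    The "elementary pressure vessel scenario" invoked above turns on the saturated vapour pressure of water at the popping temperature. The rough check below uses a commonly tabulated high-temperature Antoine fit for water; the coefficients are approximate reference values, not figures from the paper.

      def water_vapor_pressure_bar(T_kelvin):
          """Antoine equation for water, high-temperature fit (roughly 380-570 K).
          Coefficients are a commonly tabulated set; treat them as approximate."""
          A, B, C = 3.55959, 643.748, -198.043
          return 10 ** (A - B / (T_kelvin + C))

      T = 180 + 273.15                  # reported critical popping temperature
      p = water_vapor_pressure_bar(T)   # saturated steam pressure inside the kernel
      print(f"{p:.1f} bar (~{0.1 * p:.2f} MPa)")

    The result, roughly 10 bar of steam, is the order of internal pressure that a simple pressure-vessel argument compares against the strength of the pericarp.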

  16. Popcorn: critical temperature, jump and sound

    PubMed Central

    Virot, Emmanuel; Ponomarenko, Alexandre

    2015-01-01

    Popcorn bursts open, jumps and emits a ‘pop’ sound in some hundredths of a second. The physical origin of these three observations remains unclear in the literature. We show that the critical temperature 180°C at which almost all of popcorn pops is consistent with an elementary pressure vessel scenario. We observe that popcorn jumps with a ‘leg’ of starch which is compressed on the ground. As a result, popcorn is midway between two categories of moving systems: explosive plants using fracture mechanisms and jumping animals using muscles. By synchronizing video recordings with acoustic recordings, we propose that the familiar ‘pop’ sound of the popcorn is caused by the release of water vapour. PMID:25673298

  17. Sound from apollo rockets in space.

    PubMed

    Cotten, D; Donn, W L

    1971-02-12

    Low-frequency sound has been recorded on at least two occasions in Bermuda with the passage of Apollo rocket vehicles 188 kilometers aloft. The signals, which are reminiscent of N-waves from sonic booms, (i) are horizontally coherent; (ii) have extremely high (supersonic) trace velocities across the tripartite arrays; (iii) have nearly identical appearance and frequencies; (iv) have essentially identical arrival times after rocket launch; and (v) are the only coherent signals recorded over many hours. These observations seem to establish that the recorded sound comes from the rockets at high elevation. Despite this high elevation, the values of surface pressure appear to be explainable on the basis of a combination of a kinetic theory approach to shock formation in rarefied atmospheres with established gas-dynamics shock theory.

  18. Cell type-specific suppression of mechanosensitive genes by audible sound stimulation.

    PubMed

    Kumeta, Masahiro; Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H

    2018-01-01

    Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through the substances. To investigate the property of the sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound which transmits approximately 10 mPa pressure to the cultured cells. Based on research on mechanotransduction and ultrasound effects on cells, gene responses to the audible sound stimulation were analyzed by varying several sound parameters: frequency, wave form, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct suppressive effect for several mechanosensitive and ultrasound-sensitive genes that were triggered by sounds. The effect was clearly observed in a wave form- and pressure level-specific manner, rather than the frequency, and persisted for several hours. At least two mechanisms are likely to be involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were partially and NB2a neuroblastoma cells were completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound.

  19. Cell type-specific suppression of mechanosensitive genes by audible sound stimulation

    PubMed Central

    Kumeta, Masahiro; Takahashi, Daiji; Takeyasu, Kunio; Yoshimura, Shige H.

    2018-01-01

    Audible sound is a ubiquitous environmental factor in nature that transmits oscillatory compressional pressure through the substances. To investigate the property of the sound as a mechanical stimulus for cells, an experimental system was set up using 94.0 dB sound which transmits approximately 10 mPa pressure to the cultured cells. Based on research on mechanotransduction and ultrasound effects on cells, gene responses to the audible sound stimulation were analyzed by varying several sound parameters: frequency, wave form, composition, and exposure time. Real-time quantitative PCR analyses revealed a distinct suppressive effect for several mechanosensitive and ultrasound-sensitive genes that were triggered by sounds. The effect was clearly observed in a wave form- and pressure level-specific manner, rather than the frequency, and persisted for several hours. At least two mechanisms are likely to be involved in this sound response: transcriptional control and RNA degradation. ST2 stromal cells and C2C12 myoblasts exhibited a robust response, whereas NIH3T3 cells were partially and NB2a neuroblastoma cells were completely insensitive, suggesting a cell type-specific response to sound. These findings reveal a cell-level systematic response to audible sound and uncover novel relationships between life and sound. PMID:29385174

  20. A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene

    PubMed Central

    Tao, Lu-Qi; Liu, Ying; Ju, Zhen-Yi; Tian, He; Xie, Qian-Yi; Yang, Yi; Ren, Tian-Ling

    2016-01-01

    A flexible sound source is essential in a whole flexible system. It’s hard to integrate a conventional sound source based on a piezoelectric part into a whole flexible system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in a 360-degree direction. The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will be widely applied in consumer electronics, multi-media systems, and ultrasonic detection and imaging. PMID:28335239

  1. Enhancing maximum measurable sound reduction index using sound intensity method and strong receiving room absorption.

    PubMed

    Hongisto, V; Lindgren, M; Keränen, J

    2001-01-01

    The sound intensity method is usually recommended instead of the pressure method in the presence of strong flanking transmission. Especially when small and/or heavy specimens are tested, the flanking often causes problems in laboratories practicing only the pressure method. The purpose of this study was to determine experimentally the difference between the maximum sound reduction indices obtained by the intensity method, RI,max, and by the pressure method, Rmax. In addition, the influence of adding absorption to the receiving room was studied. The experiments were carried out in an ordinary two-room test laboratory. The exact value of RI,max was estimated by applying a fitting equation to the measured data points. The fitting equation involved the dependence of the pressure-intensity indicator on measured acoustical parameters. In an empty receiving room, the difference between RI,max and Rmax was 4-15 dB, depending on frequency. When the average reverberation time was reduced from 3.5 to 0.6 s, the values of RI,max increased by 2-10 dB compared to the results in the empty room. Thus, it is possible to measure wall structures having a 9-22 dB better sound reduction index using the intensity method than with the pressure method. This facilitates the measurement of small and/or heavy specimens in the presence of flanking. Moreover, when new laboratories are designed, the intensity method is an alternative to the pressure method, which presupposes expensive isolation structures between the rooms.
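
    For orientation, the pressure method referred to above computes the sound reduction index from the room-to-room level difference plus a correction for specimen area and receiving-room absorption, R = L1 - L2 + 10*log10(S/A), where A is commonly estimated from Sabine's formula. The sketch below uses made-up numbers purely to illustrate the formulas; it is not a reconstruction of the study's measurements.

      import math

      def sabine_absorption_area(volume_m3, rt60_s):
          """Equivalent absorption area A (m^2) from Sabine's formula, A ~ 0.161*V/T."""
          return 0.161 * volume_m3 / rt60_s

      def reduction_index_pressure_method(L1_dB, L2_dB, S_m2, A_m2):
          """Pressure-method sound reduction index: R = L1 - L2 + 10*log10(S/A)."""
          return L1_dB - L2_dB + 10 * math.log10(S_m2 / A_m2)

      # Hypothetical strongly absorbent receiving room (0.6 s reverberation time).
      A = sabine_absorption_area(volume_m3=60.0, rt60_s=0.6)
      print(round(reduction_index_pressure_method(95.0, 48.0, S_m2=10.0, A_m2=A), 1))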

  2. Sound Levels and Risk Perceptions of Music Students During Classes.

    PubMed

    Rodrigues, Matilde A; Amorim, Marta; Silva, Manuela V; Neves, Paula; Sousa, Aida; Inácio, Octávio

    2015-01-01

    It is well recognized that professional musicians are at risk of hearing damage due to the exposure to high sound pressure levels during music playing. However, it is important to recognize that the musicians' exposure may start early in the course of their training as students in the classroom and at home. Studies regarding sound exposure of music students and their hearing disorders are scarce and do not take into account important influencing variables. Therefore, this study aimed to describe sound level exposures of music students at different music styles, classes, and according to the instrument played. Further, this investigation attempted to analyze the perceptions of students in relation to exposure to loud music and consequent health risks, as well as to characterize preventive behaviors. The results showed that music students are exposed to high sound levels in the course of their academic activity. This exposure is potentiated by practice outside the school and other external activities. Differences were found between music style, instruments, and classes. Tinnitus, hyperacusis, diplacusis, and sound distortion were reported by the students. However, students were not entirely aware of the health risks related to exposure to high sound pressure levels. These findings reflect the importance of starting intervention in relation to noise risk reduction at an early stage, when musicians are commencing their activity as students.

  3. Speed of Sound and Ultrasound Absorption in Ionic Liquids.

    PubMed

    Dzida, Marzena; Zorębski, Edward; Zorębski, Michał; Żarska, Monika; Geppert-Rybczyńska, Monika; Chorążewski, Mirosław; Jacquemin, Johan; Cibulka, Ivan

    2017-03-08

    A complete review of the literature data on the speed of sound and ultrasound absorption in pure ionic liquids (ILs) is presented. Apart from the analysis of data published to date, the significance of the speed of sound in ILs is considered. An analysis of experimental methods described in the literature to determine the speed of sound in ILs as a function of temperature and pressure is reported, and the relevance of ultrasound absorption in acoustic investigations is discussed. Careful attention was paid to highlighting possible artifacts and side phenomena related to absorption and relaxation present in such measurements. Then, an overview of existing data is given to describe the temperature and pressure dependences of the speed of sound in ILs, as well as the impact of impurities in ILs on this property. A relation between ion structure and the speed of sound is presented by highlighting existing correlation and evaluative methods described in the literature. Importantly, a critical analysis of speeds of sound in ILs versus those in classical molecular solvents is presented to compare these two classes of compounds. The last part presents the importance of acoustic investigations for chemical engineering design and possible industrial applications of ILs.
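
    One concrete reason speed-of-sound data matter, alluded to above, is that together with density they yield the isentropic compressibility through the Newton-Laplace relation, kappa_s = 1/(rho*u^2). The values in the sketch below are assumed, order-of-magnitude numbers for a room-temperature IL and are not taken from the review.

      def isentropic_compressibility(density_kg_m3, speed_of_sound_m_s):
          """Newton-Laplace relation: kappa_s = 1 / (rho * u^2), in 1/Pa."""
          return 1.0 / (density_kg_m3 * speed_of_sound_m_s ** 2)

      # Assumed illustrative values: rho ~ 1430 kg/m^3, u ~ 1570 m/s at 25 degrees C.
      print(isentropic_compressibility(1430.0, 1570.0))   # ~2.8e-10 1/Pa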

  4. Comprehensive measures of sound exposures in cinemas using smart phones.

    PubMed

    Huth, Markus E; Popelka, Gerald R; Blevins, Nikolas H

    2014-01-01

    Sensorineural hearing loss from sound overexposure has a considerable prevalence. Identification of sound hazards is crucial, as prevention, due to a lack of definitive therapies, is the sole alternative to hearing aids. One subjectively loud, yet little studied, potential sound hazard is movie theaters. This study uses smart phones to evaluate their applicability as a widely available, validated sound pressure level (SPL) meter. Therefore, this study measures sound levels in movie theaters to determine whether sound levels exceed safe occupational noise exposure limits and whether sound levels in movie theaters differ as a function of movie, movie theater, presentation time, and seat location within the theater. Six smart phones with an SPL meter software application were calibrated with a precision SPL meter and validated as an SPL meter. Additionally, three different smart phone generations were measured in comparison to an integrating SPL meter. Two different movies, an action movie and a children's movie, were measured six times each in 10 different venues (n = 117). To maximize representativeness, movies were selected focusing on large release productions with probable high attendance. Movie theaters were selected in the San Francisco, CA, area based on whether they screened both chosen movies and to represent the largest variety of theater proprietors. Measurements were analyzed in regard to differences between theaters, location within the theater, movie, as well as presentation time and day as indirect indicator of film attendance. The smart phone measurements demonstrated high accuracy and reliability. Overall, sound levels in movie theaters do not exceed safe exposure limits by occupational standards. Sound levels vary significantly across theaters and demonstrated statistically significant higher sound levels and exposures in the action movie compared to the children's movie. Sound levels decrease with distance from the screen. However, no influence on

  5. Sound of photosynthesis

    SciT

    Amato, I.

    1989-01-01

    The beauty of photosynthesis runs deep into its physicochemical details, many of which continue to elude scientific understanding. One of the big unsolved mysteries of photosynthesis is how the oxygen molecules are made, remarks David Mauzerall, a biophysicist at Rockefeller University in New York City. He and his colleagues, Ora Canaani and Shmuel Malkin, both biochemists at the Weizmann Institute of Science in Rehovot, Israel, are shining some light on this mystery. Using a technique called pulsed photoacoustic spectroscopy, the three researchers have eavesdropped on some of the intimate details of oxygen evolution. You can now hear the sound of oxygen coming out of the leaves, Mauzerall said in an interview. Mauzerall and co-workers reported their work last summer in the Proceedings of the National Academy of Sciences. Did he say hear oxygen? As its name implies, photoacoustic spectroscopy is a sound-from-light technique. It is especially suited for getting spectra from samples like leaves that mess up the incident light so badly that even scattering- or reflection-based spectroscopic methods usually can't reveal much about the plant's chemical personality.

  6. A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)

    1996-01-01

    The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic-analogy-based calculation which regards the surface shear as an acoustically compact dipole source of sound.

  7. Sounds like Team Spirit

    NASA Technical Reports Server (NTRS)

    Hoffman, Edward

    2002-01-01

    I recently accompanied my son Dan to one of his guitar lessons. As I sat in a separate room, I focused on the music he was playing and the beautiful, robust sound that comes from a well-played guitar. Later that night, I woke up around 3 am. I tend to have my best thoughts at this hour. The trouble is I usually roll over and fall back asleep. This time I was still awake an hour later, so I got up and jotted some notes down in my study. I was thinking about the pure, honest sound of a well-played instrument. From there my mind wandered into the realm of high-performance teams and successful projects. (I know this sounds weird, but this is the sort of thing I think about at 3 am. Maybe you have your own weird thoughts around that time.) Consider a team in relation to music. It seems to me that a crack team can achieve a beautiful, perfect unity in the same way that a band of brilliant musicians can when they're in harmony with one another. With more than a little satisfaction I have to admit, I started to think about the great work performed for you by the Knowledge Sharing team, including this magazine you are reading. Over the past two years I personally have received some of my greatest pleasures as the APPL Director from the Knowledge Sharing activities - the Masters Forums, NASA Center visits, ASK Magazine. The Knowledge Sharing team expresses such passion for their work, just like great musicians convey their passion in the music they play. In the case of Knowledge Sharing, there are many factors that have made this so enjoyable (and hopefully worthwhile for NASA). Three ingredients come to mind -- ingredients that have produced a signature sound. First, through the crazy, passionate playing of Alex Laufer, Michelle Collins, Denise Lee, and Todd Post, I always know that something startling and original is going to come out of their activities. This team has consistently done things that are unique and innovative. For me, best of all is that they are always

  8. Analysis of environmental sounds

    NASA Astrophysics Data System (ADS)

    Lee, Keansub

    Environmental sound archives - casual recordings of people's daily life - are easily collected by MP3 players or camcorders with low cost and high reliability, and shared on websites. There are two kinds of user-generated recordings we would like to be able to handle in this thesis: continuous long-duration personal audio and soundtracks of short consumer video clips. These environmental recordings contain a lot of useful information (semantic concepts) related to activity, location, occasion and content. As a consequence, these environmental archives present many new opportunities for the automatic extraction of information that can be used in intelligent browsing systems. This thesis proposes systems for detecting these interesting concepts in a collection of these real-world recordings. The first system segments and labels personal audio archives - continuous recordings of an individual's everyday experiences - into 'episodes' (relatively consistent acoustic situations lasting a few minutes or more) using the Bayesian Information Criterion and spectral clustering. The second system identifies regions of speech or music in the kinds of energetic and highly variable noise present in this real-world sound. Motivated by psychoacoustic evidence that pitch is crucial in the perception and organization of sound, we develop a noise-robust pitch detection algorithm to locate speech- or music-like regions. To avoid false alarms resulting from background noise with strong periodic components (such as air-conditioning), a new scheme is added to suppress these noises in the autocorrelogram domain. In addition, the third system automatically detects a large set of interesting semantic concepts, which we chose for being both informative and useful to users, as well as being technically feasible. These 25 concepts are associated with people's activities, locations, occasions, objects, scenes and sounds, and are based on a large collection of
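
    The noise-robust pitch detection mentioned above builds on the autocorrelogram. As a bare-bones point of reference, a single-frame autocorrelation pitch estimator might look like the sketch below; the frame length, search range, and periodicity threshold are illustrative choices, not the thesis parameters.

      import numpy as np

      def autocorr_pitch(frame, fs, fmin=60.0, fmax=400.0):
          """Crude autocorrelation pitch estimate for one frame; None if unvoiced-looking."""
          frame = frame - np.mean(frame)
          ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
          ac /= ac[0] + 1e-12                         # normalize by zero-lag energy
          lo, hi = int(fs / fmax), int(fs / fmin)     # lag range for the pitch band
          lag = lo + int(np.argmax(ac[lo:hi]))
          return fs / lag if ac[lag] > 0.3 else None  # illustrative periodicity threshold

      # Synthetic test: a 220 Hz tone buried in noise.
      fs = 16000
      t = np.arange(0, 0.05, 1 / fs)
      x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.random.randn(t.size)
      print(autocorr_pitch(x, fs))                    # close to 220 Hz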

  9. Sound therapies for tinnitus management.

    PubMed

    Jastreboff, Margaret M

    2007-01-01

    Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustical surroundings; it is more intrusive in silence and less profound in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocol and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter will review therapies for tinnitus using sound stimulation.

  10. A cochlear-bone wave can yield a hearing sensation as well as otoacoustic emission

    PubMed Central

    Tchumatchenko, Tatjana; Reichenbach, Tobias

    2014-01-01

    A hearing sensation arises when the elastic basilar membrane inside the cochlea vibrates. The basilar membrane is typically set into motion through airborne sound that displaces the middle ear and induces a pressure difference across the membrane. A second, alternative pathway exists, however: stimulation of the cochlear bone vibrates the basilar membrane as well. This pathway, referred to as bone conduction, is increasingly used in headphones that bypass the ear canal and the middle ear. Furthermore, otoacoustic emissions, sounds generated inside the cochlea and emitted therefrom, may not involve the usual wave on the basilar membrane, suggesting that additional cochlear structures are involved in their propagation. Here we describe a novel propagation mode within the cochlea that emerges through deformation of the cochlear bone. Through a mathematical and computational approach we demonstrate that this propagation mode can explain bone conduction as well as numerous properties of otoacoustic emissions. PMID:24954736

  11. Peripheral and central auditory specialization in a gliding marsupial, the feathertail glider, Acrobates pygmaeus.

    PubMed

    Aitkin, L M; Nelson, J E

    1989-01-01

    Two specialized features are described in the auditory system of Acrobates pygmaeus, a small gliding marsupial. Firstly, the ear canal includes a transverse disk of bone that partly occludes the canal near the eardrum. The resultant narrow-necked chamber above the eardrum appears to attenuate sound across a broad frequency range, except at 27-29 kHz at which a net gain of sound pressure occurs. Secondly, the lateral medulla is hypertrophied at the level of the cochlear nucleus, forming a massive lateral lobe comprised of multipolar cells and granule cells. This lobe has connections with the auditory nerve and the cerebellum. Speculations are advanced about the functions of these structures in gliding behaviour and predator avoidance.

  12. Discovery of Sound in the Sea (DOSITS) Website Development

    DTIC Science & Technology

    2013-03-04

    Site topics listed under Science of Sound > Sounds in the Sea include: How does marine life affect ocean sound levels? How will ocean acidification affect ocean sound levels? How does shipping affect ocean sound levels? ...

  13. Musical Sound, Instruments, and Equipment

    NASA Astrophysics Data System (ADS)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  14. Accuracy of assessing the level of impulse sound from distant sources.

    PubMed

    Wszołek, Tadeusz; Kłaczyński, Maciej

    2007-01-01

    Impulse sound events are characterised by ultra-high pressures and low frequencies. Lower frequency sounds are generally less attenuated over a given distance in the atmosphere than higher frequencies. Thus, impulse sounds can be heard over greater distances and are more affected by the environment. To calculate a long-term average immission level it is necessary to apply weighting factors such as the probability of the occurrence of each weather condition during the relevant time period. This means that when measuring impulse noise at a long distance it is necessary to monitor environmental parameters at many points along the path the sound travels and to maintain a long-term database of sound transfer functions. The paper analyses the uncertainty of immission measurement results of impulse sound from cladding and destroying explosive materials. The influence of environmental conditions on the path the sound travels is the focus of this paper.

  15. Application of a finite-element model to low-frequency sound insulation in dwellings.

    PubMed

    Maluski, S P; Gibbs, B M

    2000-10-01

    The sound transmission between adjacent rooms has been modeled using a finite-element method. Predicted sound-level difference gave good agreement with experimental data using a full-scale and a quarter-scale model. Results show that the sound insulation characteristics of a party wall at low frequencies strongly depend on the modal characteristics of the sound field of both rooms and of the partition. The effect of three edge conditions of the separating wall on the sound-level difference at low frequencies was examined: simply supported, clamped, and a combination of clamped and simply supported. It is demonstrated that a clamped partition provides a greater sound-level difference at low frequencies than a simply supported one. It is also confirmed that the sound-pressure level difference is lower in equal-room than in unequal-room configurations.
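
    The dependence on the modal characteristics of the rooms can be made concrete with the rigid-wall rectangular-room mode formula, f = (c/2)*sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2). The room dimensions in the sketch below are assumed for illustration and are not those of the study.

      import itertools

      def room_modes(Lx, Ly, Lz, c=343.0, fmax=100.0):
          """Rigid-wall rectangular-room mode frequencies below fmax, in Hz."""
          modes = []
          for nx, ny, nz in itertools.product(range(6), repeat=3):
              if (nx, ny, nz) == (0, 0, 0):
                  continue
              f = 0.5 * c * ((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2) ** 0.5
              if f <= fmax:
                  modes.append((f, (nx, ny, nz)))
          return sorted(modes)

      # Assumed 4.1 m x 3.2 m x 2.5 m room: the lowest axial modes fall around 40-70 Hz,
      # the range where room-to-room modal coupling governs the level difference.
      for f, n in room_modes(4.1, 3.2, 2.5)[:5]:
          print(f"{f:5.1f} Hz  mode {n}")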

  16. Common sole larvae survive high levels of pile-driving sound in controlled exposure experiments.

    PubMed

    Bolle, Loes J; de Jong, Christ A F; Bierman, Stijn M; van Beek, Pieter J G; van Keeken, Olvin A; Wessels, Peter W; van Damme, Cindy J G; Winter, Hendrik V; de Haan, Dick; Dekeling, René P A

    2012-01-01

    In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero to peak pressure levels up to 210 dB re 1 µPa² (zero to peak pressures up to 32 kPa) and single pulse sound exposure levels up to 186 dB re 1 µPa²s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised.
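
    The cumulative figure quoted above follows from the standard energy summation over identical impulses, SEL_cum = SEL_ss + 10*log10(N). The sketch below simply reproduces the 206 dB value from the stated 186 dB single-pulse level and 100 strikes; it is a generic summation rule, not a method specific to this study.

      import math

      def cumulative_sel(single_strike_sel_dB, n_strikes):
          """Energy summation of identical impulses: SEL_cum = SEL_ss + 10*log10(N)."""
          return single_strike_sel_dB + 10 * math.log10(n_strikes)

      print(cumulative_sel(186.0, 100))   # -> 206.0 dB re 1 uPa^2 s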

  17. Common Sole Larvae Survive High Levels of Pile-Driving Sound in Controlled Exposure Experiments

    PubMed Central

    Bolle, Loes J.; de Jong, Christ A. F.; Bierman, Stijn M.; van Beek, Pieter J. G.; van Keeken, Olvin A.; Wessels, Peter W.; van Damme, Cindy J. G.; Winter, Hendrik V.; de Haan, Dick; Dekeling, René P. A.

    2012-01-01

    In view of the rapid extension of offshore wind farms, there is an urgent need to improve our knowledge on possible adverse effects of underwater sound generated by pile-driving. Mortality and injuries have been observed in fish exposed to loud impulse sounds, but knowledge on the sound levels at which (sub-)lethal effects occur is limited for juvenile and adult fish, and virtually non-existent for fish eggs and larvae. A device was developed in which fish larvae can be exposed to underwater sound. It consists of a rigid-walled cylindrical chamber driven by an electro-dynamical sound projector. Samples of up to 100 larvae can be exposed simultaneously to a homogeneously distributed sound pressure and particle velocity field. Recorded pile-driving sounds could be reproduced accurately in the frequency range between 50 and 1000 Hz, at zero to peak pressure levels up to 210 dB re 1 µPa² (zero to peak pressures up to 32 kPa) and single pulse sound exposure levels up to 186 dB re 1 µPa²s. The device was used to examine lethal effects of sound exposure in common sole (Solea solea) larvae. Different developmental stages were exposed to various levels and durations of pile-driving sound. The highest cumulative sound exposure level applied was 206 dB re 1 µPa²s, which corresponds to 100 strikes at a distance of 100 m from a typical North Sea pile-driving site. The results showed no statistically significant differences in mortality between exposure and control groups at sound exposure levels which were well above the US interim criteria for non-auditory tissue damage in fish. Although our findings cannot be extrapolated to fish larvae in general, as interspecific differences in vulnerability to sound exposure may occur, they do indicate that previous assumptions and criteria may need to be revised. PMID:22431996

  18. Thermoacoustic sound projector: exceeding the fundamental efficiency of carbon nanotubes.

    PubMed

    Aliev, Ali E; Codoluto, Daniel; Baughman, Ray H; Ovalle-Robles, Raquel; Inoue, Kanzan; Romanov, Stepan A; Nasibulin, Albert G; Kumar, Prashant; Priya, Shashank; Mayo, Nathanael K; Blottman, John B

    2018-08-10

    The combination of smooth, continuous sound spectra produced by a sound source having no vibrating parts, a nanoscale thickness of a flexible active layer and the feasibility of creating large, conformal projectors provoke interest in thermoacoustic phenomena. However, at low frequencies, the sound pressure level (SPL) and the sound generation efficiency of an open carbon nanotube sheet (CNTS) is low. In addition, the nanoscale thickness of fragile heating elements, their high sensitivity to the environment and the high surface temperatures practical for thermoacoustic sound generation necessitate protective encapsulation of a freestanding CNTS in inert gases. Encapsulation provides the desired increase of sound pressure towards low frequencies. However, the protective enclosure restricts heat dissipation from the resistively heated CNTS and the interior of the encapsulated device. Here, the heat dissipation issue is addressed by short pulse excitations of the CNTS. An overall increase of energy conversion efficiency by more than four orders (from 10⁻⁵ to 0.1) and the SPL of 120 dB re 20 μPa @ 1 m in air and 170 dB re 1 μPa @ 1 m in water were demonstrated. The short pulse excitation provides a stable linear increase of output sound pressure with substantially increased input power density (>2.5 W cm⁻²). We provide an extensive experimental study of pulse excitations in different thermodynamic regimes for freestanding CNTSs with varying thermal inertias (single-walled and multiwalled with varying diameters and numbers of superimposed sheet layers) in vacuum and in air. The acoustical and geometrical parameters providing further enhancement of energy conversion efficiency are discussed.

  19. Interpolated Sounding and Gridded Sounding Value-Added Products

    SciT

    Toto, T.; Jensen, M.

    Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data is provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP also is used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
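
    The central operation described above, linear interpolation in time at each height level, can be sketched as follows. The launch times, temperatures, and array shapes are hypothetical, and the real VAP involves further steps (for example, the MWR scaling of relative humidity) that are not shown.

      import numpy as np

      launch_times_h = np.array([0.0, 6.0, 12.0, 18.0])        # hours of sonde launches
      temp_at_level = np.array([[21.0, 24.0, 27.0, 22.0],      # level 0 temperatures
                                [14.0, 16.0, 18.0, 15.0]])     # level 1 temperatures
      grid_times_h = np.arange(0.0, 18.0 + 1e-9, 1.0 / 60.0)   # 1-minute output grid

      # Interpolate each height level independently onto the fixed time grid.
      interpolated = np.vstack([np.interp(grid_times_h, launch_times_h, temp_at_level[k])
                                for k in range(temp_at_level.shape[0])])
      print(interpolated.shape)   # (levels, minutes)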

  20. Automated Blood Pressure Measurement

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The Vital-2 unit pictured is a semi-automatic device that permits highly accurate blood pressure measurement, even by untrained personnel. Developed by Meditron Instrument Corporation, Milford, New Hampshire, it is based in part on NASA technology found in a similar system designed for automatic monitoring of astronauts' blood pressure. Vital-2 is an advancement over the familiar arm cuff, dial and bulb apparatus customarily used for blood pressure checks. In that method, the physician squeezes the bulb to inflate the arm cuff, which restricts the flow of blood through the arteries. As he eases the pressure on the arm, he listens, through a stethoscope, to the sounds of resumed blood flow as the arteries expand and contract. Taking dial readings related to sound changes, he gets the systolic (contracting) and diastolic (expanding) blood pressure measurements. The accuracy of the method depends on the physician's skill in interpreting the sounds. Hospitals sometimes employ a more accurate procedure, but it is "invasive," involving insertion of a catheter in the artery.

  1. The monster sound pipe

    NASA Astrophysics Data System (ADS)

    Ruiz, Michael J.; Perkins, James

    2017-03-01

    Producing a deep bass tone by striking a large 3 m (10 ft) flexible corrugated drainage pipe immediately grabs student attention. The fundamental pitch of the corrugated tube is found to be a semitone lower than a non-corrugated smooth pipe of the same length. A video (https://youtu.be/FU7a9d7N60Y) of the demonstration is included, which illustrates how an Internet keyboard can be used to estimate the fundamental pitches of each pipe. Since both pipes have similar end corrections, the pitch discrepancy between the smooth pipe and drainage tube is due to the corrugations, which lower the speed of sound inside the flexible tube, dropping its pitch a semitone.
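
    The semitone discrepancy translates directly into an effective in-tube speed of sound, since for an open-open pipe f ~ c / (2*L_eff) and one equal-tempered semitone is a frequency ratio of 2^(1/12). The back-of-the-envelope check below assumes a pipe radius of 2.5 cm for the end correction; that value is illustrative, not from the article.

      SEMITONE = 2 ** (1 / 12)      # frequency ratio of one equal-tempered semitone
      c_air = 343.0                 # speed of sound in air at ~20 degrees C, m/s
      L, r = 3.0, 0.025             # pipe length (m) and assumed radius (m)

      L_eff = L + 2 * 0.6 * r       # open-open pipe with ~0.6*r end correction per end
      f_smooth = c_air / (2 * L_eff)
      f_corrugated = f_smooth / SEMITONE   # one semitone lower, as observed
      c_effective = c_air / SEMITONE       # implied speed of sound inside the tube

      print(f"smooth pipe fundamental      ~{f_smooth:.1f} Hz")
      print(f"corrugated pipe fundamental  ~{f_corrugated:.1f} Hz")
      print(f"implied in-tube sound speed  ~{c_effective:.0f} m/s")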

  2. The sounds of science

    NASA Astrophysics Data System (ADS)

    Carlowicz, Michael

    As scientists carefully study some aspects of the ocean environment, are they unintentionally distressing others? That is a question to be answered by Robert Benson and his colleagues in the Center for Bioacoustics at Texas A&M University.With help from a 3-year, $316,000 grant from the U.S. Office of Naval Research, Benson will study how underwater noise produced by naval operations and other sources may affect marine mammals. In Benson's study, researchers will generate random sequences of low-frequency, high-intensity (180-decibel) sounds in the Gulf of Mexico, working at an approximate distance of 1 km from sperm whale herds. Using an array of hydrophones, the scientists will listen to the characteristic clicks and whistles of the sperm whales to detect changes in the animals' direction, speed, and depth, as derived from fluctuations in their calls.

  3. Sounds Clear Enough

    NASA Technical Reports Server (NTRS)

    Zak, Alan

    2004-01-01

    I'm a vice president at Line6, where we produce electronics for musical instruments. My company recently developed a guitar that can be programmed to sound like twenty-five different classic guitars - everything from a 1928 National 'Tricone' to a 1970 Martin. It is quite an amazing piece of technology. The guitar started as a research project because we needed to know if the technology was going to be viable and if the guitar design was going to be practical. I've been in this business for about twenty years now, and I still enjoy starting up projects whenever the opportunity presents itself. During the research phase, I headed up the project myself. Once we completed our preliminary research and made the decision to move into development, that's when I handed the project off - and that's where this story really begins.

  4. Study of the occlusion effect induced by an earplug: Numerical modelling and experimental validation

    NASA Astrophysics Data System (ADS)

    Brummund, Martin

    Despite existing limits for occupational noise exposure, professional hearing loss remains a high priority problem both in Quebec and worldwide. Several approaches exist to protect workers from harmful noise levels. The most frequently employed short-term solution includes the distribution of hearing protection devices (HPD) such as earplugs and ear muffs. While HPDs offer an inexpensive (e.g. direct cost) and efficient means of protection, workers often tend to wear HPDs only for limited amounts of time and, thus, remain at risk of developing professional hearing loss. Discomfort while using HPDs contributes to HPD underutilization and non-use. Two more general categories of discomfort can be distinguished. The category physical discomfort includes, for instance, problems such as heating of the ear and irritation of the ear canal that occur upon earplug insertion. The category auditory discomfort refers to alterations in the auditory perception of sounds and one's own voice as well as hindered workplace communications. One important auditory discomfort that promotes HPD non-use is the occlusion effect. The occlusion effect occurs upon earplug insertion and describes sound amplification phenomena in the occluded ear canal at low frequencies. The sound amplification is both perceivable and measurable (e.g., open and occluded sound pressure levels, hearing threshold shift). Additionally, the occlusion effect causes the HPD wearer to perceive his/her own voice as distorted (e.g. hollow sounding), and physiological noises (e.g. respiration, blood circulation) are also amplified after earplug insertion. Reducing the occlusion effect has the potential to increase the auditory comfort of HPDs and could help prevent occupational hearing loss in the future. In order to improve this and other shortcomings observed with currently existing HPDs, a large research collaboration between the Robert-Sauve research institute in occupational health and safety

  5. Scattering of sound by atmospheric turbulence predictions in a refractive shadow zone

    NASA Technical Reports Server (NTRS)

    Mcbride, Walton E.; Bass, Henry E.; Raspet, Richard; Gilbert, Kenneth E.

    1990-01-01

    According to ray theory, regions exist in an upward refracting atmosphere where no sound should be present. Experiments show, however, that appreciable sound levels penetrate these so-called shadow zones. Two mechanisms contribute to sound in the shadow zone: diffraction and turbulent scattering of sound. Diffractive effects can be pronounced at lower frequencies but are small at high frequencies. In the short wavelength limit, then, scattering due to turbulence should be the predominant mechanism involved in producing the sound levels measured in shadow zones. No existing analytical method includes turbulence effects in the prediction of sound pressure levels in upward refractive shadow zones. In order to obtain quantitative average sound pressure level predictions, a numerical simulation of the effect of atmospheric turbulence on sound propagation is performed. The simulation is based on scattering from randomly distributed scattering centers ('turbules'). Sound pressure levels are computed for many realizations of a turbulent atmosphere. Predictions from the numerical simulation are compared with existing theories and experimental data.
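
    The abstract's approach, computing levels over many random realizations of scattering centers, can be illustrated with a minimal Monte Carlo sketch. This is not the authors' code; the geometry, frequency, turbule density, and scattering strengths below are arbitrary assumptions, and each turbule is treated as a weak point scatterer with simple spherical spreading.

```python
import numpy as np

# Minimal sketch (not the authors' code): average sound pressure level at a
# shadow-zone receiver over many realizations of randomly placed "turbules",
# each treated as a weak point scatterer of the incident field.
rng = np.random.default_rng(0)
f, c = 1000.0, 343.0                 # frequency [Hz], sound speed [m/s] (assumed)
k = 2 * np.pi * f / c                # wavenumber [1/m]
src = np.array([0.0, 0.0, 2.0])      # source position [m] (assumed)
rcv = np.array([200.0, 0.0, 1.5])    # receiver deep in the shadow zone [m] (assumed)

def realization(n_turbules=200):
    """One random atmosphere: complex scattered pressure at the receiver."""
    pos = rng.uniform([50, -30, 5], [150, 30, 60], size=(n_turbules, 3))
    strength = rng.normal(0.0, 1e-2, n_turbules)   # assumed scattering strengths
    r1 = np.linalg.norm(pos - src, axis=1)         # source -> turbule path lengths
    r2 = np.linalg.norm(pos - rcv, axis=1)         # turbule -> receiver path lengths
    # spherical spreading and phase accumulated along each scattering path
    return np.sum(strength * np.exp(1j * k * (r1 + r2)) / (r1 * r2))

p_mean_sq = np.mean([abs(realization())**2 for _ in range(500)])
spl = 10 * np.log10(p_mean_sq / (20e-6)**2)        # dB re 20 uPa
print(f"mean scattered SPL in shadow zone: {spl:.1f} dB")
```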

  6. Aerodynamic sound generation of flapping wing.

    PubMed

    Bae, Youngmin; Moon, Young J

    2008-07-01

    The unsteady flow and acoustic characteristics of the flapping wing are numerically investigated for a two-dimensional model of a Bombus terrestris bumblebee at hovering and forward flight conditions. The Reynolds number Re, based on the maximum translational velocity of the wing and the chord length, is 8800 and the Mach number M is 0.0485. The computational results show that the flapping-wing sound is generated by two different sound generation mechanisms. A primary dipole tone is generated at the wing beat frequency by the transverse motion of the wing, while other higher-frequency dipole tones are produced via vortex edge scattering during a tangential motion. It is also found that the primary tone is directional because of the torsional angle in the wing motion. These features are only distinct for hovering; in the forward flight condition, the wing-vortex interaction becomes more prominent due to the free stream effect. As a result, the sound pressure level spectrum is more broadband at higher frequencies and the frequency compositions become similar in all directions.

  7. Transfer of knowledge from sound quality measurement to noise impact evaluation

    NASA Astrophysics Data System (ADS)

    Genuit, Klaus

    2004-05-01

    It is well known that the measurement and analysis of sound quality requires a complex procedure with consideration of the physical, psychoacoustical and psychological aspects of sound. Sound quality cannot be described only by a simple value based on A-weighted sound pressure level measurements. The A-weighted sound pressure level is sufficient to predict the probability that the human ear could be damaged by sound, but it is not the correct descriptor for the annoyance of a complex sound situation comprising several different sound events at different and especially moving positions (soundscape). On the one hand, consideration of the spectral distribution and the temporal pattern (psychoacoustics) is required; on the other hand, the subjective attitude with respect to the sound situation, and the expectation and experience of the people (psychology), have to be included in the complete noise impact evaluation. This paper describes applications of the newest sound quality measurement methods, well established among car manufacturers, based on artificial head recordings and signal processing comparable to human hearing, applied to noisy environments such as community/traffic noise.
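
    As a point of reference for the limitation described above, the sketch below shows how a single A-weighted level is formed by collapsing an octave-band spectrum. The band levels are hypothetical; the A-weighting corrections are the standard tabulated values. Whatever spectral and temporal detail psychoacoustic metrics retain is lost in this one number.

```python
import numpy as np

# Minimal sketch: combine octave-band levels into a single A-weighted level.
# The band levels are hypothetical; the A-weighting corrections are the
# standard octave-band values (rounded to 0.1 dB).
bands_hz = [63,    125,   250,  500,  1000, 2000, 4000, 8000]
level_db = [75.0,  72.0,  70.0, 68.0, 65.0, 62.0, 58.0, 52.0]   # assumed spectrum
a_weight = [-26.2, -16.1, -8.6, -3.2, 0.0,  1.2,  1.0,  -1.1]

weighted = np.array(level_db) + np.array(a_weight)
la = 10 * np.log10(np.sum(10 ** (weighted / 10)))   # energetic sum of weighted bands
print(f"A-weighted level: {la:.1f} dB(A)")
# The single dB(A) figure discards the spectral balance and temporal pattern
# that sound-quality metrics are designed to retain.
```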

  8. Nonlinear behavior of the tarka flute's distinctive sounds.

    PubMed

    Gérard, Arnaud; Yapu-Quispe, Luis; Sakuma, Sachiko; Ghezzi, Flavio; Ramírez-Ávila, Gonzalo Marcelo

    2016-09-01

    The Andean tarka flute generates multiphonic sounds. Using spectral techniques, we verify two distinctive musical behaviors and the nonlinear nature of the tarka. Through nonlinear time series analysis, we determine chaotic and hyperchaotic behavior. Experimentally, we observe that by increasing the blow pressure on different fingerings, peculiar changes from linear to nonlinear patterns are produced, leading ultimately to quenching.

  9. Nonlinear behavior of the tarka flute's distinctive sounds

    NASA Astrophysics Data System (ADS)

    Gérard, Arnaud; Yapu-Quispe, Luis; Sakuma, Sachiko; Ghezzi, Flavio; Ramírez-Ávila, Gonzalo Marcelo

    2016-09-01

    The Andean tarka flute generates multiphonic sounds. Using spectral techniques, we verify two distinctive musical behaviors and the nonlinear nature of the tarka. Through nonlinear time series analysis, we determine chaotic and hyperchaotic behavior. Experimentally, we observe that by increasing the blow pressure on different fingerings, peculiar changes from linear to nonlinear patterns are produced, leading ultimately to quenching.

  10. Tuning, Validation, and Uncertainty Estimates for a Sound Exposure Model

    DTIC Science & Technology

    2011-09-01

    to swell height. This phenomenon is described in "Observations of Fluctuation of Transmitted Sound in Shallow Water" (Urick 1969). Mean wave...newport/usrdiv/Transducers/G34.pdf] 40 Saunders, P. M., 1981: Practical Conversion of Pressure to Depth. J. Phys. Oceanogr., 11, 573–574. Urick, R

  11. Advanced Systems for Monitoring Underwater Sounds

    NASA Technical Reports Server (NTRS)

    Lane, Michael; Van Meter, Steven; Gilmore, Richard Grant; Sommer, Keith

    2007-01-01

    data acquired by instrumentation systems other than PAMS. A PAMS is packaged as a battery-powered unit, mated with external sensors, that can operate in the ocean at any depth from 2 m to 1 km. A PAMS includes a pressure housing, a deep-sea battery, a hydrophone (which is one of the mating external sensors), and an external monitor and keyboard box. In addition to acoustic transducers, external sensors can include temperature probes and, potentially, underwater cameras. The pressure housing contains a computer that includes a hard drive, DC-to-DC power converters, a post-amplifier board, a sound card, and a universal serial bus (USB) 4-port hub.

  12. Sound Standards for Schools "Unsound."

    ERIC Educational Resources Information Center

    Davis, Don

    2002-01-01

    Criticizes new classroom sound standard proposed by the American National Standards Institute that sets maximum background sound level at 35 decibels (described as "a whisper at 2 meters"). Argues that new standard is too costly for schools to implement, is not recommended by the medical community, and cannot be achieved by construction…

  13. Designing a Sound Reducing Wall

    ERIC Educational Resources Information Center

    Erk, Kendra; Lumkes, John; Shambach, Jill; Braile, Larry; Brickler, Anne; Matthys, Anna

    2015-01-01

    Acoustical engineers use their knowledge of sound to design quiet environments (e.g., classrooms and libraries) as well as to design environments that are supposed to be loud (e.g., concert halls and football stadiums). They also design sound barriers, such as the walls along busy roadways that decrease the traffic noise heard by people in…

  14. Sound attenuation apparatus

    NASA Technical Reports Server (NTRS)

    Shepherd, Kevin P. (Inventor); Grosveld, Ferdinand M. W. A. (Inventor)

    1991-01-01

    An apparatus is disclosed for reducing acoustic transmission from mechanical or acoustic sources by means of a double wall partition, within which an acoustic pressure field is generated by at least one secondary acoustic source. The secondary acoustic source is advantageously placed within the partition, around its edges, or it may be an integral part of a wall of the partition.

  15. Prediction on the Enhancement of the Impact Sound Insulation to a Floating Floor with Resilient Interlayer

    NASA Astrophysics Data System (ADS)

    Huang, Xianfeng; Meng, Yao; Huang, Riming

    2017-10-01

    This paper describes a theoretical method for predicting the improvement of the impact sound insulation of a floating floor with a resilient interlayer. A statistical energy analysis (SEA) model, well suited to calculating floor impact sound, is set up to calculate the reduction in impact sound pressure level in the downstairs room. The sound transmission paths, which include the direct path and flanking paths, are analyzed to find the dominant one, and the factors that affect impact sound reduction for a floating floor are explored. The impact sound level in the downstairs room is then determined, and comparisons between predicted and measured data are conducted. It is shown that, for impact sound transmission across a floating floor, the flanking paths contribute little to the overall sound level in the downstairs room, and a floating floor with a low-stiffness interlayer exhibits favorable sound insulation on the direct path. The SEA approach, which is experimentally verified, applies to floating floors with resilient interlayers and provides guidance for sound insulation design.
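
    For orientation, the classical single-degree-of-freedom estimate of the improvement provided by a floating floor, roughly 40 log10(f/f0) above the mass-spring resonance f0 of the slab on its resilient interlayer, can be sketched as follows. This textbook relation is not the paper's SEA model, and the stiffness and surface-mass values are assumed for illustration only.

```python
import numpy as np

# Textbook estimate (not the paper's SEA model): improvement of impact sound
# insulation by a floating floor above its mass-spring resonance f0,
#   Delta_L ~ 40*log10(f/f0).
# The interlayer stiffness and slab surface mass below are assumed values.
s_per_area = 12e6     # dynamic stiffness per unit area of resilient layer [N/m^3]
m_per_area = 120.0    # surface mass of floating slab [kg/m^2]

f0 = (1.0 / (2 * np.pi)) * np.sqrt(s_per_area / m_per_area)   # resonance [Hz]
print(f"resonance frequency f0 ~ {f0:.0f} Hz")
for f in (125, 250, 500, 1000):
    dl = max(0.0, 40 * np.log10(f / f0))
    print(f"{f:5d} Hz: predicted improvement ~ {dl:4.1f} dB")
```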

  16. Acoustic transistor: Amplification and switch of sound by sound

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Kan, Wei-wei; Zou, Xin-ye; Yin, Lei-lei; Cheng, Jian-chun

    2014-08-01

    We designed an acoustic transistor to manipulate sound in a manner similar to the manipulation of electric current by its electrical counterpart. The acoustic transistor is a three-terminal device with the essential ability to use a small monochromatic acoustic signal to control a much larger output signal within a broad frequency range. The output and controlling signals have the same frequency, suggesting the possibility of cascading the structure to amplify an acoustic signal. Capable of amplifying and switching sound by sound, acoustic transistors have various potential applications and may open the way to the design of conceptual devices such as acoustic logic gates.

  17. Perfect sound insulation property of reclaimed waste tire rubber

    NASA Astrophysics Data System (ADS)

    Ubaidillah, Harjana, Yahya, Iwan; Kristiani, Restu; Muqowi, Eki; Mazlan, Saiful Amri

    2016-03-01

    This article reports an experimental investigation of the sound insulation and absorption performance of a material made of reclaimed ground tire rubber, a thermoset generally regarded as unrecyclable. The bulk waste tire is processed using a single-step recycling method, namely high-pressure high-temperature sintering (HPHTS). The bulk waste tire is simply placed into a mold, and a pressure load of 3 tons and a heating temperature of 200°C are applied to the mold. The HPHTS is conducted for an hour, and the product is then cooled at room temperature. The resulting product is then evaluated for its acoustical properties, namely sound transmission loss (STL) and sound absorption coefficient, using a B&K Tube Kit Type 4206-T based on ISO 10534-2, ASTM E1050 and ASTM E2611. The sound absorption coefficient is found to be about 0.04 to 0.08, while the STL ranges between 50 and 60 dB. The sound absorption values are thus very low (<0.1), while the average STL is higher than that of other elastomeric matrices found in previous work. The tire rubber reclaimed through the HPHTS technique therefore gives good soundproofing characteristics.

  18. 75 FR 76079 - Sound Incentive Compensation Guidance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Sound Incentive Compensation Guidance... on the following information collection. Title of Proposal: Sound Incentive Compensation Guidance... Sound Compensation Practices adopted by the Financial Stability Board (FSB) in April 2009, as well as...

  19. The isolation of low frequency impact sounds in hotel construction

    NASA Astrophysics Data System (ADS)

    LoVerde, John J.; Dong, David W.

    2002-11-01

    One of the design challenges in the acoustical design of hotels is reducing low-frequency sounds from footfalls occurring on both carpeted and hard-surfaced floors. Research on low-frequency impact noise [W. Blazier and R. DuPree, J. Acoust. Soc. Am. 96, 1521-1532 (1994)] concluded that in wood construction low-frequency impact sounds were clearly audible and that feasible control methods were not available. The results of numerous FIIC (Field Impact Insulation Class) measurements performed in accordance with ASTM E1007 indicate a lack of correlation between FIIC ratings and the reaction of occupants in the room below. The measurements presented include FIIC ratings and sound pressure level measurements below the ASTM E1007 low-frequency limit of 100 Hertz, and reveal that excessive sound levels in the frequency range of 63 to 100 Hertz correlate with occupant complaints. Based upon this history, a tentative criterion for maximum impact sound level in the low-frequency range is presented. The results presented for modifying existing constructions to reduce the transmission of impact sounds at low frequencies indicate that there may be practical solutions to this longstanding problem.

  20. EUVS Sounding Rocket Payload

    NASA Technical Reports Server (NTRS)

    Stern, Alan S.

    1996-01-01

    During the first half of this year (CY 1996), the EUVS project began preparations of the EUVS payload for the upcoming NASA sounding rocket flight 36.148CL, slated for launch on July 26, 1996 to observe and record a high-resolution (approx. 2 A FWHM) EUV spectrum of the planet Venus. These preparations were designed to improve the spectral resolution and sensitivity performance of the EUVS payload as well as prepare the payload for this upcoming mission. The following is a list of the EUVS project activities that have taken place since the beginning of this CY: (1) Applied a fresh, new SiC optical coating to our existing 2400 groove/mm grating to boost its reflectivity; (2) modified the Ranicon science detector to boost its detective quantum efficiency with the addition of a repeller grid; (3) constructed a new entrance slit plane to achieve 2 A FWHM spectral resolution; (4) prepared and held the Payload Initiation Conference (PIC) with the assigned NASA support team from Wallops Island for the upcoming 36.148CL flight (PIC held on March 8, 1996; see Attachment A); (5) began wavelength calibration activities of EUVS in the laboratory; (6) made arrangements for travel to WSMR to begin integration activities in preparation for the July 1996 launch; (7) paper detailing our previous EUVS Venus mission (NASA flight 36.117CL) published in Icarus (see Attachment B); and (8) continued data analysis of the previous EUVS mission 36.137CL (Spica occultation flight).

  1. Relations among pure-tone sound stimuli, neural activity, and the loudness sensation

    NASA Technical Reports Server (NTRS)

    Howes, W. L.

    1972-01-01

    Both the physiological and psychological responses to pure-tone sound stimuli are used to derive formulas which: (1) relate the loudness, loudness level, and sound-pressure level of pure tones; (2) apply continuously over most of the acoustic regime, including the loudness threshold; and (3) contain no undetermined coefficients. Some of the formulas are fundamental for calculating the loudness of any sound. Power-law formulas relating the pure-tone sound stimulus, neural activity, and loudness are derived from published data.
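
    The report's own formulas are not reproduced here, but the flavor of such power-law relations can be illustrated with the classical sone/phon rule: for a 1 kHz pure tone the loudness level in phons equals the SPL in dB, and loudness in sones doubles for every 10 phon above 40 phon, which is equivalent to a power law of roughly the 0.6 power of sound pressure.

```python
# Classical Stevens-type relations (illustrative only, not the report's exact
# formulas): loudness N in sones from loudness level L_N in phons,
#   N = 2**((L_N - 40)/10),
# which for a 1 kHz pure tone (phons == dB SPL) is equivalent to N ~ p**0.6.
def sones_from_phons(l_phon: float) -> float:
    return 2.0 ** ((l_phon - 40.0) / 10.0)

for spl in (40, 50, 60, 70, 80):          # 1 kHz tone: phons equal dB SPL
    print(f"{spl} dB SPL -> {sones_from_phons(spl):5.1f} sones")
```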

  2. The shock formation distance in a bounded sound beam of finite amplitude.

    PubMed

    Tao, Chao; Ma, Jian; Zhu, Zhemin; Du, Gonghuan; Ping, Zihong

    2003-07-01

    This paper investigates the shock formation distance in a bounded sound beam of finite amplitude by solving the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation using a frequency-domain numerical method. Simulation results reveal that, besides nonlinearity and absorption, diffraction is another important factor that affects the shock formation of a bounded sound beam. More detailed discussions of the shock formation in a bounded sound beam, such as the waveform of sound pressure and the spatial distribution of shock formation, are also presented and compared for different parameters.
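
    For orientation, the classical lossless plane-wave result gives a shock formation distance of 1/(beta*eps*k), with beta the coefficient of nonlinearity, eps the acoustic Mach number, and k the wavenumber; the bounded-beam KZK solution of the paper modifies this through diffraction and absorption. The sketch below evaluates only this plane-wave reference scale, with assumed source parameters for water.

```python
import numpy as np

# Plane-wave, lossless reference scale only (not the paper's bounded-beam
# KZK result): shock formation distance x_bar = 1 / (beta * eps * k).
# All parameter values below are assumed.
c0   = 1500.0     # sound speed in water [m/s]
rho0 = 1000.0     # density [kg/m^3]
beta = 3.5        # coefficient of nonlinearity for water
f    = 1.0e6      # source frequency [Hz]
p0   = 1.0e5      # source pressure amplitude [Pa]

eps   = p0 / (rho0 * c0**2)       # acoustic Mach number u0/c0
k     = 2 * np.pi * f / c0        # wavenumber
x_bar = 1.0 / (beta * eps * k)
print(f"plane-wave shock formation distance ~ {x_bar:.2f} m")
```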

  3. Sound Clocks and Sonic Relativity

    NASA Astrophysics Data System (ADS)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which `acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite the existence of a preferred frame in their universe (the laboratory frame). Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
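
    The Lorentz factor referred to above has its usual form, with the speed of sound playing the role of the invariant speed:

```latex
% Standard Lorentz factor, with c the speed of sound in the medium and
% v the speed of the clock chain relative to the medium (laboratory frame).
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
\Delta t_{\mathrm{moving}} = \gamma\,\Delta t_{\mathrm{rest}}, \qquad
\ell_{\mathrm{moving}} = \ell_{\mathrm{rest}}/\gamma .
```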

  4. Dimensions of vehicle sounds perception.

    PubMed

    Wagner, Verena; Kallus, K Wolfgang; Foehl, Ulrich

    2017-10-01

    Vehicle sounds play an important role in customer satisfaction and can serve as a differentiating factor between brands. With an online survey of 1762 German and American customers, the requirement characteristics of high-quality vehicle sounds were determined. On the basis of these characteristics, a requirement profile was generated for every analyzed sound. These profiles were investigated in a second study with 78 customers using real vehicles. The assessment results of the vehicle sounds can be represented using the dimensions "timbre", "loudness", and "roughness/sharpness". The comparison of the requirement profiles and the assessment results shows that sounds perceived as pleasant and high-quality more often correspond to the requirement profile. High-quality sounds are characterized by the fact that they are rather gentle, soft and reserved, rich, a bit dark and not too rough. For those sounds which are assessed worse by the customers, recommendations for improvements can be derived. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Pitch features of environmental sounds

    NASA Astrophysics Data System (ADS)

    Yang, Ming; Kang, Jian

    2016-07-01

    A number of soundscape studies have suggested the need for suitable parameters for soundscape measurement, in addition to the conventional acoustic parameters. This paper explores the applicability of pitch features that are often used in music analysis and their algorithms to environmental sounds. Based on the existing alternative pitch algorithms for simulating the perception of the auditory system and simplified algorithms for practical applications in the areas of music and speech, the applicable algorithms have been determined, considering common types of sound in everyday soundscapes. Considering a number of pitch parameters, including pitch value, pitch strength, and percentage of audible pitches over time, different pitch characteristics of various environmental sounds have been shown. Among the four sound categories, i.e. water, wind, birdsongs, and urban sounds, generally speaking, both water and wind sounds have low pitch values and pitch strengths; birdsongs have high pitch values and pitch strengths; and urban sounds have low pitch values and a relatively wide range of pitch strengths.

  6. Simulation of low pressure water hammer

    NASA Astrophysics Data System (ADS)

    Himr, D.; Habán, V.

    2010-08-01

    A numerical solution of water hammer is presented in this paper. The contribution is focused on water hammer in the low-pressure regime, which is completely different from the high-pressure case. A small volume of air in the water and the influence of the pipe are assumed, which cause the sound speed to change with pressure. The computation is compared with experimental measurements.
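
    A common textbook way to capture both effects mentioned (a small entrained air fraction and pipe-wall compliance) is Wood's equation for the gas-liquid mixture combined with the Korteweg correction for pipe elasticity. This is a generic model, not necessarily the authors' formulation, and all parameter values below are assumed.

```python
import numpy as np

# Generic textbook model (not necessarily the authors' formulation):
# Wood's equation for a bubbly liquid plus the Korteweg pipe-elasticity
# correction. All parameter values are assumed for illustration.
p      = 2.0e5      # absolute pressure [Pa]
alpha  = 1.0e-4     # air volume fraction at pressure p
rho_l, K_l = 1000.0, 2.2e9            # water: density [kg/m^3], bulk modulus [Pa]
rho_g, K_g = 1.2 * p / 1.0e5, 1.4 * p # air: density (ideal-gas scaling), adiabatic bulk modulus
E, D, e    = 2.0e11, 0.1, 0.005       # pipe: Young's modulus [Pa], diameter, wall thickness [m]

rho_m  = alpha * rho_g + (1 - alpha) * rho_l
K_m    = 1.0 / (alpha / K_g + (1 - alpha) / K_l)    # Wood's mixture bulk modulus
c_mix  = np.sqrt(K_m / rho_m)                       # sound speed in the free mixture
c_pipe = c_mix / np.sqrt(1 + (K_m * D) / (E * e))   # Korteweg correction for the pipe
print(f"mixture sound speed: {c_mix:6.1f} m/s, in elastic pipe: {c_pipe:6.1f} m/s")
```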

  7. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.

  8. Time frequency analysis of sound from a maneuvering rotorcraft

    NASA Astrophysics Data System (ADS)

    Stephenson, James H.; Tinney, Charles E.; Greenwood, Eric; Watts, Michael E.

    2014-10-01

    The acoustic signatures produced by a full-scale Bell 430 helicopter during steady level flight and transient roll-right maneuvers are analyzed by way of time-frequency analysis. The roll-right maneuvers comprise both a medium and a fast roll rate. Data are acquired using a single ground-based microphone and are analyzed by way of the Morlet wavelet transform to extract the spectral properties and sound pressure levels as functions of time. The findings show that during maneuvering operations of the helicopter, both the overall sound pressure level and the blade-vortex interaction sound pressure level are greatest when the roll rate of the vehicle is at its maximum. The reduced inflow in the region of the rotor disk where blade-vortex interaction noise originates is determined to be the cause of the increase in noise. A local decrease in inflow reduces the miss distance of the tip vortex and thereby increases the BVI noise signature. Blade loading and advance ratios are also investigated as possible mechanisms for increased sound production, but are shown to be fairly constant throughout the maneuvers.
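
    A minimal sketch of the kind of Morlet-wavelet scalogram used for such time-frequency analysis is given below. It uses SciPy's continuous wavelet transform, and a synthetic chirp stands in for the measured helicopter pressure signal; the authors' data and processing chain are not reproduced.

```python
import numpy as np
from scipy import signal

# Minimal sketch of a Morlet-wavelet scalogram (SciPy cwt/morlet2 API).
# A synthetic chirp stands in for the measured pressure signal.
fs = 2000.0                                   # sample rate [Hz] (assumed)
t = np.arange(0, 4.0, 1.0 / fs)
sig = np.cos(2 * np.pi * (20 + 15 * t) * t)   # frequency rises during a "maneuver"

w = 6.0                                       # Morlet width parameter
freqs = np.linspace(5, 200, 150)              # analysis frequencies [Hz]
widths = w * fs / (2 * np.pi * freqs)         # wavelet scales for those frequencies
cwtm = signal.cwt(sig, signal.morlet2, widths, w=w)

# Time-varying level of the scalogram, in dB relative to its maximum.
level_db = 20 * np.log10(np.abs(cwtm) / np.abs(cwtm).max() + 1e-12)
print(level_db.shape)   # (n_freqs, n_samples): spectral content vs time
```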

  9. The influence of crowd density on the sound environment of commercial pedestrian streets.

    PubMed

    Meng, Qi; Kang, Jian

    2015-04-01

    Commercial pedestrian streets are very common in China and Europe, with many situated in historic or cultural centres. The environments of these streets are important, including their sound environments. The objective of this study is to explore the relationships between the crowd density and the sound environments of commercial pedestrian streets. On-site measurements were performed at the case study site in Harbin, China, and a questionnaire was administered. The sound pressure measurements showed that the crowd density has an insignificant effect on sound pressure below 0.05 persons/m2, whereas when the crowd density is greater than 0.05 persons/m2, the sound pressure increases with crowd density. The sound sources were analysed, showing that several typical sound sources, such as traffic noise, can be masked by the sounds resulting from dense crowds. The acoustic analysis showed that crowd densities outside the range of 0.10 to 0.25 persons/m2 exhibited lower acoustic comfort evaluation scores. In terms of audiovisual characteristics, the subjective loudness increases with greater crowd density, while the acoustic comfort decreases. The results for an indoor underground shopping street are also presented for comparison. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Behavioral responses of a harbor porpoise (Phocoena phocoena) to playbacks of broadband pile driving sounds.

    PubMed

    Kastelein, Ronald A; van Heerden, Dorianne; Gransier, Robin; Hoek, Lean

    2013-12-01

    The high underwater sound pressure levels (SPLs) produced during pile driving to build offshore wind turbines may affect harbor porpoises. To estimate the discomfort threshold of pile driving sounds, a porpoise in a quiet pool was exposed to playbacks (46 strikes/min) at five SPLs (6 dB steps: 130-154 dB re 1 μPa). The spectrum of the impulsive sound resembled the spectrum of pile driving sound at tens of kilometers from the pile driving location in shallow water such as that found in the North Sea. The animal's behavior during test and baseline periods was compared. At and above a received broadband SPL of 136 dB re 1 μPa [zero-to-peak sound pressure level: 151 dB re 1 μPa; t90: 126 ms; sound exposure level of a single strike (SELss): 127 dB re 1 μPa²·s] the porpoise's respiration rate increased in response to the pile driving sounds. At higher levels, he also jumped out of the water more often. Wild porpoises are expected to move tens of kilometers away from offshore pile driving locations; response distances will vary with context, the sounds' source level, parameters influencing sound propagation, and background noise levels. Copyright © 2013 Elsevier Ltd. All rights reserved.
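
    The metrics quoted above (broadband SPL, zero-to-peak SPL, and single-strike sound exposure level) can be computed from a calibrated pressure time series as sketched below; the synthetic decaying pulse is only a stand-in for a recorded pile-driving strike.

```python
import numpy as np

# Minimal sketch: standard underwater impulse metrics computed from a
# calibrated pressure time series p(t) in pascals. A synthetic decaying
# pulse stands in for a recorded pile-driving strike.
fs = 50_000.0                                  # sample rate [Hz] (assumed)
t = np.arange(0, 0.2, 1.0 / fs)
p = 50.0 * np.exp(-t / 0.02) * np.sin(2 * np.pi * 300 * t)   # pressure [Pa], assumed

p_ref = 1e-6                                   # 1 uPa reference (underwater acoustics)
spl_rms = 20 * np.log10(np.sqrt(np.mean(p**2)) / p_ref)          # broadband SPL
spl_zp  = 20 * np.log10(np.max(np.abs(p)) / p_ref)               # zero-to-peak SPL
sel     = 10 * np.log10(np.sum(p**2) / fs / (p_ref**2 * 1.0))    # SEL re 1 uPa^2 s

print(f"SPL(rms) {spl_rms:.1f} dB, zero-to-peak SPL {spl_zp:.1f} dB, SEL {sel:.1f} dB")
```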

  11. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic

    PubMed Central

    de Boer, Bart

    2015-01-01

    When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: In rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults’ generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants’ reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases that affect learning these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories. PMID:27648212

  12. Cognitive Bias for Learning Speech Sounds From a Continuous Signal Space Seems Nonlinguistic.

    PubMed

    van der Ham, Sabine; de Boer, Bart

    2015-10-01

    When learning language, humans have a tendency to produce more extreme distributions of speech sounds than those observed most frequently: In rapid, casual speech, vowel sounds are centralized, yet cross-linguistically, peripheral vowels occur almost universally. We investigate whether adults' generalization behavior reveals selective pressure for communication when they learn skewed distributions of speech-like sounds from a continuous signal space. The domain-specific hypothesis predicts that the emergence of sound categories is driven by a cognitive bias to make these categories maximally distinct, resulting in more skewed distributions in participants' reproductions. However, our participants showed more centered distributions, which goes against this hypothesis, indicating that there are no strong innate linguistic biases that affect learning these speech-like sounds. The centralization behavior can be explained by a lack of communicative pressure to maintain categories.

  13. Sound Wave Energy Resulting from the Impact of Water Drops on the Soil Surface

    PubMed Central

    Ryżak, Magdalena; Bieganowski, Andrzej; Korbiel, Tomasz

    2016-01-01

    The splashing of water drops on a soil surface is the first step of water erosion. There have been many investigations into splashing–most are based on recording and analysing images taken with high-speed cameras, or measuring the mass of the soil moved by splashing. Here, we present a new aspect of the splash phenomenon’s characterization: the measurement of the sound pressure level and the sound energy of the wave that propagates in the air. The measurements were carried out for 10 consecutive water drop impacts on the soil surface. Three soils were tested (Endogleyic Umbrisol, Fluvic Endogleyic Cambisol and Haplic Chernozem) with four initial moisture levels (pressure heads: 0.1 kPa, 1 kPa, 3.16 kPa and 16 kPa). We found that the values of the sound pressure and sound wave energy were dependent on the particle size distribution of the soil, less dependent on the initial pressure head, and practically the same for subsequent water drops (from the first to the tenth drop). The highest sound pressure level (and the greatest variability) was for Endogleyic Umbrisol, which had the highest sand fraction content. The sound pressure for this soil increased from 29 dB to 42 dB with successive drops falling on the sample. The smallest values (and the lowest variability) were for Fluvic Endogleyic Cambisol, which had the highest clay fraction. For all experiments the sound pressure level ranged from ~27 to ~42 dB and the energy emitted in the form of sound waves was within the range of 0.14 μJ to 5.26 μJ. This was from 0.03 to 1.07% of the energy of the incident drops. PMID:27388276

  14. Sound Wave Energy Resulting from the Impact of Water Drops on the Soil Surface.

    PubMed

    Ryżak, Magdalena; Bieganowski, Andrzej; Korbiel, Tomasz

    2016-01-01

    The splashing of water drops on a soil surface is the first step of water erosion. There have been many investigations into splashing-most are based on recording and analysing images taken with high-speed cameras, or measuring the mass of the soil moved by splashing. Here, we present a new aspect of the splash phenomenon's characterization: the measurement of the sound pressure level and the sound energy of the wave that propagates in the air. The measurements were carried out for 10 consecutive water drop impacts on the soil surface. Three soils were tested (Endogleyic Umbrisol, Fluvic Endogleyic Cambisol and Haplic Chernozem) with four initial moisture levels (pressure heads: 0.1 kPa, 1 kPa, 3.16 kPa and 16 kPa). We found that the values of the sound pressure and sound wave energy were dependent on the particle size distribution of the soil, less dependent on the initial pressure head, and practically the same for subsequent water drops (from the first to the tenth drop). The highest sound pressure level (and the greatest variability) was for Endogleyic Umbrisol, which had the highest sand fraction content. The sound pressure for this soil increased from 29 dB to 42 dB with successive drops falling on the sample. The smallest values (and the lowest variability) were for Fluvic Endogleyic Cambisol, which had the highest clay fraction. For all experiments the sound pressure level ranged from ~27 to ~42 dB and the energy emitted in the form of sound waves was within the range of 0.14 μJ to 5.26 μJ. This was from 0.03 to 1.07% of the energy of the incident drops.

  15. Concerns of the Institute of Transport Study and Research for reducing the sound level inside completely repaired buses. [noise and vibration control

    NASA Technical Reports Server (NTRS)

    Groza, A.; Calciu, J.; Nicola, I.; Ionasek, A.

    1974-01-01

    Sound level measurements on noise sources on buses are used to observe the effects of attenuating acoustic pressure levels inside the bus by sound-proofing during complete repair. A spectral analysis of the sound level as a function of motor speed, bus speed along the road, and the category of the road is reported.

  16. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
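
    A minimal sketch of the ICA step described above is given below, using scikit-learn's FastICA on a synthetic two-channel mixture. It is not the authors' pipeline; the sources and the mixing matrix are invented placeholders rather than recorded binaural data.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Minimal sketch (not the authors' pipeline): unmix a two-channel "binaural"
# recording into statistically independent components with FastICA.
# Synthetic sources and a random-looking mixing matrix stand in for real data.
rng = np.random.default_rng(1)
fs, dur = 16000, 2.0
t = np.arange(int(fs * dur)) / fs
s1 = np.sign(np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 440 * t)   # source 1
s2 = rng.laplace(size=t.size)                                           # source 2
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # assumed left/right mixing
X = S @ A.T                                 # two-"ear" observations

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                # estimated independent components
print(S_est.shape, ica.mixing_.shape)       # (n_samples, 2), (2, 2)
```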

  17. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  18. The hearing threshold of a harbor porpoise (Phocoena phocoena) for impulsive sounds (L).

    PubMed

    Kastelein, Ronald A; Gransier, Robin; Hoek, Lean; de Jong, Christ A F

    2012-08-01

    The distance at which harbor porpoises can hear underwater detonation sounds is unknown, but depends, among other factors, on the hearing threshold of the species for impulsive sounds. Therefore, the underwater hearing threshold of a young harbor porpoise for an impulsive sound, designed to mimic a detonation pulse, was quantified by using a psychophysical technique. The synthetic exponential pulse with a 5 ms time constant was produced and transmitted by an underwater projector in a pool. The resulting underwater sound, though modified by the response of the projection system and by the pool, exhibited the characteristic features of detonation sounds: a zero-to-peak sound pressure level at least 30 dB (re 1 s⁻¹) higher than the sound exposure level, and a short duration (34 ms). The animal's 50% detection threshold for this impulsive sound occurred at a received unweighted broadband sound exposure level of 60 dB re 1 μPa²·s. It is shown that the porpoise's audiogram for short-duration tonal signals [Kastelein et al., J. Acoust. Soc. Am. 128, 3211-3222 (2010)] can be used to estimate its hearing threshold for impulsive sounds.

  19. Assessment of sound levels in a neonatal intensive care unit in tabriz, iran.

    PubMed

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-03-01

    High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound.

  20. Assessment of Sound Levels in a Neonatal Intensive Care Unit in Tabriz, Iran

    PubMed Central

    Valizadeh, Sousan; Bagher Hosseini, Mohammad; Alavi, Nasrinsadat; Asadollahi, Malihe; Kashefimehr, Siamak

    2013-01-01

    Introduction: High levels of sound have several negative effects, such as noise-induced hearing loss and delayed growth and development, on premature infants in neonatal intensive care units (NICUs). In order to reduce sound levels, they should first be measured. This study was performed to assess sound levels and determine sources of noise in the NICU of Alzahra Teaching Hospital (Tabriz, Iran). Methods: In a descriptive study, 24 hours in 4 workdays were randomly selected. Equivalent continuous sound level (Leq), sound level that is exceeded only 10% of the time (L10), maximum sound level (Lmax), and peak instantaneous sound pressure level (Lzpeak) were measured by CEL-440 sound level meter (SLM) at 6 fixed locations in the NICU. Data was collected using a questionnaire. SPSS13 was then used for data analysis. Results: Mean values of Leq, L10, and Lmax were determined as 63.46 dBA, 65.81 dBA, and 71.30 dBA, respectively. They were all higher than standard levels (Leq < 45 dB, L10 ≤50 dB, and Lmax ≤65 dB). The highest Leq was measured at the time of nurse rounds. Leq was directly correlated with the number of staff members present in the ward. Finally, sources of noise were ordered based on their intensity. Conclusion: Considering that sound levels were higher than standard levels in our studied NICU, it is necessary to adopt policies to reduce sound. PMID:25276706
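
    The reported quantities can be computed from a sequence of short-interval A-weighted levels as sketched below; the sample values are randomly generated placeholders rather than NICU data.

```python
import numpy as np

# Minimal sketch: Leq, L10 and Lmax from a sequence of short-interval
# A-weighted levels L_i in dBA. The sample values are hypothetical.
rng = np.random.default_rng(2)
levels = rng.normal(60, 4, size=3600)      # e.g. one hour of 1-second samples [dBA]

leq  = 10 * np.log10(np.mean(10 ** (levels / 10)))   # energy-equivalent level
l10  = np.percentile(levels, 90)                     # level exceeded 10% of the time
lmax = levels.max()
print(f"Leq {leq:.1f} dBA, L10 {l10:.1f} dBA, Lmax {lmax:.1f} dBA")
```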

  1. Dynamic Pressure Microphones

    NASA Astrophysics Data System (ADS)

    Werner, E.

    In 1876, Alexander Graham Bell described his first telephone with a microphone using magnetic induction to convert the voice input into an electric output signal. The basic principle led to a variety of designs optimized for different needs, from hearing impaired users to singers or broadcast announcers. From the various sound pressure versions, only the moving coil design is still in mass production for speech and music application.

  2. Experimental Investigations on Two Potential Sound Diffuseness Measures in Enclosures

    NASA Astrophysics Data System (ADS)

    Bai, Xin

    This study investigates two different approaches to measure sound field diffuseness in enclosures from monophonic room impulse responses. One approach quantifies sound field diffuseness in enclosures by calculating the kurtosis of the pressure samples of room impulse responses. Kurtosis is a statistical measure that is known to describe the peakedness or tailedness of the distribution of a set of data. High kurtosis indicates low diffuseness of the sound field of interest. The other one relies on multifractal detrended fluctuation analysis which is a way to evaluate the statistical self-affinity of a signal to measure diffuseness. To test these two approaches, room impulse responses are obtained under varied room-acoustic diffuseness configurations, achieved by using varied degrees of diffusely reflecting interior surfaces. This paper will analyze experimentally measured monophonic room impulse responses, and discuss results from these two approaches.
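
    A minimal sketch of the kurtosis-based indicator is shown below. A synthetic impulse response (a Gaussian-like reverberant decay with a few strong discrete reflections added) stands in for measured data; higher kurtosis of the pressure samples indicates a less diffuse field.

```python
import numpy as np
from scipy.stats import kurtosis

# Minimal sketch: kurtosis of room-impulse-response pressure samples as a
# diffuseness indicator. A synthetic RIR stands in for measured data.
rng = np.random.default_rng(3)
fs, dur = 48000, 1.0
t = np.arange(int(fs * dur)) / fs

rir = rng.normal(size=t.size) * np.exp(-t / 0.3)   # Gaussian-like reverberant decay
rir[[300, 1200, 2500]] += [2.0, 1.5, 1.0]          # a few strong discrete reflections

# Pearson definition (Gaussian -> 3): larger values indicate a less diffuse field.
k = kurtosis(rir, fisher=False)
print(f"kurtosis of pressure samples: {k:.2f}")
```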

  3. Sound speed measurements in liquid oxygen-liquid nitrogen mixtures

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J.; Mazel, D. S.

    1985-01-01

    The sound speed in liquid oxygen (LOX), liquid nitrogen (LN2), and five LOX-LN2 mixtures was measured by an ultrasonic pulse-echo technique at temperatures in the vicinity of -195.8 °C, the boiling point of N2 at a pressure of 1 atm. Under these conditions, the measurements yield the following relationship between sound speed in meters per second and LN2 content M in mole percent: c = 1009.05 − 1.8275M + 0.0026507M². The sound speeds of 1009.05 m/s ± 0.25 percent for pure LOX and 852.8 m/s ± 0.32 percent for pure LN2 are compared with those reported by past investigators. Measurement of sound speed should prove an effective means for monitoring the contamination of LOX by LN2.
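
    The quoted quadratic fit can be evaluated directly, for example to estimate the sound speed of intermediate mixtures:

```python
def sound_speed_lox_ln2(m_ln2_percent: float) -> float:
    """Sound speed [m/s] from the quadratic fit quoted in the abstract,
    with M the LN2 content in mole percent (near -195.8 C, 1 atm)."""
    m = m_ln2_percent
    return 1009.05 - 1.8275 * m + 0.0026507 * m**2

for m in (0, 25, 50, 75, 100):
    print(f"{m:3d} mol% LN2 -> {sound_speed_lox_ln2(m):7.1f} m/s")
```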

  4. Time dependent wave envelope finite difference analysis of sound propagation

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.

    1984-01-01

    A transient finite difference wave envelope formulation is presented for sound propagation, without steady flow. Before the finite difference equations are formulated, the governing wave equation is first transformed to a form whose solution tends not to oscillate along the propagation direction. This transformation reduces the required number of grid points by an order of magnitude. Physically, the transformed pressure represents the amplitude of the conventional sound wave. The derivation for the wave envelope transient wave equation and appropriate boundary conditions are presented as well as the difference equations and stability requirements. To illustrate the method, example solutions are presented for sound propagation in a straight hard wall duct and in a two dimensional straight soft wall duct. The numerical results are in good agreement with exact analytical results.

  5. Realization of an omnidirectional source of sound using parametric loudspeakers.

    PubMed

    Sayin, Umut; Artís, Pere; Guasch, Oriol

    2013-09-01

    Parametric loudspeakers are often used in beam forming applications where a high directivity is required. Withal, in this paper it is proposed to use such devices to build an omnidirectional source of sound. An initial prototype, the omnidirectional parametric loudspeaker (OPL), consisting of a sphere with hundreds of ultrasonic transducers placed on it has been constructed. The OPL emits audible sound thanks to the parametric acoustic array phenomenon, and the close proximity and the large number of transducers results in the generation of a highly omnidirectional sound field. Comparisons with conventional dodecahedron loudspeakers have been made in terms of directivity, frequency response, and in applications such as the generation of diffuse acoustic fields in reverberant chambers. The OPL prototype has performed better than the conventional loudspeaker especially for frequencies higher than 500 Hz, its main drawback being the difficulty to generate intense pressure levels at low frequencies.

  6. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 8 2010-10-01 2010-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  7. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 8 2012-10-01 2012-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  8. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 8 2013-10-01 2013-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  9. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 8 2014-10-01 2014-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  10. 46 CFR 298.14 - Economic soundness.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 8 2011-10-01 2011-10-01 false Economic soundness. 298.14 Section 298.14 Shipping... Eligibility § 298.14 Economic soundness. (a) Economic Evaluation. We shall not issue a Letter Commitment for... you seek Title XI financing or refinancing, will be economically sound. The economic soundness and...

  11. Nonlinear Sound Field by Interdigital Transducers in Water

    NASA Astrophysics Data System (ADS)

    Maezawa, Miyuki; Kamada, Rui; Kamakura, Tomoo; Matsuda, Kazuhisa

    2008-05-01

    Nonlinear ultrasound beams in water radiated by a surface acoustic wave (SAW) device are examined experimentally and theoretically. SAWs on a 128° X-cut Y-propagation LiNbO3 substrate are excited by 50 pairs of interdigital transducers (IDTs). The device, with a 2 × 10 mm² rectangular aperture and a center frequency of 20 MHz, radiates two ultrasound beams in the direction of the Rayleigh angle determined by the propagation speed of the SAW on the device and of the longitudinal wave in water. The Rayleigh angle is 22° in the present experimental situation. The fundamental and second harmonic sound pressures are measured along and across the beam using a miniature hydrophone whose active element is 0.4 mm in diameter and whose frequency response is calibrated up to 40 MHz. The Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is utilized to theoretically predict sound pressure amplitudes. The theoretical predictions of both the fundamental and second harmonic pressures agree well with the measured sound pressures.
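
    The quoted 22° beam direction follows from the usual Rayleigh-angle relation sin(theta_R) = c_water/c_SAW; the sound speeds used below are typical assumed values rather than numbers taken from the paper.

```python
import numpy as np

# Rayleigh-angle relation sin(theta_R) = c_water / c_SAW.
# The sound speeds below are assumed typical values (not taken from the paper).
c_water = 1480.0   # longitudinal wave speed in water [m/s]
c_saw   = 3980.0   # SAW speed on 128-degree LiNbO3 [m/s], assumed
theta_r = np.degrees(np.arcsin(c_water / c_saw))
print(f"Rayleigh angle ~ {theta_r:.1f} degrees")   # ~22 degrees, as in the abstract
```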

  12. Sounds of a Star

    NASA Astrophysics Data System (ADS)

    2001-06-01

    Acoustic Oscillations in Solar-Twin "Alpha Cen A" Observed from La Silla by Swiss Team. Summary: Sound waves running through a star can help astronomers reveal its inner properties. This particular branch of modern astrophysics is known as "asteroseismology". In the case of our Sun, the brightest star in the sky, such waves have been observed for some time and have greatly improved our knowledge about what is going on inside. However, because they are much fainter, it has turned out to be very difficult to detect similar waves in other stars. Nevertheless, tiny oscillations in a solar-twin star have now been unambiguously detected by Swiss astronomers François Bouchy and Fabien Carrier from the Geneva Observatory, using the CORALIE spectrometer on the Swiss 1.2-m Leonard Euler telescope at the ESO La Silla Observatory. This telescope is mostly used for discovering exoplanets (see ESO PR 07/01). The star Alpha Centauri A is the nearest star visible to the naked eye, at a distance of a little more than 4 light-years. The new measurements show that it pulsates with a 7-minute cycle, very similar to what is observed in the Sun. Asteroseismology for Sun-like stars is likely to become an important probe of stellar theory in the near future. The state-of-the-art HARPS spectrograph, to be mounted on the ESO 3.6-m telescope at La Silla, will be able to search for oscillations in stars that are 100 times fainter than those for which such demanding observations are possible with CORALIE. PR Photo 23a/01: Oscillations in a solar-like star (schematic picture); the photo is a graphical representation of resonating acoustic waves in the interior of a solar-like star. PR Photo 23b/01: Acoustic spectrum of Alpha Centauri A, as observed with CORALIE.

  13. Sounds of silence: How to animate virtual worlds with sound

    NASA Technical Reports Server (NTRS)

    Astheimer, Peter

    1993-01-01

    Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.

  14. Marine Forage Fishes in Puget Sound

    DTIC Science & Technology

    2007-03-01

    Orcas in Puget Sound. Puget Sound Nearshore Partnership Report No. 2007-01. Published by Seattle District, U.S. Army Corps of Engineers, Seattle...Technical Report 2007-03, Marine Forage Fishes in Puget Sound. Prepared in support of the Puget Sound Nearshore Partnership. Dan Penttila, Washington...Forage Fishes in Puget Sound, Valued Ecosystem Components Report Series. Front cover: Pacific herring (courtesy of Washington Sea Grant). Back cover

  15. Probe-tube microphone measures in hearing-impaired children and adults.

    PubMed

    Barlow, N L; Auslander, M C; Rines, D; Stelmachowicz, P G

    1988-10-01

    This study was designed to investigate the reliability of real-ear measurements of sound pressure level (SPL) and to compare these values with two coupler measures of SPL. A commercially available probe-tube microphone system was used to measure real-ear SPL in both children and adults. Test-retest reliability decreased as a function of frequency for both groups and, in general, was slightly poorer for the children. For both groups, coupler-to-real-ear differences were larger for the 2 cm³ coupler than for the reduced-volume coupler; however, no significant differences were observed between groups. In addition, a measure of ear canal volume was not found to be a good predictor of coupler-to-real-ear discrepancies.

  16. Sound propagation from a ridge wind turbine across a valley.

    PubMed

    Van Renterghem, Timothy

    2017-04-13

    Sound propagation outdoors can be strongly affected by ground topography. The existence of hills and valleys between a source and receiver can lead to the shielding or focusing of sound waves. Such effects can result in significant variations in received sound levels. In addition, wind speed and air temperature gradients in the atmospheric boundary layer also play an important role. All of the foregoing factors can become especially important for the case of wind turbines located on a ridge overlooking a valley. Ridges are often selected for wind turbines in order to increase their energy capture potential through the wind speed-up effects often experienced in such locations. In this paper, a hybrid calculation method is presented to model such a case, relying on an analytical solution for sound diffraction around an impedance cylinder and the conformal mapping (CM) Green's function parabolic equation (GFPE) technique. The various aspects of the model have been successfully validated against alternative prediction methods. Example calculations with this hybrid analytical-CM-GFPE model show the complex sound pressure level distribution across the valley and the effect of valley ground type. The proposed method has the potential to include the effect of refraction through the inclusion of complex wind and temperature fields, although this aspect has been highly simplified in the current simulations.This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).

  17. Interaction of Sound from Supersonic Jets with Nearby Structures

    NASA Technical Reports Server (NTRS)

    Fenno, C. C., Jr.; Bayliss, A.; Maestrello, L.

    1997-01-01

    A model of sound generated in an ideally expanded supersonic (Mach 2) jet is solved numerically. Two configurations are considered: (1) a free jet and (2) an installed jet with a nearby array of flexible aircraft-type panels. In the latter case, the panels vibrate in response to loading by sound from the jet, and the full coupling between the panels and the jet is considered, accounting for panel response and radiation. The long-time behavior of the jet is considered. Results are presented for the near-field and far-field disturbances, the far-field pressure, and the vibration of and radiation from the panels. Panel response depends crucially on the location of the panels. Panels located upstream of the Mach cone are subject to a low-level, nearly continuous spectral excitation and consequently exhibit a low-level, relatively continuous spectral response. In contrast, panels located within the Mach cone are subject to significant loading due to the intense Mach wave radiation of sound and exhibit a large, relatively peaked spectral response centered around the peak frequency of sound radiation. The panels radiate in a similar fashion to the sound in the jet, in particular exhibiting a relatively peaked spectral response at approximately the Mach angle from the bounding wall.

  18. Robust Feedback Control of Flow Induced Structural Radiation of Sound

    NASA Technical Reports Server (NTRS)

    Heatwole, Craig M.; Bernhard, Robert J.; Franchek, Matthew A.

    1997-01-01

    A significant component of the interior noise of aircraft and automobiles is a result of turbulent boundary layer excitation of the vehicular structure. In this work, active robust feedback control of the noise due to this non-predictable excitation is investigated. Both an analytical model and experimental investigations are used to determine the characteristics of the flow-induced structural sound radiation problem. The problem is shown to be broadband in nature, with large system uncertainties associated with the various operating conditions. Furthermore, the delay associated with sound propagation is shown to restrict the use of microphone feedback. The state-of-the-art control methodologies, IL synthesis and adaptive feedback control, are evaluated and shown to have limited success in solving this problem. A robust frequency-domain controller design methodology is developed for the problem of sound radiated from turbulent-flow-driven plates. The control design methodology uses frequency-domain sequential loop shaping techniques. System uncertainty, sound pressure level reduction performance, and actuator constraints are included in the design process. Using this design method, phase lag was added using non-minimum-phase zeros such that the beneficial plant dynamics could be used. This general control approach has application to lightly damped vibration and sound radiation problems where there are high-bandwidth control objectives requiring a low controller DC gain and controller order.

  19. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the various channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, recorded at close distance with lavalier microphones, with spatial correction applied through a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.
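
    The abstract above does not give the localization algorithm in detail, so the following is only a minimal Python sketch of amplitude-difference localization over a ring of directional microphones: the candidate azimuth whose predicted inter-channel level pattern best matches the measured one is selected. The cardioid directivity model, the microphone geometry, and the function names are illustrative assumptions, not the system described above.

```python
import numpy as np

# Minimal sketch: estimate a source azimuth from RMS amplitude differences
# across a ring of directional microphones. The cardioid directivity model
# and the least-squares matching are assumptions, not the paper's algorithm.

def channel_rms(signals):
    """RMS amplitude of each microphone channel (rows = channels)."""
    return np.sqrt(np.mean(signals**2, axis=1))

def estimate_azimuth(signals, mic_azimuths_deg, candidates_deg=np.arange(0, 360, 1.0)):
    """Pick the candidate azimuth whose predicted level pattern best matches
    the measured inter-channel level differences (in dB)."""
    levels = 20 * np.log10(channel_rms(signals) + 1e-12)
    levels -= levels.mean()                      # keep only inter-channel differences
    mic_az = np.radians(mic_azimuths_deg)
    best, best_err = None, np.inf
    for cand in np.radians(candidates_deg):
        gain = 0.5 * (1.0 + np.cos(cand - mic_az))   # assumed cardioid directivity
        pred = 20 * np.log10(gain + 1e-12)
        pred -= pred.mean()
        err = np.sum((levels - pred) ** 2)
        if err < best_err:
            best, best_err = np.degrees(cand), err
    return best

# Example: 8 microphones spaced every 45 degrees, source near 60 degrees
rng = np.random.default_rng(0)
mic_az = np.arange(0, 360, 45)
true_gain = 0.5 * (1 + np.cos(np.radians(60 - mic_az)))
sig = true_gain[:, None] * rng.standard_normal((8, 4800))
print(estimate_azimuth(sig, mic_az))
```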

  20. Sounding the field: recent works in sound studies.

    PubMed

    Boon, Tim

    2015-09-01

    For sound studies, the publication of a 593-page handbook, not to mention the establishment of at least one society - the European Sound Studies Association - might seem to signify the emergence of a new academic discipline. Certainly, the books under consideration here, alongside many others, testify to an intensification of concern with the aural dimensions of culture. Some of this work comes from HPS and STS, some from musicology and cultural studies. But all of it should concern members of our disciplines, as it represents a long-overdue foregrounding of the aural in how we think about the intersections of science, technology and culture.

  1. Controlled sound field with a dual layer loudspeaker array

    NASA Astrophysics Data System (ADS)

    Shin, Mincheol; Fazi, Filippo M.; Nelson, Philip A.; Hirono, Fabio C.

    2014-08-01

    Controlled sound interference has been extensively investigated using a prototype dual-layer loudspeaker array comprising 16 loudspeakers. Results are presented for measures of array performance such as input signal power, directivity of sound radiation, and accuracy of sound reproduction resulting from the application of conventional control methods such as minimization of error in mean-squared pressure, maximization of energy difference, and minimization of weighted pressure error and energy. Procedures for selecting the tuning parameters have also been introduced. With these conventional concepts aimed at the production of acoustically bright and dark zones, all the control methods used require a trade-off between radiation directivity and reproduction accuracy in the bright zone. An alternative solution is proposed which can achieve better performance on the presented measures simultaneously by inserting a low-priority zone termed the “gray” zone. This involves the weighted minimization of mean-squared errors in the bright and dark zones together with the gray zone, in which the minimization error is given less importance. This results in the production of a directional bright zone in which the accuracy of sound reproduction is maintained with less input power required. The results of simulations and experiments are shown to be in excellent agreement.

  2. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on a compressive sensing algorithm is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model, as required in the pressure-matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
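
    As a minimal sketch of the pressure-matching step mentioned above, the snippet below solves for loudspeaker driving weights by Tikhonov-regularized least squares. The free-field monopole transfer matrix, the array geometries, and the regularization value are illustrative assumptions; the study instead uses a measured (reverberant) room response model.

```python
import numpy as np

# Minimal sketch of pressure matching with Tikhonov regularization: find
# loudspeaker weights q so that G q approximates the target pressures p_t
# at a set of control points. The free-field monopole transfer matrix is
# an assumption; the paper builds G from a measured room response model.

def greens_matrix(mic_xy, spk_xy, k):
    """Free-field monopole transfer matrix between control points and loudspeakers."""
    d = np.linalg.norm(mic_xy[:, None, :] - spk_xy[None, :, :], axis=-1)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

def pressure_matching(G, p_target, beta=1e-3):
    """Tikhonov-regularized least squares: q = (G^H G + beta I)^-1 G^H p_t."""
    GH = G.conj().T
    A = GH @ G + beta * np.eye(G.shape[1])
    return np.linalg.solve(A, GH @ p_target)

# Example: 32 loudspeakers on a rectangle, 24 control microphones on a circle
f, c = 500.0, 343.0
k = 2 * np.pi * f / c
edge = np.linspace(-2.0, 2.0, 9)[:-1]            # 8 points per edge, corners not repeated
spk = np.vstack([
    np.c_[edge, np.full(8, -2.0)],               # bottom edge
    np.c_[np.full(8, 2.0), edge],                # right edge
    np.c_[-edge, np.full(8, 2.0)],               # top edge
    np.c_[np.full(8, -2.0), -edge],              # left edge
])
ang = np.linspace(0, 2 * np.pi, 24, endpoint=False)
mic = 0.5 * np.c_[np.cos(ang), np.sin(ang)]

G = greens_matrix(mic, spk, k)
p_target = np.exp(-1j * k * mic @ np.array([1.0, 0.0]))   # target: plane wave along +x
q = pressure_matching(G, p_target)
print("reproduction error:", np.linalg.norm(G @ q - p_target) / np.linalg.norm(p_target))
```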

  3. Can joint sound assess soft and hard endpoints of the Lachman test?: A preliminary study.

    PubMed

    Hattori, Koji; Ogawa, Munehiro; Tanaka, Kazunori; Matsuya, Ayako; Uematsu, Kota; Tanaka, Yasuhito

    2016-05-12

    The Lachman test is considered to be a reliable physical examination for anterior cruciate ligament (ACL) injury. Patients with a damaged ACL demonstrate a soft endpoint feeling. However, examiners judge the soft and hard endpoints subjectively. The purpose of our study was to confirm objective performance of the Lachman test using joint auscultation. Human and porcine knee joints were examined. Knee joint sound during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound, defined as the maximum relative amplitude (acoustic pressure), and its frequency were used. The mean Lachman peak sound for healthy volunteer knees was 86.9 ± 12.9 Hz in frequency and -40 ± 2.5 dB in acoustic pressure. The mean Lachman peak sound for intact porcine knees was 84.1 ± 9.4 Hz and -40.5 ± 1.7 dB. Porcine knees with ACL deficiency had a soft endpoint feeling during the Lachman test. The Lachman peak sounds of porcine knees with ACL deficiency were dispersed into four distinct groups, with center frequencies of around 40, 160, 450, and 1600 Hz. The Lachman peak sound was capable of assessing the soft and hard endpoints of the Lachman test objectively.
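
    A minimal sketch of the peak-sound analysis described above: take the FFT of the recorded joint sound and report the peak relative amplitude (in dB) and its frequency. The sampling rate, window, analysis band, and 0-dB reference are assumptions, not the study's measurement chain.

```python
import numpy as np

# Minimal sketch of the FFT peak-sound analysis: peak frequency and relative
# amplitude (dB) of a recorded joint sound. Sampling rate, windowing and the
# full-scale 0-dB reference are illustrative assumptions.

def lachman_peak(signal, fs, fmin=20.0, fmax=2000.0):
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    spectrum /= np.sum(window) / 2.0            # a unit-amplitude sine then reads 1.0
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    i = np.argmax(spectrum[band])
    return freqs[band][i], 20 * np.log10(spectrum[band][i])

# Example: synthetic 87 Hz burst buried in noise, sampled at 8 kHz
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(1)
x = 0.01 * np.sin(2 * np.pi * 87 * t) * np.exp(-10 * t) + 1e-4 * rng.standard_normal(t.size)
freq, level_db = lachman_peak(x, fs)
print(f"peak at {freq:.1f} Hz, {level_db:.1f} dB re full scale")
```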

  4. Sound field measurement in a double layer cavitation cluster by rugged miniature needle hydrophones.

    PubMed

    Koch, Christian

    2016-03-01

    During multi-bubble cavitation the bubbles tend to organize themselves into clusters, and thus an understanding of the properties and dynamics of clustering is essential for controlling technical applications of cavitation. Sound field measurements are a potential technique for providing valuable experimental information about the status of cavitation clouds. Using purpose-made, rugged, wide-band, and small-sized needle hydrophones, sound field measurements in bubble clusters were performed, and time-dependent sound pressure waveforms were acquired and analyzed in the frequency domain up to 20 MHz. The cavitation clusters were synchronously observed by an electron multiplying charge-coupled device (EMCCD) camera, and the relation between the sound field measurements and cluster behaviour was investigated. Depending on the driving power, three ranges could be identified and characteristic properties were assigned. At low power settings, no transient and no or very low stable cavitation activity can be observed. The medium range is characterized by strong pressure peaks and various bubble cluster forms. At high power a stable double layer was observed, which grew with further increasing power and became quite dynamic. The sound field was irregular and the fundamental at the driving frequency decreased. Between the bubble clouds, completely different sound field properties were found in comparison to those in the cloud where the cavitation activity is high. In between, the sound field pressure amplitude was quite small and no collapses were detected. Copyright © 2015. Published by Elsevier B.V.

  5. Identification of impact force acting on composite laminated plates using the radiated sound measured with microphones

    NASA Astrophysics Data System (ADS)

    Atobe, Satoshi; Nonami, Shunsuke; Hu, Ning; Fukunaga, Hisao

    2017-09-01

    Foreign object impact events are serious threats to composite laminates because impact damage leads to significant degradation of the mechanical properties of the structure. Identification of the location and force history of the impact that was applied to the structure can provide useful information for assessing the structural integrity. This study proposes a method for identifying impact forces acting on CFRP (carbon fiber reinforced plastic) laminated plates on the basis of the sound radiated from the impacted structure. Identification of the impact location and force history is performed using the sound pressure measured with microphones. To devise a method for identifying the impact location from the difference in the arrival times of the sound wave detected with the microphones, the propagation path of the sound wave from the impacted point to the sensor is examined. For the identification of the force history, an experimentally constructed transfer matrix is employed to relate the force history to the corresponding sound pressure. To verify the validity of the proposed method, impact tests are conducted by using a CFRP cross-ply laminate as the specimen, and an impulse hammer as the impactor. The experimental results confirm the validity of the present method for identifying the impact location from the arrival time of the sound wave detected with the microphones. Moreover, the results of force history identification show the feasibility of identifying the force history accurately from the measured sound pressure using the experimental transfer matrix.
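
    The force-history step above relates the measured sound pressure to the force through an experimentally constructed transfer matrix. The sketch below illustrates the same idea with a synthetic convolution-type transfer matrix and Tikhonov-regularized inversion; the transfer path, sampling parameters, and regularization are assumptions, not the authors' experimental matrix.

```python
import numpy as np

# Minimal sketch of force-history identification from radiated sound:
# p = H f, where H maps the sampled force history f to the sampled
# microphone pressure p. Here H is a synthetic decaying-oscillation
# convolution matrix and f is recovered by Tikhonov-regularized
# inversion; both are assumptions for illustration only.

def convolution_matrix(h, n):
    """Column j holds h shifted down by j samples, so (H @ f)[i] = sum_j h[i-j] f[j]."""
    H = np.zeros((n, n))
    for j in range(n):
        m = min(len(h), n - j)
        H[j:j + m, j] = h[:m]
    return H

def recover_force(H, p, beta=1e-4):
    """Tikhonov-regularized least squares: f = (H^T H + beta I)^-1 H^T p."""
    A = H.T @ H + beta * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ p)

# Example: 2 ms half-sine impact, decaying oscillatory transfer path
fs, n = 20000, 400
t = np.arange(n) / fs
h = np.exp(-300 * t) * np.cos(2 * np.pi * 1500 * t)          # synthetic transfer path
f_true = np.where(t < 2e-3, np.sin(np.pi * t / 2e-3), 0.0)   # half-sine impact force
H = convolution_matrix(h, n)
rng = np.random.default_rng(2)
p = H @ f_true + 1e-3 * rng.standard_normal(n)               # "measured" sound pressure
f_est = recover_force(H, p)
print("relative error:", np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))
```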

  6. Microphone array measurement system for analysis of directional and spatial variations of sound fields.

    PubMed

    Gover, Bradford N; Ryan, James G; Stinson, Michael R

    2002-11-01

    A measurement system has been developed that is capable of analyzing the directional and spatial variations in a reverberant sound field. A spherical, 32-element array of microphones is used to generate a narrow beam that is steered in 60 directions. Using an omnidirectional loudspeaker as excitation, the sound pressure arriving from each steering direction is measured as a function of time, in the form of pressure impulse responses. By subsequent analysis of these responses, the variation of arriving energy with direction is studied. The directional diffusion and directivity index of the arriving sound can be computed, as can the energy decay rate in each direction. An analysis of the 32 microphone responses themselves allows computation of the point-to-point variation of reverberation time and of sound pressure level, as well as the spatial cross-correlation coefficient, over the extent of the array. The system has been validated in simple sound fields in an anechoic chamber and in a reverberation chamber. The system characterizes these sound fields as expected, both quantitatively from the measures and qualitatively from plots of the arriving energy versus direction. It is anticipated that the system will be of value in evaluating the directional distribution of arriving energy and the degree of diffuseness of sound fields in rooms.
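
    The abstract does not state which beamformer forms the steered beam, so the following is a minimal delay-and-sum sketch (channel alignment by frequency-domain phase shifts, then averaging), shown only as a common baseline. The spherical microphone layout, sampling rate, and test signal are synthetic assumptions.

```python
import numpy as np

# Minimal delay-and-sum sketch for steering a microphone array toward a chosen
# direction. The paper's exact beamformer is not given in the abstract; this is
# a common baseline. Geometry, sampling rate and signals are synthetic.

def steer_delay_and_sum(signals, mic_xyz, prop_dir, fs, c=343.0):
    """Compensate per-channel delays for a plane wave travelling along prop_dir
    (unit vector) and average the aligned channels."""
    delays = mic_xyz @ prop_dir / c                      # wavefront arrival delay per mic
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    shifted = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])  # advance each channel
    return np.fft.irfft(shifted.mean(axis=0), n=n)

# Example: 32 microphones on a 0.15 m sphere, plane wave travelling along +x
rng = np.random.default_rng(3)
i = np.arange(32)
phi = np.arccos(1 - 2 * (i + 0.5) / 32)                  # Fibonacci-like sphere sampling
theta = np.pi * (1 + 5 ** 0.5) * i
mic = 0.15 * np.c_[np.sin(phi) * np.cos(theta), np.sin(phi) * np.sin(theta), np.cos(phi)]

fs, n = 48000, 2048
t = np.arange(n) / fs
src = np.sin(2 * np.pi * 2000 * t)
prop_dir = np.array([1.0, 0.0, 0.0])
arrivals = mic @ prop_dir / 343.0
signals = np.array([np.interp(t - d, t, src, left=0.0, right=0.0) for d in arrivals])
signals += 0.1 * rng.standard_normal(signals.shape)

out_on = steer_delay_and_sum(signals, mic, prop_dir, fs)
out_off = steer_delay_and_sum(signals, mic, np.array([0.0, 0.0, 1.0]), fs)
print("RMS steered along the wave:", np.sqrt(np.mean(out_on**2)))
print("RMS steered 90 degrees off:", np.sqrt(np.mean(out_off**2)))
```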

  7. Dynamic sound localization in cats

    PubMed Central

    Ruhland, Janet L.; Jones, Amy E.

    2015-01-01

    Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772

  8. An ultrasound look at Korotkoff sounds: the role of pulse wave velocity and flow turbulence.

    PubMed

    Benmira, Amir; Perez-Martin, Antonia; Schuster, Iris; Veye, Florent; Triboulet, Jean; Berron, Nicolas; Aichoun, Isabelle; Coudray, Sarah; Laurent, Jérémy; Bereksi-Reguig, Fethi; Dauzat, Michel

    2017-04-01

    The aim of this study was to analyze the temporal relationships between pressure, flow, and Korotkoff sounds, providing clues for their comprehensive interpretation. When measuring blood pressure in a group of 23 volunteers, we used duplex Doppler ultrasonography to assess, under the arm-cuff, the brachial artery flow, diameter changes, and local pulse wave velocity (PWV), while recording Korotkoff sounds 10 cm downstream together with cuff pressure and ECG. The systolic (SBP) and diastolic (DBP) blood pressures were 118.8±17.7 and 65.4±10.4 mmHg, respectively (n=23). The brachial artery lumen started opening when cuff pressure decreased below the SBP and opened for an increasing length of time until cuff pressure reached the DBP, and then remained open but pulsatile. A high-energy low-frequency Doppler signal, starting a few milliseconds before flow, appeared and disappeared together with Korotkoff sounds at the SBP and DBP, respectively. Its median duration was 42.7 versus 41.1 ms for Korotkoff sounds (P=0.54; n=17). There was a 2.20±1.54 ms/mmHg decrement in the time delay between the ECG R-wave and the Korotkoff sounds during cuff deflation (n=18). The PWV was 10±4.48 m/s at null cuff pressure and showed a 0.62% decrement per mmHg when cuff pressure increased (n=13). Korotkoff sounds are associated with a high-energy low-frequency Doppler signal of identical duration, typically resulting from wall vibrations, followed by flow turbulence. Local arterial PWV decreases when cuff pressure increases. Exploiting these changes may help improve SBP assessment, which remains a challenge for oscillometric techniques.

  9. Investigation of the sound generation mechanisms for in-duct orifice plates.

    PubMed

    Tao, Fuyang; Joseph, Phillip; Zhang, Xin; Stalnov, Oksana; Siercke, Matthias; Scheel, Henning

    2017-08-01

    Sound generation due to an orifice plate in a hard-walled flow duct, a configuration commonly used in air distribution systems (ADS) and flow meters, is investigated. The aim is to provide an understanding of this noise generation mechanism based on measurements of the source pressure distribution over the orifice plate. A simple model based on Curle's acoustic analogy is described that relates the broadband in-duct sound field to the surface pressure cross spectrum on both sides of the orifice plate. This work describes careful measurements of the surface pressure cross spectrum over the orifice plate, from which the surface pressure distribution and correlation length are deduced. This information is then used to predict the radiated in-duct sound field. Agreement within 3 dB between the predicted and directly measured sound fields is obtained, providing direct confirmation that the surface pressure fluctuations acting over the orifice plates are the main noise sources. Based on the developed model, the contributions to the sound field from different radial locations of the orifice plate are calculated. The surface pressure is shown to follow a U^3.9 velocity scaling law, and the area over which the surface sources are correlated follows a U^1.8 velocity scaling law.
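
    As a minimal illustration of how a velocity scaling law such as U^3.9 is typically estimated, the sketch below fits the exponent by least squares in log-log coordinates. The flow speeds and "measured" levels are synthetic stand-ins, not the orifice-plate data of the study.

```python
import numpy as np

# Minimal sketch of estimating a velocity scaling exponent n in q ~ U^n by a
# least-squares fit in log-log coordinates. The flow speeds and "measured"
# surface-pressure quantities are synthetic stand-ins for the duct data.

def scaling_exponent(U, q):
    """Slope and prefactor of log(q) versus log(U)."""
    n, logk = np.polyfit(np.log(U), np.log(q), 1)
    return n, np.exp(logk)

rng = np.random.default_rng(4)
U = np.array([5.0, 7.5, 10.0, 15.0, 20.0])                     # duct flow speeds, m/s
q_true = 2e-3 * U ** 3.9                                       # mean-square surface pressure
q_meas = q_true * np.exp(0.05 * rng.standard_normal(U.size))   # ~5% multiplicative scatter
n, k = scaling_exponent(U, q_meas)
print(f"fitted exponent: {n:.2f} (expected ~3.9)")
```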

  10. Human knee joint sound during the Lachman test: Comparison between healthy and anterior cruciate ligament-deficient knees.

    PubMed

    Tanaka, Kazunori; Ogawa, Munehiro; Inagaki, Yusuke; Tanaka, Yasuhito; Nishikawa, Hitoshi; Hattori, Koji

    2017-05-01

    The Lachman test is clinically considered to be a reliable physical examination for anterior cruciate ligament (ACL) deficiency. However, the test involves subjective judgement of differences in tibial translation and endpoint quality. An auscultation system has been developed to allow assessment of the Lachman test. The knee joint sound during the Lachman test was analyzed using fast Fourier transformation. The purpose of the present study was to quantitatively evaluate knee joint sounds in healthy and ACL-deficient human knees. Sixty healthy volunteers and 24 patients with ACL injury were examined. The Lachman test with joint auscultation was evaluated using a microphone. Knee joint sound during the Lachman test (Lachman sound) was analyzed by fast Fourier transformation. As quantitative indices of the Lachman sound, the peak sound (Lachman peak sound) as the maximum relative amplitude (acoustic pressure) and its frequency were used. In healthy volunteers, the mean Lachman peak sound of intact knees was 100.6 Hz in frequency and -45 dB in acoustic pressure. Moreover, a sex difference was found in the frequency of the Lachman peak sound. In patients with ACL injury, the frequency of the Lachman peak sound of the ACL-deficient knees was widely dispersed. In the ACL-deficient knees, the mean Lachman peak sound was 306.8 Hz in frequency and -63.1 dB in acoustic pressure. If the reference range was set at the frequency of the healthy volunteer Lachman peak sound, the sensitivity, specificity, positive predictive value, and negative predictive value were 83.3%, 95.6%, 95.2%, and 85.2%, respectively. Knee joint auscultation during the Lachman test was capable of judging ACL deficiency on the basis of objective data. In particular, the frequency of the Lachman peak sound was able to assess ACL condition. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
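
    A minimal sketch of the diagnostic-accuracy computation reported above: classify a knee as ACL-deficient when its Lachman peak-sound frequency falls outside a healthy reference range, then tabulate sensitivity, specificity, PPV, and NPV. The frequencies, labels, and reference range below are illustrative, not the study data.

```python
import numpy as np

# Minimal sketch of the diagnostic-accuracy computation: classify a knee as
# ACL-deficient when its Lachman peak-sound frequency falls outside a healthy
# reference range, then tabulate sensitivity, specificity, PPV and NPV.
# The frequencies, labels and reference range are illustrative assumptions.

def diagnostic_metrics(peak_freq_hz, is_deficient, ref_low, ref_high):
    predicted = (peak_freq_hz < ref_low) | (peak_freq_hz > ref_high)
    tp = np.sum(predicted & is_deficient)
    tn = np.sum(~predicted & ~is_deficient)
    fp = np.sum(predicted & ~is_deficient)
    fn = np.sum(~predicted & is_deficient)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

freqs = np.array([95, 102, 110, 98, 310, 280, 450, 105, 160, 99])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0], dtype=bool)
print(diagnostic_metrics(freqs, labels, ref_low=70.0, ref_high=130.0))
```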

  11. Tipping point analysis of a large ocean ambient sound record

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Harris, Peter; Brower, Albert; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2017-04-01

    We study a long (2003-2015), high-resolution (250 Hz) sound pressure record provided by the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) from the hydro-acoustic station Cape Leeuwin (Australia). We transform the hydrophone waveforms into five bands of 10-min-average sound pressure levels (including the third-octave band) and apply tipping point analysis techniques [1-3]. We report the results of the analysis of fluctuations and trends in the data and discuss the Big Data challenges in processing this record, including handling data segments of large size and possible HPC solutions. References: [1] Livina et al., GRL 2007; [2] Livina et al., Climate of the Past 2010; [3] Livina et al., Chaos 2015.
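
    A minimal sketch of the first processing step described above, converting a hydrophone waveform into 10-min-average sound pressure levels in one band. The Butterworth band-pass filter and the 1 µPa underwater reference are conventional choices assumed here; the study's exact band definitions (e.g., the third-octave band) may differ.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Minimal sketch of turning a hydrophone waveform into 10-min-average sound
# pressure levels in one band. The Butterworth band-pass and the 1 uPa
# underwater reference are conventional assumptions, not the study's chain.

def band_spl_10min(pressure_pa, fs, f_lo, f_hi, p_ref=1e-6, window_s=600.0):
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    x = sosfilt(sos, pressure_pa)
    samples_per_window = int(window_s * fs)
    n_windows = len(x) // samples_per_window
    x = x[: n_windows * samples_per_window].reshape(n_windows, samples_per_window)
    mean_square = np.mean(x**2, axis=1)
    return 10 * np.log10(mean_square / p_ref**2)            # dB re 1 uPa

# Example: 30 minutes of synthetic "ambient" pressure noise sampled at 250 Hz
fs = 250
rng = np.random.default_rng(5)
p = 0.05 * rng.standard_normal(1800 * fs)                   # pressure in Pa
print(band_spl_10min(p, fs, f_lo=40.0, f_hi=50.0))          # three 10-min levels
```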

  12. Making sound vortices by metasurfaces

    SciT

    Ye, Liping; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang

    Based on the Huygens-Fresnel principle, a metasurface structure is designed to generate a sound vortex beam in airborne environment. The metasurface is constructed by a thin planar plate perforated with a circular array of deep subwavelength resonators with desired phase and amplitude responses. The metasurface approach in making sound vortices is validated well by full-wave simulations and experimental measurements. Potential applications of such artificial spiral beams can be anticipated, as exemplified experimentally by the torque effect exerting on an absorbing disk.

  13. Making sound vortices by metasurfaces

    NASA Astrophysics Data System (ADS)

    Ye, Liping; Qiu, Chunyin; Lu, Jiuyang; Tang, Kun; Jia, Han; Ke, Manzhu; Peng, Shasha; Liu, Zhengyou

    2016-08-01

    Based on the Huygens-Fresnel principle, a metasurface structure is designed to generate a sound vortex beam in airborne environment. The metasurface is constructed by a thin planar plate perforated with a circular array of deep subwavelength resonators with desired phase and amplitude responses. The metasurface approach in making sound vortices is validated well by full-wave simulations and experimental measurements. Potential applications of such artificial spiral beams can be anticipated, as exemplified experimentally by the torque effect exerting on an absorbing disk.

  14. The Multisensory Sound Lab: Sounds You Can See and Feel.

    ERIC Educational Resources Information Center

    Lederman, Norman; Hendricks, Paula

    1994-01-01

    A multisensory sound lab has been developed at the Model Secondary School for the Deaf (District of Columbia). A special floor allows vibrations to be felt, and a spectrum analyzer displays frequencies and harmonics visually. The lab is used for science education, auditory training, speech therapy, music and dance instruction, and relaxation…

  15. The sound intensity and characteristics of variable-pitch pulse oximeters.

    PubMed

    Yamanaka, Hiroo; Haruna, Junichi; Mashimo, Takashi; Akita, Takeshi; Kinouchi, Keiko

    2008-06-01

    Various studies worldwide have found that sound levels in hospitals significantly exceed the World Health Organization (WHO) guidelines, and that this noise is associated with audible signals from various medical devices. The pulse oximeter is now widely used in health care; however, the health effects associated with the noise from this equipment remain largely unclarified. Here, we analyzed the sounds of variable-pitch pulse oximeters, and discussed the possible associated risk of sleep disturbance, annoyance, and hearing loss. The Nellcor N 595 and Masimo SET Radical pulse oximeters were measured for equivalent continuous A-weighted sound pressure levels (LAeq), loudness levels, and loudness. Pulse beep pitches were also identified using Fast Fourier Transform (FFT) analysis and compared with musical pitches as controls. Almost all alarm sounds and pulse beeps from the instruments tested exceeded 30 dBA, a level that may induce sleep disturbance and annoyance. Several alarm sounds emitted by the pulse oximeters exceeded 70 dBA, which is known to induce hearing loss. The loudness of the alarm sound of each pulse oximeter did not change in proportion to the sound volume level. The pitch of each pulse beep did not correspond to musical pitch levels. The results indicate that sounds from pulse oximeters pose a potential risk of not only sleep disturbance and annoyance but also hearing loss, and that these sounds are unnatural for human auditory perception.
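
    A minimal sketch of an equivalent continuous A-weighted level (LAeq) computation: apply the standard IEC 61672 A-weighting magnitude response in the frequency domain, then form the energy-equivalent level. The sampling rate, calibration (pressure in Pa, 20 µPa reference), and test tone are assumptions, not the measurement chain used in the study.

```python
import numpy as np

# Minimal LAeq sketch: apply the standard IEC 61672 A-weighting magnitude
# response in the frequency domain, then form the energy-equivalent level.
# Sampling rate, calibration and test signal are illustrative assumptions.

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB (approximately 0 dB at 1 kHz)."""
    f = np.asarray(f, dtype=float)
    num = (12194.0**2) * f**4
    den = ((f**2 + 20.6**2)
           * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
           * (f**2 + 12194.0**2))
    with np.errstate(divide="ignore", invalid="ignore"):
        return 20 * np.log10(num / den) + 2.00

def laeq(pressure_pa, fs, p_ref=2e-5):
    spectrum = np.fft.rfft(pressure_pa)
    freqs = np.fft.rfftfreq(len(pressure_pa), d=1.0 / fs)
    gain = 10 ** (a_weighting_db(freqs) / 20.0)
    gain[0] = 0.0                                   # remove DC
    weighted = np.fft.irfft(spectrum * gain, n=len(pressure_pa))
    return 10 * np.log10(np.mean(weighted**2) / p_ref**2)

# Example: a 0.02 Pa RMS (60 dB SPL) tone at 1 kHz should give LAeq close to 60 dB(A)
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
p = 0.02 * np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)
print(f"LAeq = {laeq(p, fs):.1f} dB(A)")
```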

  16. Cochlear transducer operating point adaptation.

    PubMed

    Zou, Yuan; Zheng, Jiefu; Ren, Tianying; Nuttall, Alfred

    2006-04-01

    The operating point (OP) of outer hair cell (OHC) mechanotransduction can be defined as any shift away from the center position on the transduction function. It is a dc offset that can be described as a percentage of the maximum transduction current or as an equivalent dc pressure in the ear canal. The change of OP can be determined from the changes of the second and third harmonics of the cochlear microphonic (CM) following a calibration of its initial value. We found that the initial OP was dependent on sound level and cochlear sensitivity. From CM generated by a lower sound level of 74 dB SPL, to avoid saturation and suppression of basal-turn cochlear amplification, the OHC OP was at a constant 57% of the maximum transduction current (an ear canal pressure of -0.1 Pa). To perturb the OP, a constant force was applied to the bony shell of the cochlea at the 18 kHz best-frequency location using a blunt probe. The force applied over the scala tympani induced an OP change as if the organ of Corti moved toward the scala vestibuli (SV) direction. During application of the constant force, the second harmonic of the CM partially recovered toward the initial level, which could be described by two time constants. Removing the force induced recovery of the second harmonic to its normal level, described by a single time constant. The force applied over the SV caused an opposite result. These data indicate an active mechanism for the OHC transduction OP.

  17. Assessment on transient sound radiation of a vibrating steel bridge due to traffic loading

    NASA Astrophysics Data System (ADS)

    Zhang, He; Xie, Xu; Jiang, Jiqing; Yamashita, Mikio

    2015-02-01

    Structure-borne noise induced by vehicle-bridge coupling vibration is harmful to human health and the living environment. Investigating the sound pressure level and the radiation mechanism of structure-borne noise is of great significance for the assessment of environmental noise pollution and for noise control. In this paper, the transient noise induced by vehicle-bridge coupling vibration is investigated by employing the hybrid finite element method (FEM) and boundary element method (BEM). The effect of local vibration of the bridge deck is taken into account, and the sound response of the structure-borne noise in the time domain is obtained. The precision of the proposed method is validated by comparing numerical results to the on-site measurements of a steel girder-plate bridge in service. The comparison implies that the sound pressure level and its distribution in both the time and frequency domains may be predicted by the hybrid FEM-BEM approach with satisfactory accuracy. Numerical results indicate that the vibrating steel bridge radiates high-level noise because of its extreme flexibility and large surface area for sound radiation. The impact effect of the vehicle on the sound pressure when leaving the bridge is observed. The shape of the contour lines in the area around the bridge deck could be explained by the mode shapes of the bridge. The moving speed of the vehicle only affects the sound pressure components with frequencies lower than 10 Hz.

  18. DETERMINATION OF THE SPEED OF SOUND ALONG THE HUGONIOT IN A SHOCKED MATERIAL

    DTIC Science & Technology

    2017-04-25

    correctly predict higher speeds of sound for the higher-energy shocked states. The approximations of higher shock pressures diverge progressively... (Figures listed in the report: copper Hugoniot in the pressure-specific-volume plane; copper Hugoniot in the energy-specific-volume plane; comparison between rate of...) ...volume and energy are being used, P = P(v, E). Then, by the chain rule, dP = (∂P/∂v)|_E dv + (∂P/∂E)|_v dE. Dividing by dv...

  19. Sound Propagation in Shallow Water. Volume 2. Unclassified Papers

    DTIC Science & Technology

    1974-11-15

    and 22°. The component of the sound pressure normal to the sea bottom has been received by a movable, motor-driven hydrophone (LC 10...the motor, the operation status of which was controlled by magnetic relays. The total measuring interval was 44.5 cm. ...Then one may hope to learn which criteria make different sea areas acoustically similar. To estimate the hierarchy of the environmental influences, a

  20. Coupling of FM Systems to Individuals with Unilateral Hearing Loss.

    ERIC Educational Resources Information Center

    Kopun, Judy G.; And Others

    1992-01-01

    This study examined the attenuation characteristics of 5 Frequency Modulation system sound delivery options for 25 adults and children (ages 5-13). Degree of ear canal occlusion was a major factor in degree of attenuation. For children with unilateral hearing impairments, the most acoustically appropriate option was the tube-fitting. (Author/JDD)

  1. Automated Speech Intelligibility System for Head-Borne Personal Protective Equipment: Proof of Concept

    DTIC Science & Technology

    2008-04-01

    selected as the listener headform for this effort. The HATS has binaural sound quality microphones inserted into the ear canals and rubber pinnae that... (Appendix residue: word lists and subject responses; MRT Set 1, spoken word in bold: kick, lick, sick, tick, wick, pick; neat, beat, seat, meat.)

  2. 30 CFR 62.101 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... noise dose. For the purposes of this part, the exchange rate is 5 decibels (5 dB). Hearing protector. Any device or material, capable of being worn on the head or in the ear canal, sold wholly or in part on the basis of its ability to reduce the level of sound entering the ear, and which has a...

  3. "Epic Ear Defence"-A Game to Educate Children on the Risks of Noise-Related Hearing Loss.

    PubMed

    Eikelboom, Robert H; Leishman, Natalie F; Munro, Tyler J; Nguyen, Bach; Riggs, Peter R; Tennant, Jonathon; West, Rhiannon K; Robertson, William B

    2012-12-01

    Hearing loss resulting from overexposure to entertainment-related sounds is a modern concern. "Epic Ear Defence" places the player in the three-dimensional environment of the ear canal and challenges the player to defend the ear from various noises, to delay the onset of noise-related hearing loss.

  4. Effects of sounds generated by a dental turbine and a stream on regional cerebral blood flow and cardiovascular responses.

    PubMed

    Mishima, Riho; Kudo, Takumu; Tsunetsugu, Yuko; Miyazaki, Yoshifumi; Yamamura, Chie; Yamada, Yoshiaki

    2004-09-01

    Effects of sound generated by a dental turbine and a small stream (murmur) and the effects of no sound (null, control) on heart rate, systolic and diastolic blood pressure, and hemodynamic changes (oxygenated, deoxygenated, and total hemoglobin concentrations) in the frontal cortex were measured in 18 young volunteers. Questionnaires completed by the volunteers were also evaluated. Near-infrared spectroscopy and the Finapres technique were employed to measure hemodynamic and vascular responses, respectively. The subjects assessed the murmur, null, and turbine sounds as "pleasant," "natural," and "unpleasant," respectively. Blood pressures changed in response to the murmur, null, and turbine sound stimuli as expected: lower than the control level, unchanged, and higher than the control level, respectively. Mean blood pressure values tended to increase gradually over the recording time even during the null sound stimulation, possibly because of the recording environment. Oxygenated hemoglobin concentrations decreased drastically in response to the dental turbine sound, while deoxygenated hemoglobin concentrations remained unchanged and thus total hemoglobin concentrations decreased (due to the decreased oxygenated hemoglobin concentrations). Hemodynamic responses to the murmuring sound and the null sound were slight or unchanged, respectively. Surprisingly, heart rate measurements remained fairly stable in response to the stimulatory noises. In conclusion, we demonstrate here that sound generated by a dental turbine may affect cerebral blood flow and metabolism as well as autonomic responses. Copyright 2004 The Society of the Nippon Dental University

  5. Vocal Imitations of Non-Vocal Sounds

    PubMed Central

    Houix, Olivier; Voisin, Frédéric; Misdariis, Nicolas; Susini, Patrick

    2016-01-01

    Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes at no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long

  6. Sound Environments Surrounding Preterm Infants Within an Occupied Closed Incubator.

    PubMed

    Shimizu, Aya; Matsuo, Hiroya

    2016-01-01

    Preterm infants often exhibit functional disorders due to the stressful environment in the neonatal intensive care unit (NICU). The sound pressure level (SPL) in the NICU is often much higher than the levels recommended by the American Academy of Pediatrics. Our study aims to describe the SPL and sound frequency levels surrounding preterm infants within closed incubators that utilize high-frequency oscillation (HFO) or nasal directional positive airway pressure (nasal-DPAP) respiratory settings. This is a descriptive research study of eight preterm infants (corrected age < 33 weeks) exposed to the equipment when placed in an incubator. The actual noise levels were observed and the results were compared to the recommendations made by neonatal experts. Increased noise levels, which have been reported to affect neonates' ability to self-regulate, could increase the risk of developing attention deficit disorder, and may result in tachycardia, bradycardia, increased intracranial pressure, and hypoxia. The care provider should closely assess for adverse effects of the higher sound levels generated by different modes of respiratory support and take measures to ensure that preterm infants are protected from exposure to noise exceeding the optimal safe levels. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focusing at source locations estimated based on spherical wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations, both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.
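
    A minimal sketch of MVDR (minimum variance distortionless response) weights with a spherical-wave, near-field steering vector, in the spirit of the modification described above: w = R^-1 d / (d^H R^-1 d). The array geometry, source, diagonal loading, and snapshot model are synthetic assumptions, not the paper's configuration.

```python
import numpy as np

# Minimal MVDR sketch with a spherical-wave (near-field) steering vector:
#   w = R^-1 d / (d^H R^-1 d),
# where R is the microphone cross-spectral matrix and d points at a focus
# location. Geometry, frequency, loading and data are synthetic assumptions.

def spherical_steering(mic_xyz, focus_xyz, k):
    r = np.linalg.norm(mic_xyz - focus_xyz, axis=1)
    return np.exp(-1j * k * r) / r

def mvdr_weights(R, d, loading=1e-3):
    Rl = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Example: 16-element line array, one near-field source, MVDR map along a line
rng = np.random.default_rng(6)
f, c = 2000.0, 343.0
k = 2 * np.pi * f / c
mic = np.c_[np.linspace(-0.3, 0.3, 16), np.zeros(16), np.zeros(16)]
src = np.array([0.05, 0.0, 0.25])

d_src = spherical_steering(mic, src, k)
snaps = np.outer(d_src, rng.standard_normal(200) + 1j * rng.standard_normal(200))
snaps += 0.05 * (rng.standard_normal(snaps.shape) + 1j * rng.standard_normal(snaps.shape))
R = snaps @ snaps.conj().T / snaps.shape[1]

xs = np.linspace(-0.2, 0.2, 41)
power = []
for x in xs:
    d = spherical_steering(mic, np.array([x, 0.0, 0.25]), k)
    w = mvdr_weights(R, d)
    power.append((w.conj() @ R @ w).real)
print("peak of MVDR map at x =", xs[int(np.argmax(power))], "m")
```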

  8. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    NASA Astrophysics Data System (ADS)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams which interact to produce sum- and difference-frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite-amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.

  9. Sound Naming in Neurodegenerative Disease

    ERIC Educational Resources Information Center

    Chow, Maggie L.; Brambati, Simona M.; Gorno-Tempini, Maria Luisa; Miller, Bruce L.; Johnson, Julene K.

    2010-01-01

    Modern cognitive neuroscientific theories and empirical evidence suggest that brain structures involved in movement may be related to action-related semantic knowledge. To test this hypothesis, we examined the naming of environmental sounds in patients with corticobasal degeneration (CBD) and progressive supranuclear palsy (PSP), two…

  10. Sound, Noise, and Vibration Control.

    ERIC Educational Resources Information Center

    Yerges, Lyle F.

    This working guide on the principles and techniques of controlling acoustical environment is discussed in the light of human, environmental and building needs. The nature of sound and its variables are defined. The acoustical environment and its many materials, spaces and functional requirements are described, with specific methods for planning,…

  11. Sound Assessment through Proper Policy

    ERIC Educational Resources Information Center

    Chappuis, Stephen J.

    2007-01-01

    Aligning a school board policy manual with the faculty handbook would be an excellent application of systems thinking in support of school district mission and goals. This article talks about changing sound assessment practice in accordance with the school's proper policy. One obstacle to changing assessment practice is the prevailing belief that…

  12. Sound control by temperature gradients

    NASA Astrophysics Data System (ADS)

    Sánchez-Dehesa, José; Angelov, Mitko I.; Cervera, Francisco; Cai, Liang-Wu

    2009-11-01

    This work reports experiments showing that airborne sound propagation can be controlled by temperature gradients. A system of two heated tubes is here used to demonstrate the collimation and focusing of an ultrasonic beam by the refractive index profile created by the temperature gradients existing around the tubes. Numerical simulations supporting the experimental findings are also reported.

  13. Demonstrating Sound Impulses in Pipes.

    ERIC Educational Resources Information Center

    Raymer, M. G.; Micklavzina, Stan

    1995-01-01

    Describes a simple, direct method to demonstrate the effects of the boundary conditions on sound impulse reflections in pipes. A graphical display of the results can be made using a pipe, cork, small hammer, microphone, and fast recording electronics. Explains the principles involved. (LZ)

  14. Rocket ozone sounding network data

    NASA Technical Reports Server (NTRS)

    Wright, D. U.; Krueger, A. J.; Foster, G. M.

    1978-01-01

    During the period December 1976 through February 1977, three regular monthly ozone profiles were measured at Wallops Flight Center, two special soundings were taken at Antigua, West Indies, and at the Churchill Research Range, monthly activities were initiated to establish stratospheric ozone climatology. This report presents the data results and flight profiles for the period covered.

  15. Sound Stories for General Music

    ERIC Educational Resources Information Center

    Cardany, Audrey Berger

    2013-01-01

    Language and music literacy share a similar process of understanding that progresses from sensory experience to symbolic representation. The author identifies Bruner’s modes of understanding as they relate to using narrative in the music classroom to enhance music reading at iconic and symbolic levels. Two sound stories are included for…

  16. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early-blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and only a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  17. Method and Apparatus for Characterizing Pressure Sensors using Modulated Light Beam Pressure

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C. (Inventor)

    2003-01-01

    Embodiments of apparatuses and methods are provided that use light sources instead of sound sources for characterizing and calibrating sensors for measuring small pressures to mitigate many of the problems with using sound sources. In one embodiment an apparatus has a light source for directing a beam of light on a sensing surface of a pressure sensor for exerting a force on the sensing surface. The pressure sensor generates an electrical signal indicative of the force exerted on the sensing surface. A modulator modulates the beam of light. A signal processor is electrically coupled to the pressure sensor for receiving the electrical signal.

  18. What makes for sound science?

    PubMed

    Costa, Fabrizio; Cramer, Grant; Finnegan, E Jean

    2017-11-10

    The inclusive threshold policy for publication in BMC journals, including BMC Plant Biology, means that editorial decisions are largely based on the soundness of the research presented rather than the novelty or potential impact of the work. Here we discuss what is required to ensure that research meets the requirement of scientific soundness. BMC Plant Biology and the other BMC-series journals ( https://www.biomedcentral.com/p/the-bmc-series-journals ) differ in policy from many other journals, as they aim to provide a home for all publishable research. The inclusive threshold policy for publication means that editorial decisions are largely based on the soundness of the research presented rather than the novelty or potential impact of the work. The emphasis on scientific soundness ( http://blogs.biomedcentral.com/bmcseriesblog/2016/12/05/vital-importance-inclusive/ ) rather than novelty or impact is important because it means that manuscripts that may be judged to be of low impact due to the nature of the study, as well as those reporting negative results or that largely replicate earlier studies, all of which can be difficult to publish elsewhere, are available to the research community. Here we discuss the importance of the soundness of research and provide some basic guidelines to assist authors to determine whether their research is appropriate for submission to BMC Plant Biology. Prior to a research article being sent out for review, the handling editor will first determine whether the research presented is scientifically valid. To be valid, the research must address a question of biological significance using suitable methods and analyses, and must follow community-agreed standards relevant to the research field.

  19. Geometric Constraints on Human Speech Sound Inventories

    PubMed Central

    Dunbar, Ewan; Dupoux, Emmanuel

    2016-01-01

    We investigate the idea that the languages of the world have developed coherent sound systems in which having one sound increases or decreases the chances of having certain other sounds, depending on shared properties of those sounds. We investigate the geometries of sound systems that are defined by the inherent properties of sounds. We document three typological tendencies in sound system geometries: economy, a tendency for the differences between sounds in a system to be definable on a relatively small number of independent dimensions; local symmetry, a tendency for sound systems to have relatively large numbers of pairs of sounds that differ only on one dimension; and global symmetry, a tendency for sound systems to be relatively balanced. The finding of economy corroborates previous results; the two symmetry properties have not been previously documented. We also investigate the relation between the typology of inventory geometries and the typology of individual sounds, showing that the frequency distribution with which individual sounds occur across languages works in favor of both local and global symmetry. PMID:27462296

  20. Sound source localization inspired by the ears of the Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Kuntzman, Michael L.; Hall, Neal A.

    2014-07-01

    The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is, in fact, remarkable, as the fly's hearing mechanism spans only 1.5 mm, which is 50× smaller than the wavelength of sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. It has been discovered that evolution has empowered the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of the special fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.

  1. Spatial attenuation of different sound field components in a water layer and shallow-water sediments

    NASA Astrophysics Data System (ADS)

    Belov, A. I.; Kuznetsov, G. N.

    2017-11-01

    The paper presents the results of an experimental study of the spatial attenuation of low-frequency vector-scalar sound fields in shallow water. The experiments employed a towed pneumatic cannon and spatially separated four-component vector-scalar receiver modules. Narrowband analysis of the received signals made it possible to estimate the attenuation coefficients of the first three modes in the frequency range of 26-182 Hz and to calculate the frequency dependences of the sound absorption coefficients in the upper part of the bottom sediments. We analyze the experimental and calculated (using acoustic calibration of the waveguide) laws of the decay of the sound pressure and of the orthogonal vector projections of the oscillation velocity. It is shown that the vertical projection of the oscillation velocity vector decreases significantly faster than the sound pressure field.

  2. Numerical model for the weakly nonlinear propagation of sound through turbulence

    NASA Technical Reports Server (NTRS)

    Lipkens, Bart; Blanc-Benon, Philippe

    1994-01-01

    When finite-amplitude (or intense) sound, such as a sonic boom, propagates through a turbulent atmosphere, the propagation is strongly affected by the turbulence. The interaction between sound and turbulence has mostly been studied as a linear phenomenon, i.e., the nonlinear behavior of the intense sound has been neglected. It has been shown that turbulence has an effect on the perceived loudness of sonic booms, mainly by changing their peak pressure and rise time. Peak pressure and rise time are important factors that determine the loudness of the sonic boom when heard outdoors. However, the interaction between turbulence and nonlinear effects has mostly not been included in propagation studies of sonic booms. It is therefore important to investigate the influence of acoustical nonlinearity on the interaction of intense sound with turbulence.

  3. Uncovering Spatial Variation in Acoustic Environments Using Sound Mapping.

    PubMed

    Job, Jacob R; Myers, Kyle; Naghshineh, Koorosh; Gill, Sharon A

    2016-01-01

    Animals select and use habitats based on environmental features relevant to their ecology and behavior. For animals that use acoustic communication, the sound environment itself may be a critical feature, yet acoustic characteristics are not commonly measured when describing habitats, and as a result, how habitats vary acoustically over space and time is poorly known. Such considerations are timely, given worldwide increases in anthropogenic noise combined with rapidly accumulating evidence that noise hampers the ability of animals to detect and interpret natural sounds. Here, we used microphone arrays to record the sound environment in three terrestrial habitats (forest, prairie, and urban) under ambient conditions and during experimental noise introductions. We mapped sound pressure levels (SPLs) over spatial scales relevant to diverse taxa to explore spatial variation in acoustic habitats and to evaluate the number of microphones needed within arrays to capture this variation under both ambient and noisy conditions. Even at small spatial scales and over relatively short time spans, SPLs varied considerably, especially in forest and urban habitats, suggesting that quantifying and mapping acoustic features could improve habitat descriptions. Subset maps based on input from 4, 8, 12, and 16 microphones differed slightly (< 2 dBA/pixel) from those based on full arrays of 24 microphones under ambient conditions across habitats. Map differences were more pronounced with noise introductions, particularly in forests; maps made from only 4 microphones differed more (> 4 dBA/pixel) from full maps than the remaining subset maps, but maps with input from eight microphones resulted in smaller differences. Thus, acoustic environments varied over small spatial scales, and this variation could be mapped with input from 4-8 microphones. Mapping sound in different environments will improve understanding of acoustic environments and allow us to explore the influence of spatial variation
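
    A minimal sketch of building an SPL map from point measurements and comparing a subset-array map against the full-array map, in the spirit of the analysis above. The microphone positions, synthetic levels, and linear interpolation scheme are assumptions; the study's mapping method may differ.

```python
import numpy as np
from scipy.interpolate import griddata

# Minimal sketch of gridding per-microphone SPL values into a map and
# comparing maps made from a subset of microphones against the full array.
# Positions, levels and the interpolation scheme are illustrative assumptions.

rng = np.random.default_rng(7)
mic_xy = rng.uniform(0, 50, size=(24, 2))                     # 24 microphones in a 50 m plot
spl = 55 + 10 * np.exp(-np.linalg.norm(mic_xy - [25, 25], axis=1) / 20)  # synthetic dBA values

gx, gy = np.meshgrid(np.linspace(0, 50, 101), np.linspace(0, 50, 101))

def spl_map(points, values):
    grid = griddata(points, values, (gx, gy), method="linear")
    fill = griddata(points, values, (gx, gy), method="nearest")   # fill outside the convex hull
    return np.where(np.isnan(grid), fill, grid)

full_map = spl_map(mic_xy, spl)
subset = rng.choice(24, size=8, replace=False)                # an 8-microphone subset
subset_map = spl_map(mic_xy[subset], spl[subset])
diff = np.abs(full_map - subset_map)
print(f"mean |difference|: {diff.mean():.2f} dBA/pixel")
```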

  4. Propagation of Finite Amplitude Sound in Multiple Waveguide Modes.

    NASA Astrophysics Data System (ADS)

    van Doren, Thomas Walter

    1993-01-01

    This dissertation describes a theoretical and experimental investigation of the propagation of finite amplitude sound in multiple waveguide modes. Quasilinear analytical solutions of the full second order nonlinear wave equation, the Westervelt equation, and the KZK parabolic wave equation are obtained for the fundamental and second harmonic sound fields in a rectangular rigid-wall waveguide. It is shown that the Westervelt equation is an acceptable approximation of the full nonlinear wave equation for describing guided sound waves of finite amplitude. A system of first order equations based on both a modal and harmonic expansion of the Westervelt equation is developed for waveguides with locally reactive wall impedances. Fully nonlinear numerical solutions of the system of coupled equations are presented for waveguides formed by two parallel planes which are either both rigid, or one rigid and one pressure release. These numerical solutions are compared to finite-difference solutions of the KZK equation, and it is shown that solutions of the KZK equation are valid only at frequencies which are high compared to the cutoff frequencies of the most important modes of propagation (i.e., for which sound propagates at small grazing angles). Numerical solutions of both the Westervelt and KZK equations are compared to experiments performed in an air-filled, rigid-wall, rectangular waveguide. Solutions of the Westervelt equation are in good agreement with experiment for low source frequencies, at which sound propagates at large grazing angles, whereas solutions of the KZK equation are not valid for these cases. At higher frequencies, at which sound propagates at small grazing angles, agreement between numerical solutions of the Westervelt and KZK equations and experiment is only fair, because of problems in specifying the experimental source condition with sufficient accuracy.
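
    For reference, the standard thermoviscous form of the Westervelt equation, as commonly written in the nonlinear acoustics literature (the dissertation's exact notation and sign conventions may differ):

```latex
% Westervelt equation for the acoustic pressure p, with sound speed c_0,
% ambient density rho_0, sound diffusivity delta, and coefficient of
% nonlinearity beta. Standard literature form; notation assumed.
\nabla^{2} p
  - \frac{1}{c_{0}^{2}} \frac{\partial^{2} p}{\partial t^{2}}
  + \frac{\delta}{c_{0}^{4}} \frac{\partial^{3} p}{\partial t^{3}}
  = - \frac{\beta}{\rho_{0} c_{0}^{4}} \frac{\partial^{2} p^{2}}{\partial t^{2}}
```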

  5. Urban sound energy reduction by means of sound barriers

    NASA Astrophysics Data System (ADS)

    Iordache, Vlad; Ionita, Mihai Vlad

    2018-02-01

    In the urban environment, various heating, ventilation and air conditioning appliances designed to maintain indoor comfort become vectors of urban acoustic pollution due to the sound energy produced by this equipment. Acoustic barriers are the recommended method for sound energy reduction in the urban environment. The current sizing method for these acoustic barriers is laborious and is not practical for arbitrary 3D locations of the noisy equipment and the reception point. In this study we develop, based on the same method, a new simplified tool for acoustic barrier sizing that maintains the precision of the classical method. Abacuses (design charts) for acoustic barrier sizing are built that can be used for different 3D locations of the source and reception points, for several frequencies, and for several acoustic barrier heights. The case study presented in the article confirms the rapidity and ease of use of these abacuses in the design of acoustic barriers.
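
    A minimal sketch of the classical Maekawa estimate of barrier insertion loss from the Fresnel number, for an arbitrary 3D source, receiver, and barrier-edge geometry. This is the standard textbook approximation, not necessarily the chart (abacus) method developed in the article; the geometry below is hypothetical.

```python
import numpy as np

# Minimal Maekawa sketch: barrier insertion loss from the Fresnel number
# N = 2*delta*f/c, where delta is the path-length detour over the barrier
# edge. Standard textbook approximation; geometry below is hypothetical.

def maekawa_insertion_loss(src, rcv, barrier_top, freq, c=343.0):
    """Insertion loss (dB); negative N (unobstructed path) is treated as 0 dB,
    a simplification of the usual Maekawa chart."""
    src, rcv, top = map(np.asarray, (src, rcv, barrier_top))
    direct = np.linalg.norm(rcv - src)
    detour = np.linalg.norm(top - src) + np.linalg.norm(rcv - top)
    N = 2.0 * (detour - direct) * freq / c
    return 10.0 * np.log10(3.0 + 20.0 * N) if N > 0 else 0.0

# Example: rooftop unit at 1.5 m height, receiver 10 m away, 2.5 m tall barrier between them
src = [0.0, 0.0, 1.5]
rcv = [10.0, 0.0, 1.5]
top = [3.0, 0.0, 2.5]       # diffraction point at the barrier edge
for f in (125, 250, 500, 1000, 2000):
    print(f, "Hz:", round(maekawa_insertion_loss(src, rcv, top, f), 1), "dB")
```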

  6. Dredged Material Management in Long Island Sound

    EPA Pesticide Factsheets

    Information on Western and Central Long Island Sound Dredged Material Disposal Sites including the Dredged Material Management Plan and Regional Dredging Team. Information regarding the Eastern Long Island Sound Selected Site including public meetings.

  7. Cirrus Cloud Retrieval Using Infrared Sounding Data: Multilevel Cloud Errors.

    NASA Astrophysics Data System (ADS)

    Baum, Bryan A.; Wielicki, Bruce A.

    1994-01-01

    In this study we perform an error analysis for cloud-top pressure retrieval using the High-Resolution Infrared Radiometric Sounder (HIRS/2) 15-µm CO2 channels for the two-layer case of transmissive cirrus overlying an overcast, opaque stratiform cloud. This analysis includes standard deviation and bias error due to instrument noise and the presence of two cloud layers, the lower of which is opaque. Instantaneous cloud pressure retrieval errors are determined for a range of cloud amounts (0.1–1.0) and cloud-top pressures (850–250 mb). Large cloud-top pressure retrieval errors are found to occur when a lower opaque layer is present underneath an upper transmissive cloud layer in the satellite field of view (FOV). Errors tend to increase with decreasing upper-cloud effective cloud amount and with decreasing cloud height (increasing pressure). Errors in retrieved upper-cloud pressure result in corresponding errors in derived effective cloud amount. For the case in which a HIRS FOV has two distinct cloud layers, the difference between the retrieved and actual cloud-top pressure is positive in all cases, meaning that the retrieved upper-cloud height is lower than the actual upper-cloud height. In addition, errors in retrieved cloud pressure are found to depend upon the lapse rate between the low-level cloud top and the surface. We examined which sounder channel combinations would minimize the total errors in derived cirrus cloud height caused by instrument noise and by the presence of a lower-level cloud. We find that while the sounding channels that peak between 700 and 1000 mb minimize random errors, the sounding channels that peak at 300–500 mb minimize bias errors. For a cloud climatology, the bias errors are most critical.
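
    Retrievals of this kind rest on the CO2-slicing principle: the cloud-top pressure is taken as the candidate pressure whose forward-modelled two-channel (cloudy minus clear) radiance ratio best matches the measured ratio. The sketch below shows only that selection step, with a made-up placeholder curve standing in for the radiative-transfer calculation; it is not the authors' implementation.

```python
import numpy as np

def co2_slicing_pressure(ratio_meas, p_grid, ratio_model):
    """Pick the cloud-top pressure whose modelled channel-pair ratio
    best matches the measured (cloudy - clear) radiance ratio.
    ratio_model[i] is the forward-model ratio for a single cloud at
    p_grid[i]; in practice it comes from a radiative-transfer
    calculation along the temperature/humidity sounding."""
    i = np.argmin(np.abs(ratio_model - ratio_meas))
    return p_grid[i]

# Placeholder inputs: a candidate pressure grid and an invented
# monotonic curve standing in for the radiative-transfer result
p_grid = np.linspace(250.0, 850.0, 121)            # hPa
ratio_model = 0.2 + 0.8 * (p_grid - 250.0) / 600.0
print(co2_slicing_pressure(0.55, p_grid, ratio_model))  # grid point near 510 hPa
```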

  8. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies.

  9. Aerodynamic sound of flow past an airfoil

    NASA Technical Reports Server (NTRS)

    Wang, Meng

    1995-01-01

    Reynolds number of 10^4. The far-field noise is computed using Curle's extension to the Lighthill analogy (Curle 1955). An effective method for separating the physical noise source from spurious boundary contributions is developed. This allows an accurate evaluation of the Reynolds stress volume quadrupoles, in addition to the more readily computable surface dipoles due to the unsteady lift and drag. The effect of noncompact source distribution on the far-field sound is assessed using an efficient integration scheme for the Curle integral, with full account of retarded-time variations. The numerical results confirm in quantitative terms that the far-field sound is dominated by the surface pressure dipoles at low Mach number. The techniques developed are applicable to a wide range of flows, including jets and mixing layers, where the Reynolds stress quadrupoles play a prominent or even dominant role in the overall sound generation.
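
    In the low-Mach-number, acoustically compact limit noted at the end of the abstract, Curle's surface-dipole term reduces to a far-field pressure proportional to the retarded-time derivative of the unsteady force the body exerts on the fluid, p'(r, t) ~ cos(theta)/(4*pi*c*r) * dF/dt evaluated at t - r/c. The sketch below illustrates only this compact-dipole limit with an assumed lift history and observer geometry; it is not the noncompact Curle integration used in the work.

```python
import numpy as np

def compact_dipole_pressure(t, force, obs_dist, obs_cos, c=343.0):
    """Far-field pressure from the compact-dipole limit of Curle's
    result: p'(r, t) ~ cos(theta)/(4*pi*c*r) * dF/dt at the retarded
    time t - r/c, where F(t) is the unsteady force on the fluid."""
    dFdt = np.gradient(force, t)        # time derivative of the force (N/s)
    tau = t - obs_dist / c              # retarded (emission) times
    dFdt_ret = np.interp(tau, t, dFdt, left=0.0, right=0.0)
    return obs_cos / (4.0 * np.pi * c * obs_dist) * dFdt_ret

# Hypothetical example: 100 N lift fluctuation at 200 Hz, observer 10 m
# away along the lift direction (cos(theta) = 1)
t = np.linspace(0.0, 0.1, 4001)
lift = 100.0 * np.sin(2.0 * np.pi * 200.0 * t)
p_far = compact_dipole_pressure(t, lift, obs_dist=10.0, obs_cos=1.0)
```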

  10. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...

  11. 47 CFR 74.603 - Sound channels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Sound channels. 74.603 Section 74.603... Stations § 74.603 Sound channels. (a) The frequencies listed in § 74.602(a) may be used for the simultaneous transmission of the picture and sound portions of TV broadcast programs and for cue and order...

  12. 47 CFR 74.603 - Sound channels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Sound channels. 74.603 Section 74.603... Stations § 74.603 Sound channels. (a) The frequencies listed in § 74.602(a) may be used for the simultaneous transmission of the picture and sound portions of TV broadcast programs and for cue and order...

  13. 47 CFR 74.603 - Sound channels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Sound channels. 74.603 Section 74.603... Stations § 74.603 Sound channels. (a) The frequencies listed in § 74.602(a) may be used for the simultaneous transmission of the picture and sound portions of TV broadcast programs and for cue and order...

  14. Sound-Symbolism Boosts Novel Word Learning

    ERIC Educational Resources Information Center

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which is representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  15. Evaluating Warning Sound Urgency with Reaction Times

    ERIC Educational Resources Information Center

    Suied, Clara; Susini, Patrick; McAdams, Stephen

    2008-01-01

    It is well-established that subjective judgments of perceived urgency of alarm sounds can be affected by acoustic parameters. In this study, the authors investigated an objective measurement, the reaction time (RT), to test the effectiveness of temporal parameters of sounds in the context of warning sounds. Three experiments were performed using a…

  16. Sound production in the clownfish Amphiprion clarkii.

    PubMed

    Parmentier, Eric; Colleye, Orphal; Fine, Michael L; Frédérich, Bruno; Vandewalle, Pierre; Herrel, Anthony

    2007-05-18

    Although clownfish sounds were recorded as early as 1930, the mechanism of sound production has remained obscure. Yet, clownfish are prolific "singers" that produce a wide variety of sounds, described as "chirps" and "pops" in both reproductive and agonistic behavioral contexts. Here, we describe the sonic mechanism of the clownfish Amphiprion clarkii.

  17. A Lexical Analysis of Environmental Sound Categories

    ERIC Educational Resources Information Center

    Houix, Olivier; Lemaitre, Guillaume; Misdariis, Nicolas; Susini, Patrick; Urdapilleta, Isabel

    2012-01-01

    In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second…

  18. Bubbles That Change the Speed of Sound

    ERIC Educational Resources Information Center

    Planinsic, Gorazd; Etkina, Eugenia

    2012-01-01

    The influence of bubbles on sound has long attracted the attention of physicists. In his 1920 book Sir William Bragg described sound absorption caused by foam in a glass of beer tapped by a spoon. Frank S. Crawford described and analyzed the change in the pitch of sound in a similar experiment and named the phenomenon the "hot chocolate effect."…
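
    The pitch drop in such demonstrations reflects how strongly a small volume fraction of bubbles lowers the low-frequency sound speed of the mixture. As a hedged illustration (not taken from the article; the water and air properties are assumed values), Wood's classical formula combines volume-weighted density and compressibility:

```python
import math

def wood_sound_speed(phi, rho_l=998.0, c_l=1482.0, rho_g=1.2, c_g=343.0):
    """Low-frequency sound speed (m/s) in a bubbly liquid with gas void
    fraction phi, from Wood's equation: the mixture density and
    compressibility are volume-weighted averages of the two phases."""
    kappa_l = 1.0 / (rho_l * c_l ** 2)      # liquid compressibility (1/Pa)
    kappa_g = 1.0 / (rho_g * c_g ** 2)      # gas compressibility (1/Pa)
    rho_m = phi * rho_g + (1.0 - phi) * rho_l
    kappa_m = phi * kappa_g + (1.0 - phi) * kappa_l
    return 1.0 / math.sqrt(rho_m * kappa_m)

# Even a small void fraction collapses the mixture sound speed
for phi in (0.0, 0.001, 0.01, 0.1):
    print(f"void fraction {phi:5.3f}: c = {wood_sound_speed(phi):7.1f} m/s")
```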

  19. The Early Years: Becoming Attuned to Sound

    ERIC Educational Resources Information Center

    Ashbrook, Peggy

    2014-01-01

    Exploration of making and changing sounds is part of the first-grade performance expectation 1-PS4-1, "Plan and conduct investigations to provide evidence that vibrating materials can make sound and that sound can make materials vibrate" (NGSS Lead States 2013, p. 10; see Internet Resource). Early learning experiences build toward…

  20. 33 CFR 62.47 - Sound signals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Sound signals. 62.47 Section 62... UNITED STATES AIDS TO NAVIGATION SYSTEM The U.S. Aids to Navigation System § 62.47 Sound signals. (a) Often sound signals are located on or adjacent to aids to navigation. When visual signals are obscured...